
Server Details

BGG MCP provides access to the BoardGameGeek API through the Model Context Protocol, enabling retr…

Status: Healthy
Transport: Streamable HTTP
Repository: kkjdaniel/bgg-mcp
GitHub Stars: 21


Tool Definition Quality


Available Tools

10 tools
bgg-collection (Grade C)

Query a user's board game collection on BoardGameGeek (BGG). Returns all matching games by default with basic info (name, ID, rating, plays, status). Use the filter parameters to narrow results.

Parameters (JSON Schema)

- username (required): BGG username. When the user refers to themselves, use 'SELF'.
- owned, rated, played, subtype, fortrade, hasparts, wishlist, wanttobuy, preordered, wanttoplay (optional; no schema description)
- minplays, maxplays, minrating, maxrating, minbggrating, maxbggrating (optional; no schema description)
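A hypothetical sketch of how a client might assemble arguments for this tool. The parameter names come from the schema above; the `build_collection_args` helper and the `tools/call` payload shape are illustrative assumptions, not part of this server's documented API.

```python
def build_collection_args(username, **filters):
    """Return a tools/call arguments dict, dropping unset filters."""
    if not username:
        raise ValueError("username is required; use 'SELF' for the current user")
    args = {"username": username}
    # Include only the filters the caller actually set, e.g. owned=True, minrating=7.
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

# Assumed MCP-style request envelope for a filtered collection lookup.
payload = {
    "method": "tools/call",
    "params": {
        "name": "bgg-collection",
        "arguments": build_collection_args("SELF", owned=True, minrating=7),
    },
}
```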
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates default behavior (returns all matching games) and enumerates return fields (name, ID, rating, plays, status), providing useful output transparency. However, it lacks operational details such as rate limiting, caching behavior, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three concise sentences totaling under 40 words, with the purpose clearly stated in the opening sentence. While efficiently structured and front-loaded, the third sentence ('Use the filter parameters...') is too vague to fully earn its place given the complexity of the filtering options available.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 17 parameters with minimal schema documentation and no output schema, the description is incomplete regarding parameter semantics. While it partially compensates by describing the return values (name, ID, rating, plays, status), it leaves the majority of the filtering capability unexplained and provides no guidance on error conditions or edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 6%, leaving 16 of 17 parameters undocumented in the schema. The description mentions 'filter parameters' broadly but fails to explain the semantics of critical filters like owned, wishlist, rated, or the rating range parameters (minrating, maxrating). Given the severe schema coverage gap, the description provides inadequate compensatory documentation.
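The 6% figure follows from the schema above, where only `username` carries a description:

```python
# 1 documented parameter (username) out of 17 total in the bgg-collection schema.
documented, total = 1, 17
coverage = round(documented / total * 100)  # 6
```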

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Query') and resource ('user's board game collection'), clearly distinguishing it from sibling tools like bgg-search (general game search) and bgg-user (user profile data). However, it does not explicitly differentiate from bgg-details, which retrieves specific game information rather than collection lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Use the filter parameters to narrow results,' which hints at functionality but provides no explicit guidance on when to use this tool versus alternatives like bgg-search or bgg-user. It fails to specify prerequisites (such as requiring a valid public BGG username) or scenarios where this tool would be inappropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-details (Grade A)

Get detailed information about board games on BoardGameGeek (BGG) including description, mechanics, categories, player count, playtime, complexity, and ratings. Use this tool to deep dive into games found via other tools (e.g. after getting collection results or search results that only return basic info). Use 'name' for a single game lookup by name, 'id' for a single game lookup by BGG ID, or 'ids' to fetch multiple games at once (up to 20). Only provide one of these parameters.

Parameters (JSON Schema)

- id (optional): The BoardGameGeek ID of a single board game.
- ids (optional): Array of BoardGameGeek IDs for fetching multiple games (max 20).
- name (optional): The name of the board game to look up.
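The "only provide one of these parameters" rule can be enforced client-side. This guard is a hypothetical sketch; the server's actual error behavior for conflicting parameters is not documented.

```python
def pick_details_args(name=None, id=None, ids=None):
    """Validate the bgg-details mutual-exclusivity and batch-size rules."""
    # 'id' intentionally mirrors the schema's parameter name, shadowing the builtin.
    provided = {k: v for k, v in {"name": name, "id": id, "ids": ids}.items()
                if v is not None}
    if len(provided) != 1:
        raise ValueError("provide exactly one of 'name', 'id', or 'ids'")
    if ids is not None and len(ids) > 20:
        raise ValueError("'ids' accepts at most 20 BGG IDs per call")
    return provided
```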
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the mutual exclusivity constraint (crucial behavioral rule), the batch limit of 20 games, and enumerates return fields (description, mechanics, etc.) to compensate for the missing output schema. Minor gap on rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) capability and return fields, (2) contextual usage vs siblings, (3) parameter selection guidance, (4) mutual exclusivity constraint. Information is front-loaded and logically sequenced from 'what' to 'when' to 'how.'

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter lookup tool with 100% schema coverage and no output schema, the description adequately compensates by listing return fields and explaining the single-vs-batch lookup patterns. Would benefit from explicit mention of error handling or rate limiting to achieve a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While input schema has 100% description coverage (baseline 3), the description adds significant value by clarifying the mutual exclusivity rule ('Only provide one') and mapping usage patterns ('Use 'name' for a single game lookup... 'ids' to fetch multiple games'). This contextual guidance prevents invocation errors.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get detailed information') and resource ('board games on BoardGameGeek'), listing specific data fields returned (mechanics, categories, ratings, etc.). It clearly distinguishes from siblings by positioning this as a 'deep dive' tool to use after 'search results that only return basic info.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('to deep dive into games found via other tools'), gives concrete examples ('after getting collection results or search results'), and mandates critical constraints ('Only provide one of these parameters'). It effectively maps the workflow across the tool family.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-hot (Grade B)

Find the current trending board game hotness on BoardGameGeek (BGG)

Parameters (JSON Schema)

No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Find' implies a read-only operation, the description fails to disclose what data structure is returned (list of games, rankings, IDs), quantity limits, or rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence that immediately conveys the tool's function without redundancy or filler. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of parameters, the description adequately covers inputs. However, with no output schema provided, the description should ideally characterize the returned data (e.g., 'returns ranked list of trending games') to be complete. As is, it is minimally viable but leaves usage gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation rules, 0 parameters establishes a baseline score of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Find') and clearly identifies the resource ('current trending board game hotness on BoardGameGeek'). However, it does not explicitly distinguish this discovery-oriented tool from siblings like 'bgg-search' (find specific games) or 'bgg-recommender'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not indicate whether to use this for discovery versus searching for specific titles, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-price (Grade A)

Get current prices for board games from multiple retailers using BGG IDs

Parameters (JSON Schema)

- ids (required): Comma-separated BGG IDs (e.g., '12,844,2096,13857')
- currency (optional): Currency code (default: USD)
- destination (optional): Destination country (default: US)
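Note that `ids` here is a comma-separated string, not an array as in bgg-details (per the schema's '12,844,2096,13857' example). A small hypothetical helper to build the arguments from integers:

```python
def price_args(ids, currency="USD", destination="US"):
    """Build bgg-price arguments; defaults mirror those stated in the schema."""
    return {
        "ids": ",".join(str(i) for i in ids),  # e.g. "12,844"
        "currency": currency,        # currency code, e.g. "USD", "EUR"
        "destination": destination,  # destination country, e.g. "US", "GB"
    }
```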
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds 'current' (implies real-time data) and 'multiple retailers' (indicates aggregated sources), but lacks disclosure on error behavior (invalid IDs), rate limits, caching behavior, or authentication requirements. Safe but incomplete behavioral coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with action verb, immediately identifies domain (board games), scope (current prices), sources (multiple retailers), and critical input constraint (BGG IDs). Every phrase earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a 3-parameter retrieval tool with 100% schema coverage. Absence of output schema is acceptable given the intuitive return type (price data), though explicit mention of return structure (e.g., 'returns price data per retailer') would strengthen completeness. Adequate for tool complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description mentions 'BGG IDs' which aligns with the 'ids' parameter, but adds no additional semantic context for 'currency' or 'destination' beyond what the schema already documents. No clarification on why destination matters for pricing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('Get'), clear resource ('current prices for board games'), data source ('multiple retailers'), and key input requirement ('using BGG IDs'). Clearly distinguishes from siblings like bgg-search, bgg-details, and bgg-collection by focusing exclusively on commercial pricing data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisite of having BGG IDs ('using BGG IDs'), suggesting users need IDs beforehand, but does not explicitly state when to use this vs. bgg-search (to find IDs) or clarify workflow. No explicit 'when-not-to-use' or alternative guidance provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-recommender (Grade A)

Get game recommendations similar to a specific board game. Provide either 'name' or 'id', not both.

Parameters (JSON Schema)

- id (optional): BGG ID of the game to base recommendations on.
- name (optional): Name of the game to base recommendations on.
- min_votes (optional): Minimum votes threshold (default: 30)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only reveals the input mutual exclusivity constraint. It fails to disclose output format (what fields are returned?), similarity methodology, read-only status, or the impact of min_votes on result quality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first establishes purpose immediately, and the second provides the essential constraint. No fluff or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately covers the input requirements but leaves significant gaps regarding return value structure, recommendation ranking logic, and error conditions (e.g., what happens if neither name nor id is provided?).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds critical semantic information not present in the schema: the mutual exclusivity constraint between 'name' and 'id' parameters, which prevents invalid invocation patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get game recommendations'), resource ('board game'), and relationship ('similar to'), effectively distinguishing it from sibling tools like bgg-search (which finds by query) and bgg-details (which retrieves specific game data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an important constraint ('Provide either 'name' or 'id', not both'), but lacks explicit guidance on when to prefer this tool over alternatives like bgg-search or bgg-hot for discovery use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-rules (Grade A)

Search BoardGameGeek rules forums to find answers and clarifications from the community. Use when users ask rules questions about board games.

Parameters (JSON Schema)

- id (optional): The BGG ID of the board game.
- name (optional): The name of the board game.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the community-sourced nature of the data ('from the community'), but omits behavioral details like rate limits, search fuzziness, or how it handles the optional ID vs name parameters (both are optional per schema, but the description doesn't clarify usage patterns).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two efficient sentences with zero waste: the first establishes purpose, the second establishes usage context. It is appropriately front-loaded and sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's straightforward purpose and complete input schema, the description adequately covers the primary use case. It could be improved by clarifying the optional parameter relationship (ID vs name) or describing the return format (forum threads vs posts), but it is sufficient for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing the baseline score of 3. The description does not add semantic context beyond the schema (e.g., explaining that both parameters are optional or when to prefer ID over name), but it doesn't need to compensate for missing schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Search'), resource ('BoardGameGeek rules forums'), and outcome ('find answers and clarifications from the community'). It clearly distinguishes this from siblings like bgg-search (general) and bgg-details (game metadata) by specifying the 'rules forums' scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit positive guidance ('Use when users ask rules questions about board games'). However, it lacks negative guidance or explicit alternatives (e.g., when to use bgg-details instead for official rulebooks vs community interpretations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-thread-details (Grade A)

Get full content of a specific BoardGameGeek forum thread, including all posts and replies. Use this after finding relevant threads with bgg-rules.

Parameters (JSON Schema)

- thread_id (required): The BoardGameGeek thread ID to fetch
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses scope behavior ('full content... all posts and replies') indicating a comprehensive fetch rather than summary. However, it lacks disclosure of read-only safety, rate limits, pagination behavior for long threads, or error handling when thread_id is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total, front-loaded with the action verb. First sentence establishes purpose and scope; second provides workflow context. No redundant or filler content—every word serves the selection decision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter retrieval tool with simple input structure and no output schema, the description adequately covers the invocation context by mentioning the return content type (posts/replies) and workflow position. Could be improved by hinting at output structure or size limits, but sufficient for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with thread_id already described as 'The BoardGameGeek thread ID to fetch'. The description references 'specific... thread' which aligns with the parameter but adds no additional syntax, format examples, or constraints beyond the schema definition. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with clear resource 'BoardGameGeek forum thread' and scope 'full content... including all posts and replies'. It distinguishes from sibling search tools by specifying this retrieves content of a specific thread ID rather than searching or listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states workflow dependency: 'Use this after finding relevant threads with bgg-rules.' This provides clear guidance on when to invoke the tool relative to its sibling bgg-rules, establishing the correct sequence for forum thread analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-trade-finder (Grade A)

Find what games user1 owns that user2 has on their wishlist. Shows potential trading opportunities.

Parameters (JSON Schema)

- user1 (required): BGG username whose collection will be checked. Use 'SELF' for yourself.
- user2 (required): BGG username whose wishlist will be checked against user1's collection
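The matching logic the description implies (games user1 owns that appear on user2's wishlist) amounts to a set intersection. A local sketch under that assumption; the server presumably computes the equivalent against live BGG collection data:

```python
def find_trades(user1_owned, user2_wishlist):
    """Return user2's wishlist entries whose BGG ID is in user1's collection."""
    owned_ids = {game["id"] for game in user1_owned}
    return [game for game in user2_wishlist if game["id"] in owned_ids]
```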
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It explains the matching logic (owns vs wishlist) but fails to disclose read-only status, error handling (invalid usernames), or return format (list of game IDs vs full details).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. The first states the core function immediately; the second states the value proposition. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and no output schema, the description adequately covers the input contract. However, it could briefly mention the return structure (e.g., 'returns matching game titles') since no output schema exists to document the response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the directional relationship (user1's collection vs user2's wishlist) but does not add syntax details, validation rules, or emphasize the 'SELF' convention beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Find') and clearly identifies the resource (games user1 owns vs user2's wishlist). It distinguishes itself from siblings like bgg-collection (simple lookup) and bgg-search by explicitly targeting cross-user trade matching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Shows potential trading opportunities' implies the use case, but there is no explicit guidance on when to use this versus bgg-collection (e.g., 'Use this when you want to find trades between two specific users rather than viewing a single collection').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bgg-user (Grade C)

Find details about a specific user on BoardGameGeek (BGG)

Parameters (JSON Schema)

- username (required): BGG username. When the user refers to themselves, use 'SELF'.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Fails to disclose error behavior (what happens if user doesn't exist?), rate limits, authentication requirements, or whether the operation is idempotent. 'Find' implies read-only but doesn't confirm safety explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb and resource. No redundant phrases or structural waste. Appropriate length for a simple single-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 parameter, no nested objects), the description is minimally sufficient but leaves gaps. Without an output schema, it fails to hint at the return structure (JSON object? XML? specific fields like 'firstname', 'lastlogin'?), which would help the agent understand if this meets its information needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the parameter is fully documented in structured form. Description mentions 'specific user' which loosely maps to the username parameter but adds no semantic value beyond the schema's explanation. Baseline 3 appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the resource (BGG user) and general action ('Find'), but uses vague terminology. 'Find' ambiguously suggests search (conflicting with sibling bgg-search) rather than retrieval, and 'details' fails to specify what data is returned (profile, stats, preferences?). Lacks scope specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus siblings like bgg-collection (which also returns user-associated data) or bgg-search. No mention of prerequisites like username format or visibility restrictions. The 'SELF' keyword hint exists only in the schema, not the description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
