
Server Details

Boardgames MCP — wraps Board Game Atlas API (public demo client_id, free)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-boardgames
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
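
Because the transport is Streamable HTTP, any JSON-RPC 2.0 client can talk to this server directly. The sketch below lists the server's tools with a plain POST; the endpoint URL is a placeholder for the server's actual URL, and the Accept header follows the MCP Streamable HTTP convention of offering both JSON and SSE responses.

```python
import requests  # third-party: pip install requests

# Placeholder endpoint; substitute the server's actual Streamable HTTP URL.
MCP_URL = "https://example.com/mcp"

# JSON-RPC 2.0 request asking the server to enumerate its tools.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
headers = {
    # Streamable HTTP clients advertise both plain JSON and SSE responses.
    "Accept": "application/json, text/event-stream",
    "Content-Type": "application/json",
}

resp = requests.post(MCP_URL, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.text)  # a JSON result or an SSE stream, depending on the server
```

A real session would also begin with an initialize handshake and carry the returned session identifier on later calls; the sketch skips that for brevity.
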
Tool Descriptions (Grade: A)

Average 4.1/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)
Disambiguation: 3/5

The tools have mixed purposes: three are for board game queries (get_game, hot_games, search_games), while the others are meta-tools for memory and tool discovery. The board game tools are distinct enough (specific ID, popularity ranking, text search), but the meta-tools like ask_pipeworx and discover_tools overlap in scope (both help find or use tools) and could cause confusion.

Naming Consistency: 4/5

Tool names mostly follow a verb_noun pattern (e.g., search_games, get_game, remember, recall), which is consistent. However, 'hot_games' breaks this pattern (adjective_noun) and 'ask_pipeworx' uses a proper noun. Overall, the naming is clear and mostly consistent.

Tool Count: 4/5

With 8 tools, the count is appropriate for a board game server that also includes memory utilities and a meta-query tool. It's slightly on the higher side for a domain-specific server, but not excessive.

Completeness: 3/5

The board game tools cover lookup by ID, search by name, and hot games, but lack create, update, or delete operations, which is acceptable for a read-only data source. However, the inclusion of memory and meta-tools (ask_pipeworx, discover_tools) seems out of scope for a board game server, making the tool set feel incomplete for a dedicated board game purpose.

Available Tools

8 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
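
As a concrete illustration, the request body for invoking ask_pipeworx might look like the sketch below. The envelope is plain JSON-RPC 2.0 ("tools/call") rather than any particular client SDK, and the question is taken from the examples in the tool's own description.

```python
import json

# Sketch: JSON-RPC 2.0 envelope for an MCP tools/call of ask_pipeworx.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        # Example question quoted from the tool description above.
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```
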
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes that Pipeworx picks the right tool, fills arguments, and returns the result, which explains its internal behavior beyond the schema. With no annotations, this is valuable context, though it could also mention limitations or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise and front-loaded: first sentence states purpose, second explains behavior, third gives examples. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 required param, no output schema), the description is nearly complete. It explains what the tool does and how to use it, though it could briefly mention possible limitations or that it returns text.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the single parameter 'question' with a clear description. The description adds usage examples but no additional parameter semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts plain English questions and returns answers by selecting the best data source. It distinguishes itself from siblings like search_games or discover_tools by emphasizing natural language and automatic tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'no need to browse tools or learn schemas — just describe what you need,' and provides concrete examples (trade deficit, adverse events, 10-K filing). This clarifies when to use this tool vs. other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
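
A sketch of building the discover_tools arguments, clamping limit to the documented 1-50 range on the client side; how the server itself treats out-of-range values is not documented, so the clamp is a defensive assumption. The example query is quoted from the parameter schema.

```python
import json

def discover_tools_args(query: str, limit: int = 20) -> dict:
    """Build discover_tools arguments; clamp limit to the documented max of 50."""
    return {"query": query, "limit": max(1, min(limit, 50))}

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        # Query quoted from the schema examples; 75 gets clamped to 50.
        "arguments": discover_tools_args("find trade data between countries", limit=75),
    },
}
print(json.dumps(request, indent=2))
```
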
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the behavior as a search returning the most relevant tools with names and descriptions. It does not mention potential side effects or auth needs, but for a search tool this is acceptable. It lacks detail on whether it mutates state, though read-only behavior is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a purpose: first states action, second describes returns, third gives usage guidance. No fluff. Information is front-loaded with the key purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low parameter count (2), full schema coverage, no output schema, and no annotations, description is complete enough. It explains what the tool does and when to use it. Could mention that results include tool descriptions to aid selection, but that's already implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds value by explaining the query parameter as 'Natural language description' and gives examples, and mentions default and max for limit. This enhances understanding beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Search' and resource 'Pipeworx tool catalog', specifying the tool returns relevant tools with names and descriptions. It distinguishes from siblings by indicating it is for finding tools among many, while siblings like search_games and get_game focus on games.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing clear when-to-use guidance. No explicit when-not-to-use, but the context of 500+ tools implies it's for initial discovery before using other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
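
Because the description does not say whether deletion is permanent or fails silently on a missing key, a cautious client might recall the key first and only then forget it. A sketch of that sequence; the tool_call helper is hypothetical, and the key is borrowed from remember's schema examples.

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> str:
    """Assemble a JSON-RPC 2.0 tools/call request body (hypothetical helper)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Defensive pattern: confirm the key exists before deleting it.
print(tool_call(1, "recall", {"key": "subject_property"}))
print(tool_call(2, "forget", {"key": "subject_property"}))
```
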
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It clearly states the action (delete) and that it operates by key, but does not disclose side effects (e.g., whether deletion is permanent, confirmation steps, or if it fails silently). Adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, six words, zero waste. Front-loaded with action and resource. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given only 1 required parameter, no output schema, and simple semantics, the description is sufficient for a straightforward delete operation. It lacks behavioral details but is complete for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (key is described as 'Memory key to delete'). The description does not add new meaning beyond the schema; it restates the purpose. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the verb 'Delete' and specifies the resource 'stored memory', with 'by key' clarifying the scope. It is clear and distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives, but it is clear that this is for deletion while siblings like 'recall' and 'remember' are for retrieval and storage. No when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_game (Grade: A)

Get full details for a specific board game by ID (from search_games results). Returns name, year, players, playtime, description, rating, publisher, designer, and price.

Parameters (JSON Schema)
  id (required): Board Game Atlas game ID (e.g. "OIXt3DmJU0" for Catan)
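
The id is meant to come from search_games results; the sketch below simply reuses the Catan example quoted in the parameter schema.

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_game",
        # "OIXt3DmJU0" is the Catan example from the parameter schema.
        "arguments": {"id": "OIXt3DmJU0"},
    },
}
print(json.dumps(request, indent=2))
```
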
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the tool returns specific fields, which is helpful, but does not disclose whether it may return null values, how errors are handled (e.g., invalid ID), or any rate limits. This is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with purpose, and each sentence adds value. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one simple parameter and no output schema, the description sufficiently explains what is returned. It is complete enough for this low complexity tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the parameter is already documented in the schema. The description does not add additional meaning beyond what the schema provides (e.g., format or validation rules), so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'full details for a specific board game', and specifies it uses a Board Game Atlas ID, which distinguishes it from sibling tools like search_games or hot_games.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when you have a specific game ID), but does not explicitly state when not to use it or mention alternatives. Sibling tools like search_games could be mentioned for lookup by name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hot_games (Grade: A)

Get the most popular board games ranked by current buzz. Returns title, year, player count, playtime, rating, and rank. Use this to discover trending games.

Parameters (JSON Schema)
  limit (optional): Number of results to return (1–100, default 10)
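
Since limit is optional with a documented default of 10, here is a sketch of how a client might either omit it or clamp an explicit value to the documented 1-100 range (the clamp is a defensive assumption, not documented server behavior):

```python
import json

def hot_games_args(limit: int | None = None) -> dict:
    """Omit limit to accept the server default of 10; otherwise clamp to 1-100."""
    if limit is None:
        return {}
    return {"limit": max(1, min(limit, 100))}

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "hot_games", "arguments": hot_games_args(25)},
}
print(json.dumps(request, indent=2))
```
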
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses read-only nature (no side effects), but does not mention rate limits, caching, or pagination behavior. Returns specific fields but no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose and ordering, second lists returned fields. No unnecessary words, front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description covers key return fields and ordering. Parameter schema is complete. Could mention that result set is not filterable beyond limit, but overall adequate for a simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Single parameter 'limit' with 100% schema coverage including description and default. Description adds that results are ordered by rank, which complements the parameter semantics by implying limit constrains the top N results.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Get', resource 'most popular board games', and ordering by popularity rank. Differentiates from sibling tools like 'get_game' (single game) and 'search_games' (filtered search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for fetching popular games, but no explicit guidance on when not to use (e.g., if user wants filtered search) or alternatives. Sibling tool names suggest alternatives but description does not mention them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
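
The optional key gives recall two modes, fetch-one and list-all; the sketch below builds one request for each. The key name is borrowed from remember's schema examples.

```python
import json

def recall_args(key: str | None = None) -> dict:
    """With a key: fetch that memory. Without: list all stored keys."""
    return {"key": key} if key is not None else {}

for call_id, args in enumerate([recall_args("target_ticker"), recall_args()], start=1):
    request = {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": "recall", "arguments": args},
    }
    print(json.dumps(request))
```
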
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Describes behavior: retrieves by key or lists all. But does not disclose whether retrieval is case-sensitive, whether it returns full content or just metadata, or what happens if key does not exist (error vs empty result). Adequate but could be more transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. Front-loaded with purpose and alternative behavior. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description adequately explains behavior for a simple retrieval tool. Could mention what happens when key is missing, but overall complete enough for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds value by explaining that omitting key lists all memories, which goes beyond the schema description. Effectively explains the optional nature of the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves a stored memory by key or lists all memories when key is omitted. Uses specific verb 'retrieve' and resource 'memory', and distinguishes between two modes. No sibling tool does exactly this, so differentiation is clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to omit key to list all memories. Implicitly suggests using it for retrieving context from earlier sessions. However, there is no guidance on when to use sibling tools like 'remember' (which stores) or 'forget' (which deletes), though context suggests these are complementary.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
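
A sketch of a remember call; the key follows the schema examples, while the value text is an arbitrary illustration.

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            # Key from the schema examples; value is an arbitrary illustration.
            "key": "user_preference",
            "value": "Prefers cooperative games for 2-4 players",
        },
    },
}
print(json.dumps(request, indent=2))
```
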
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. It discloses persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is helpful. However, it does not mention any limitations (e.g., maximum key length, value size, or number of stored pairs), nor does it specify whether overwriting an existing key is allowed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each carrying meaningful information: purpose, usage guidance, and behavioral note. No fluff or repetition. Front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is largely complete. It explains what the tool does, when to use it, and persistence details. A minor gap is the lack of explicit mention about overwriting behavior or limits, but overall it suffices.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by providing example keys (e.g., 'subject_property', 'target_ticker') and explaining that value can be any text. This contextualizes the parameters beyond the schema's generic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Store a key-value pair in your session memory.' It specifies the resource ('session memory') and the action ('store'), and provides concrete use cases (saving intermediate findings, user preferences, context across tool calls).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives usage context: 'Use this to save intermediate findings, user preferences, or context across tool calls.' It also notes persistence differences between authenticated and anonymous users. However, it does not explicitly say when not to use the tool, nor does it mention alternatives like 'forget' or 'recall'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_games (Grade: A)

Search for board games by name. Returns title, year, player count, playtime, rating, price, and description. Use this to find games before fetching full details.

Parameters (JSON Schema)
  name (required): Board game name or partial name to search for, e.g. "Catan" or "Ticket to Ride"
  limit (optional): Number of results to return (1–100, default 10)
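
The description positions search_games as the first step before get_game, so the sketch below shows that two-step flow. In practice the ID in step two would come from the actual search response; here it is the Catan example from get_game's schema, and tool_call is a hypothetical helper.

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> dict:
    """Assemble a JSON-RPC 2.0 tools/call envelope (hypothetical helper)."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: find candidate games by full or partial name.
search = tool_call(1, "search_games", {"name": "Catan", "limit": 5})
# Step 2: fetch full details for an ID returned by step 1.
detail = tool_call(2, "get_game", {"id": "OIXt3DmJU0"})

for req in (search, detail):
    print(json.dumps(req))
```
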
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It lists the return fields, but does not state whether the tool is read-only, whether it requires authentication, or whether it has any side effects. For a search tool, this is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences that efficiently convey the purpose, the return fields, and the usage order. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is sufficient for a simple search tool with two parameters. However, it lacks information about pagination, error handling, or any rate limits from the external API. With no output schema, the description could have provided more detail about the return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description does not need to add parameter details. It lists return fields, which adds some context, but does not clarify the 'limit' parameter beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for board games by name and lists the return fields (title, year, player count, etc.). It distinguishes itself from sibling tools like 'get_game' (which returns a single game by ID) and 'hot_games' (which returns trending games).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching games by name but does not explicitly state when to use this versus alternatives like 'get_game' for a single game or 'hot_games' for trending games. No exclusion criteria or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
