Server Details

Wikipedia MCP — wraps Wikipedia REST API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-wikipedia
GitHub Stars: 0
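
The listing does not show the endpoint URL, but since the transport is Streamable HTTP, any MCP client can connect directly. Below is a minimal sketch using the MCP Python SDK's Streamable HTTP client; the URL is a placeholder, not the server's real endpoint:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical endpoint; substitute the server's real URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # streamablehttp_client yields a read stream, a write stream,
    # and a callable that returns the negotiated session id.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```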

Tool Descriptions: B

Average 3.8/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between ask_pipeworx and search_wikipedia/discover_tools, as ask_pipeworx can handle queries that might otherwise use those tools. However, the descriptions clarify that ask_pipeworx is a higher-level abstraction, reducing confusion. The memory tools (remember, recall, forget) are clearly separate from Wikipedia-specific tools.

Naming Consistency: 3/5

The naming is mixed with no consistent pattern. Wikipedia tools use verb_noun (e.g., get_article_summary, search_wikipedia), while memory tools use simple verbs (remember, recall, forget), and ask_pipeworx/discover_tools have unique names. This inconsistency makes the set less predictable, though the names are still readable and descriptive.

Tool Count: 4/5

With 9 tools, the count is reasonable for a Wikipedia server that also includes memory and query abstraction features. It covers core Wikipedia operations (summary, sections, search, random) and adds utility tools, but it might be slightly over-scoped by including ask_pipeworx and discover_tools, which feel like they belong to a broader toolkit.

Completeness: 4/5

For Wikipedia operations, the tools provide good coverage: reading (summary, sections, search, random) is well-handled, though editing or advanced querying is missing, which is typical for read-only access. The memory tools add useful session management, and ask_pipeworx offers a high-level query interface, but there are minor gaps like direct article content fetching or language support.

Available Tools (9 tools)
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
  question (required): Your question or request in natural language
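
A sketch of a direct call, reusing the hypothetical endpoint from the connection example above; no output schema is published, so the example simply prints whatever content blocks come back:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def ask(question: str) -> None:
    # Hypothetical endpoint, as in the connection sketch.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("ask_pipeworx", {"question": question})
            for block in result.content:
                # Text blocks carry the answer as plain text.
                print(getattr(block, "text", block))

asyncio.run(ask("What is the US trade deficit with China?"))
```
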
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It explains key behaviors: Pipeworx picks the tool, fills arguments, and returns results. However, it lacks details on limitations (e.g., data source reliability, response time, error handling) or prerequisites. The description doesn't contradict annotations, but could be more comprehensive for a tool with automated decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: first sentence states core functionality, second explains the automation benefit, third provides usage guidance with examples. Every sentence adds value, there's no redundancy, and key information is front-loaded. The examples are brief and illustrative without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (automated tool selection with natural language input) and lack of both annotations and output schema, the description does well to explain the core workflow and provide examples. However, it could better address potential limitations or error scenarios. For a single-parameter tool with good schema coverage, it's mostly complete but has minor gaps in behavioral transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with one parameter well-documented in the schema. The description adds meaningful context by specifying 'question or request in natural language' and providing concrete examples that illustrate the parameter's expected format and scope. This enhances understanding beyond the schema's basic type definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resources ('best available data source'), distinguishing it from siblings like search_wikipedia or get_article_summary by emphasizing natural language processing and automated tool selection. It explicitly contrasts with sibling tools by stating 'No need to browse tools or learn schemas'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('just describe what you need' in plain English) and when not to use alternatives (no need to browse other tools or learn schemas). The examples illustrate appropriate use cases, and the context implies this is for general queries rather than specific operations like get_article_sections.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
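
A sketch of the catalog search against the same hypothetical endpoint; since the result format is only loosely specified, the example prints the raw content blocks:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def discover(task: str) -> None:
    # Hypothetical endpoint, as in the earlier sketches.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Narrow a large catalog to candidates before picking a tool.
            result = await session.call_tool(
                "discover_tools",
                {"query": task, "limit": 5},  # limit defaults to 20, max 50
            )
            for block in result.content:
                print(getattr(block, "text", block))

asyncio.run(discover("find trade data between countries"))
```
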
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and return format ('most relevant tools with names and descriptions'), but doesn't mention rate limits, authentication requirements, error conditions, or how relevance is determined. The description adds useful context about when to call it first, but lacks comprehensive behavioral details.

Conciseness: 5/5

The description is perfectly concise, with three sentences that each earn their place: the first two explain the core functionality and return format, and the third provides crucial usage guidance. There's zero wasted language or redundancy.

Completeness: 4/5

For a search tool with no annotations and no output schema, the description provides good context about purpose and usage. However, it doesn't describe the return format in detail (beyond 'most relevant tools with names and descriptions') or potential limitations. Given the 100% schema coverage and clear purpose, it's mostly complete but could benefit from more output information.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't elaborate on query formulation strategies or limit considerations). The baseline of 3 is appropriate when the schema does all the parameter documentation work.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes this tool from its siblings (which are all Wikipedia-related) by focusing on tool discovery rather than content retrieval.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific threshold (500+ tools) and context (finding tools for a task). It also implicitly distinguishes from siblings by focusing on tool discovery rather than Wikipedia operations.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema):
  key (required): Memory key to delete
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. While 'Delete' implies a destructive mutation, it doesn't specify whether the deletion is permanent, reversible, requires specific permissions, or what happens on success/failure. For a destructive tool with zero annotation coverage, this leaves significant behavioral gaps.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's function without any wasted words. It's appropriately sized for a simple deletion tool and front-loads the essential information.

Completeness: 2/5

For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address critical context like what constitutes a valid memory key, whether deletion is idempotent, what confirmation or error messages to expect, or how this tool relates to the memory system implied by sibling tools.

Parameters: 3/5

The schema description coverage is 100%, with the single parameter 'key' fully documented in the schema as 'Memory key to delete'. The description adds no additional parameter semantics beyond what the schema already provides, so it meets the baseline score when schema coverage is high.

Purpose: 4/5

The description clearly states the action ('Delete') and the target resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly distinguish from sibling tools like 'recall' or 'remember', but the verb 'Delete' provides inherent differentiation from read operations.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), nor does it clarify relationships with sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely creates them).

get_article_sections: A

Get the section outline of a Wikipedia article by title. Returns all headings and hierarchy to navigate content structure.

Parameters (JSON Schema):
  title (required): Wikipedia article title (e.g., "World War II")
Behavior: 2/5

With no annotations provided, the description carries full burden but only states what it returns (list of sections) without covering behavioral aspects like error handling, rate limits, authentication needs, or whether it's read-only/destructive. This is a significant gap for a tool with zero annotation coverage.

Conciseness: 5/5

The description is front-loaded and concise with two sentences that efficiently convey purpose and output without any wasted words, making it easy for an agent to parse quickly.

Completeness: 3/5

For a simple read operation with one parameter and no output schema, the description covers basic purpose and output but lacks behavioral context (e.g., error cases, limitations). It's minimally adequate but has clear gaps in completeness given the absence of annotations.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents the single parameter 'title' with its description. The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline of 3 for high coverage.

Purpose: 5/5

The description clearly states the verb ('Get') and resource ('section outline of a Wikipedia article'), specifying it returns a table of contents with titles and heading levels. It distinguishes from siblings like get_article_summary (summary content) and search_wikipedia (search functionality).

Usage Guidelines: 4/5

The description implies usage by mentioning 'by title,' which suggests it's for retrieving structure of a known article, contrasting with search_wikipedia for unknown articles. However, it doesn't explicitly state when not to use it or name alternatives, leaving some ambiguity.

get_article_summary: A

Get a Wikipedia article overview by title. Returns intro text, description, thumbnail image, and related content links.

Parameters (JSON Schema):
  title (required): Wikipedia article title (e.g., "Albert Einstein")
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses behavioral traits by stating what data is returned (introduction extract, description, thumbnail URL, content URLs), which helps the agent understand the output format. However, it doesn't mention potential errors (e.g., if the article doesn't exist), rate limits, or authentication needs, leaving gaps in behavioral context.

Conciseness: 5/5

The description is two efficient sentences that front-load the core action and resource, then list the returned data without unnecessary details. Every part earns its place by clarifying the tool's purpose and output, making it easy to parse quickly.

Completeness: 4/5

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is mostly complete. It covers the purpose and output format adequately. However, it lacks error handling or edge case information (e.g., handling non-existent titles), which would enhance completeness for a read operation.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'title' fully documented in the schema as 'Wikipedia article title (e.g., "Albert Einstein")'. The description adds no additional parameter semantics beyond what the schema provides, such as formatting constraints or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action and target resource ('Get a Wikipedia article overview by title'), and distinguishes from siblings by focusing on overview extraction rather than sections, random articles, or search. It explicitly mentions what information is returned, making the purpose unambiguous.

Usage Guidelines: 3/5

The description implies usage by specifying 'by title' and listing returned data, which suggests it's for retrieving structured summaries. However, it doesn't explicitly state when to use this tool versus alternatives like get_article_sections (for detailed structure) or search_wikipedia (for finding articles). No exclusions or prerequisites are mentioned.

get_random_articles: B

Discover random Wikipedia articles for serendipitous learning. Returns title, introduction text, and page ID.

Parameters (JSON Schema):
  count (optional): Number of random articles to fetch (1-10, default 5)
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return format ('title, introduction text, and page ID'), which adds value beyond the input schema. However, it lacks details on potential limitations (e.g., rate limits, data freshness, or error conditions), which is a significant gap for a tool with zero annotation coverage.

Conciseness: 5/5

The description is two efficient sentences that front-load the purpose and key details (return format) with zero waste. It is appropriately sized for the tool's simplicity, making every word count without unnecessary elaboration.

Completeness: 3/5

Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is adequate but has clear gaps. It covers the basic purpose and return format, but lacks usage guidelines and full behavioral context (e.g., error handling or constraints), making it minimally viable but not fully complete.

Parameters: 3/5

The input schema has 100% description coverage, fully documenting the 'count' parameter with its type, range, and default. The description does not add any parameter-specific information beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without compensating value.

Purpose: 4/5

The description clearly states the action ('Discover random Wikipedia articles') and resource (Wikipedia articles), specifying what the tool does. It distinguishes from siblings like 'get_article_sections' or 'search_wikipedia' by focusing on random retrieval rather than specific articles or searches. However, it doesn't explicitly contrast with siblings in the text, keeping it from a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'search_wikipedia' or 'get_article_summary'. It implies usage for fetching random articles but lacks explicit context, prerequisites, or exclusions, leaving the agent to infer based on tool names alone.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
  key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves or lists memories stored earlier, implying it's a read-only operation. However, it doesn't mention potential limitations like session persistence, memory size constraints, or error behaviors (e.g., what happens if a key doesn't exist). The description adds some context but lacks detailed behavioral traits.

Conciseness: 5/5

The description is front-loaded and efficiently structured in two sentences. The first sentence states the core functionality, and the second provides usage context. Every sentence earns its place with no redundant information, making it easy to parse quickly.

Completeness: 4/5

Given the tool's moderate complexity (one optional parameter, no output schema), the description is mostly complete. It covers purpose, usage, and parameter semantics adequately. However, without annotations or an output schema, it could benefit from mentioning return formats (e.g., what a retrieved memory looks like) or error cases, leaving minor gaps in completeness.

Parameters: 4/5

The schema description coverage is 100%, so the schema already documents the single parameter 'key' with its purpose. The description adds value by explaining the semantics: omitting the key lists all memories, while providing it retrieves a specific one. This clarifies the dual functionality beyond what the schema provides, though it doesn't add format or validation details.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory by key', 'all stored memories'). It distinguishes this tool from siblings like 'remember' (which stores) and 'forget' (which deletes), making it evident this is for retrieval operations only.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool: 'retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter to list all memories, offering clear usage instructions that help differentiate it from alternatives like 'search_wikipedia' or 'get_article_sections'.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
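
The memory trio (remember, recall, forget) forms a simple key-value lifecycle; per the description, anonymous sessions keep keys for 24 hours. A sketch of a full round trip against the same hypothetical endpoint, with an illustrative key and value:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def memory_roundtrip() -> None:
    # Hypothetical endpoint, as in the earlier sketches.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Save an intermediate finding under a key.
            await session.call_tool(
                "remember", {"key": "target_ticker", "value": "AAPL"}
            )
            # Read it back by key; calling recall with no key lists all keys.
            stored = await session.call_tool("recall", {"key": "target_ticker"})
            print(stored.content)
            # Delete the key once it is no longer needed.
            await session.call_tool("forget", {"key": "target_ticker"})

asyncio.run(memory_roundtrip())
```
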
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains key behavioral traits: the tool performs a write operation ('Store'), specifies persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and implies session-scoped storage. However, it does not mention potential limitations like size constraints or error conditions.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence, followed by usage examples and behavioral details. Every sentence adds value without redundancy, and the length is appropriate for the tool's complexity. It efficiently communicates essential information in three concise sentences.

Completeness: 4/5

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and key behavioral aspects like persistence. However, it lacks details on return values or error handling, which would be beneficial since there is no output schema. It compensates well but has minor gaps.

Parameters: 3/5

The input schema has 100% description coverage, clearly documenting both parameters ('key' and 'value') with examples. The description adds minimal value beyond the schema, as it does not provide additional syntax, format details, or constraints. The baseline score of 3 is appropriate since the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (which likely retrieves) and 'forget' (which likely deletes). It provides concrete examples of what to store ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.

Usage Guidelines: 4/5

The description offers clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but it does not explicitly state when not to use it or name alternatives (e.g., 'recall' for retrieval). The guidance is helpful but lacks explicit exclusions or comparisons to siblings.

search_wikipedia: B

Search Wikipedia for articles by keyword. Returns matching titles, snippets, page IDs, and word counts. Use get_article_summary to read full content.

Parameters (JSON Schema):
  limit (optional): Number of results to return (1-50, default 10)
  query (required): Search query
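
The description's own guidance (search first, then get_article_summary for full content) maps onto a two-step workflow. A sketch against the same hypothetical endpoint, with a hard-coded follow-up title standing in for whatever the search actually returns:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def lookup(topic: str) -> None:
    # Hypothetical endpoint, as in the earlier sketches.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Step 1: find candidate articles by keyword.
            hits = await session.call_tool(
                "search_wikipedia", {"query": topic, "limit": 3}
            )
            print(hits.content)
            # Step 2: fetch the overview for a chosen title, as the
            # description recommends for reading full content.
            summary = await session.call_tool(
                "get_article_summary", {"title": "Alan Turing"}
            )
            print(summary.content)

asyncio.run(lookup("Alan Turing"))
```
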
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return format (title, snippet, page ID, word count) but lacks details on rate limits, authentication needs, error handling, or pagination. For a search tool, this leaves gaps in understanding operational constraints.

Conciseness: 5/5

The description is highly concise and front-loaded, consisting of three short sentences that state the action, the return values, and a follow-up tool for full content. Every sentence earns its place by providing essential information efficiently.

Completeness: 3/5

Given the tool's moderate complexity (search operation with 2 parameters) and no annotations or output schema, the description is minimally adequate. It covers the purpose and return format but lacks behavioral details like error handling or performance limits. Without an output schema, it should ideally explain return values more thoroughly, but it does specify key fields.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents both parameters (query and limit). The description adds no additional meaning beyond what the schema provides, such as examples or contextual usage for parameters. Baseline 3 is appropriate as the schema handles parameter documentation effectively.

Purpose: 4/5

The description clearly states the tool's purpose as searching Wikipedia articles by keyword, which is a specific verb (search) and resource (Wikipedia articles). It distinguishes from siblings like get_article_summary or get_random_articles by focusing on keyword-based search rather than retrieving specific articles or random content. However, it doesn't explicitly contrast with get_article_sections, which might also involve article retrieval.

Usage Guidelines: 2/5

Beyond pointing to get_article_summary for full content, the description provides little guidance on when to use this tool versus alternatives. It doesn't mention scenarios where search_wikipedia is preferred over siblings like get_article_summary for summaries or get_random_articles for random content, nor does it specify prerequisites or exclusions. Usage is implied by the purpose but not explicitly defined.
