wikifeed
Server Details
Wikifeed MCP — wraps Wikimedia Feed API (free, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-wikifeed |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 8 of 8 tools scored.
Most tools have distinct purposes, such as retrieving Wikipedia content (featured_article, most_read, on_this_day, picture_of_day) and managing session memory (remember, recall, forget). However, discover_tools overlaps with the memory tools in a general 'search/retrieve' function, which could cause minor confusion if an agent needs to find tools versus stored data.
The naming is mostly consistent with clear verb_noun patterns (e.g., featured_article, picture_of_day, on_this_day) and action-oriented verbs (remember, recall, forget). A minor deviation is discover_tools, which uses 'discover' instead of a more standard verb like 'search', but it still follows the pattern and remains readable.
With 8 tools, the count is well-scoped for a server focused on Wikipedia content retrieval and session memory management. Each tool serves a specific function, and there is no bloat or missing essential operations, making it efficient for agents to use.
The tool set covers core Wikipedia content retrieval (featured articles, most-read, historical events, pictures) and basic session memory CRUD operations (remember, recall, forget). A minor gap is the lack of tools for searching or browsing Wikipedia articles beyond featured/most-read content, but agents can work around this using the existing tools for common tasks.
Available Tools
9 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
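For illustration, a minimal MCP `tools/call` request for this tool might look like the following. The JSON-RPC envelope is the standard MCP shape, and the question is taken from the description's own examples:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```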
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that Pipeworx selects tools and fills arguments automatically, which adds useful context beyond the basic input schema. However, it lacks details on limitations, error handling, or response format, leaving gaps in behavioral understanding for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it opens with the core functionality, explains the mechanism, and provides concrete examples. Every sentence adds value without redundancy, making it easy for an agent to quickly grasp the tool's purpose and usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (automated tool selection and argument filling) and lack of annotations or output schema, the description is moderately complete. It covers the high-level workflow and input expectations but omits details on output structure, error cases, or performance constraints, which could hinder agent effectiveness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'question' parameter as 'Your question or request in natural language.' The description adds minimal value by reinforcing this with 'plain English' and examples, but does not provide additional syntax or format details beyond what the schema specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input versus tool-specific schemas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for asking questions in plain English without needing to browse tools or learn schemas. It includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases. However, it does not explicitly state when NOT to use it or name alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
| limit | No | Maximum number of tools to return (max 50) | 20 |
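A sketch of the `params` portion of a `tools/call` request, reusing an example query from the schema; the limit value is illustrative:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find trade data between countries",
    "limit": 10
  }
}
```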
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a search operation (implying read-only), returns relevant tools with metadata, and suggests it's a preliminary step. However, it lacks details on rate limits, error handling, or authentication needs, which would be beneficial for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with three sentences that each earn their place: the first defines the purpose, the second the output, and the third provides critical usage guidance. There is zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage context, and output type, but could improve by describing the result format (no output schema is provided) or limitations like result ordering. It compensates well for the lack of structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('query' and 'limit') thoroughly. The description adds no additional parameter semantics beyond what's in the schema, such as examples or usage tips for the query parameter. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resources ('tool catalog'), and distinguishes it from siblings by emphasizing its search functionality versus the siblings' content-focused tools like 'featured_article' or 'picture_of_day'. It explicitly mentions returning 'most relevant tools with names and descriptions'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This includes a specific condition (500+ tools) and a clear alternative (using this before other tools), with no misleading exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
featured_article (Grade: B)
Get Wikipedia's featured article for a specific date.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Four-digit year (e.g., "2024") | |
| month | Yes | Two-digit month number (e.g., "01", "12") | |
| day | Yes | Two-digit day number (e.g., "01", "15") | |
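A sketch of the call arguments, using the schema's own example values; note that all three parameters are zero-padded strings, not numbers:

```json
{
  "name": "featured_article",
  "arguments": {
    "year": "2024",
    "month": "01",
    "day": "15"
  }
}
```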
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe behavioral traits such as whether it's read-only, potential rate limits, error handling (e.g., for invalid dates), or the format of the returned article (e.g., text, HTML, summary). This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It directly communicates the tool's function in a structured and easily digestible manner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., article title, content, links), potential limitations (e.g., date range constraints), or error scenarios. This makes it inadequate for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for all three parameters (year, month, day) including format examples. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 without compensating or adding extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('Wikipedia's featured article') with precise scope ('for a specific date'). It distinguishes from sibling tools like 'most_read', 'on_this_day', and 'picture_of_day' by focusing exclusively on featured articles rather than other content types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a user wants Wikipedia's featured article for a particular date, but it doesn't explicitly state when to use this tool versus alternatives like 'on_this_day' (which might provide historical events) or 'picture_of_day' (which provides images). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
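A sketch of the call arguments; the key shown is a hypothetical example borrowed from the `remember` schema:

```json
{
  "name": "forget",
  "arguments": {
    "key": "subject_property"
  }
}
```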
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Delete' implies a destructive mutation, it doesn't specify whether deletion is permanent, reversible, requires specific permissions, or has side effects. For a destructive operation with zero annotation coverage, this is a significant gap in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action ('Delete') and resource ('stored memory'), making it immediately understandable without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't address critical aspects like what happens after deletion (e.g., confirmation, error handling), whether the operation is idempotent, or what the return value might be. Given the complexity of a delete operation, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds minimal value beyond this, merely restating that deletion is by key. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete') and resource ('a stored memory by key'), distinguishing it from sibling tools like 'recall' (likely retrieves memories) and 'remember' (likely stores memories). It uses precise terminology that communicates the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'recall' or 'remember', nor does it mention prerequisites or constraints. It simply states what the tool does without contextual usage information, leaving the agent to infer when deletion is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
most_read (Grade: B)
Get the most-read Wikipedia articles for a specific date.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Four-digit year (e.g., "2024") | |
| month | Yes | Two-digit month number (e.g., "01", "12") | |
| day | Yes | Two-digit day number (e.g., "01", "15") | |
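As with `featured_article`, a sketch of the call arguments using zero-padded string values (the date is illustrative):

```json
{
  "name": "most_read",
  "arguments": {
    "year": "2024",
    "month": "01",
    "day": "15"
  }
}
```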
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't specify whether it requires authentication, has rate limits, returns structured data, or handles errors. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Get', 'most-read Wikipedia articles', 'for a specific date') earns its place by contributing essential information. There is no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavioral traits, usage context, and output format. Without annotations or output schema, the agent has incomplete information about what the tool returns or how it behaves.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with all three parameters (year, month, day) clearly documented in the input schema. The description adds no additional parameter semantics beyond implying a date context. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'most-read Wikipedia articles' with the specific context 'for a specific date'. It distinguishes itself from siblings like 'featured_article' or 'picture_of_day' by focusing on popularity metrics rather than curated content. However, it doesn't explicitly contrast with 'on_this_day', which also involves date-based queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like date availability or historical limits, nor does it suggest when other tools like 'featured_article' might be more appropriate. The agent must infer usage solely from the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
on_this_day (Grade: B)
Get historical events, births, deaths, and holidays that occurred on a given month and day across all years.
| Name | Required | Description | Default |
|---|---|---|---|
| month | Yes | Two-digit month number (e.g., "01" for January, "12" for December) | |
| day | Yes | Two-digit day number (e.g., "01", "15", "31") | |
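A sketch of the call arguments; note there is no `year` parameter, since results span all years. The date shown is illustrative:

```json
{
  "name": "on_this_day",
  "arguments": {
    "month": "07",
    "day": "20"
  }
}
```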
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data but does not describe response format, error handling, rate limits, or authentication needs. For a read-only tool with no annotations, this leaves significant gaps in understanding its operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose without unnecessary details. It is front-loaded with the core functionality and uses clear, direct language, making it easy for an agent to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple parameters) and high schema coverage, the description is adequate for basic understanding. However, with no output schema and no annotations, it lacks details on return values and behavioral traits, which could hinder an agent's ability to use the tool effectively in more complex scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with both parameters ('month' and 'day') fully documented in the input schema. The description adds no additional parameter semantics beyond implying the tool uses these inputs to filter historical data. This meets the baseline of 3 when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and the resource ('historical events, births, deaths, and holidays'), with precise scope ('on a given month and day across all years'). It distinguishes itself from sibling tools like 'featured_article', 'most_read', and 'picture_of_day' by focusing on historical data retrieval rather than current or featured content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or specific contexts where this tool is preferred over sibling tools. The agent must infer usage based solely on the purpose statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
picture_of_day (Grade: B)
Get Wikipedia's picture of the day for a specific date, including title, description, and image URL.
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Four-digit year (e.g., "2024") | |
| month | Yes | Two-digit month number (e.g., "01", "12") | |
| day | Yes | Two-digit day number (e.g., "01", "15") | |
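A sketch of the call arguments, again with zero-padded string values (the date is illustrative):

```json
{
  "name": "picture_of_day",
  "arguments": {
    "year": "2024",
    "month": "12",
    "day": "25"
  }
}
```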
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does (retrieves data) but lacks behavioral details such as rate limits, error handling, authentication needs, or whether it's a read-only operation. It doesn't disclose any traits beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and key details (title, description, image URL). Every word earns its place with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks completeness in behavioral transparency and usage guidelines, leaving gaps for an AI agent to infer operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the parameters (year, month, day with formats). The description adds no additional meaning beyond implying date specificity, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('Wikipedia's picture of the day'), specifying what information is retrieved (title, description, image URL). It distinguishes from siblings like 'featured_article' or 'most_read' by focusing on the picture of the day, though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a specific date ('for a specific date'), suggesting when to use it, but provides no explicit guidance on when not to use it or alternatives (e.g., vs. 'on_this_day' for historical events). Usage context is implied but not detailed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
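Two sketches of the call arguments: the first retrieves a single memory (the key is a hypothetical example taken from the `remember` schema), the second omits `key` entirely to list all stored keys:

```json
{
  "name": "recall",
  "arguments": { "key": "target_ticker" }
}
```

```json
{
  "name": "recall",
  "arguments": {}
}
```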
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that memories can be retrieved from current or previous sessions, which adds useful context about persistence. However, it lacks details on error handling (e.g., what happens if a key doesn't exist), performance aspects, or any limitations like rate limits or memory size constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence and uses a second sentence to provide context, with zero wasted words. Every sentence earns its place by adding clear value, making it appropriately sized and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (one optional parameter, no output schema, no annotations), the description is fairly complete. It covers purpose, usage, and parameter semantics adequately. However, it could improve by addressing potential edge cases or output format, as there's no output schema to rely on.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by explaining the semantic effect of omitting the key ('omit to list all keys'), which clarifies the dual functionality beyond the schema's technical specification. This compensates well, though it doesn't provide additional details like key format or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes between retrieval by key and listing all memories, though it doesn't explicitly differentiate from sibling tools like 'remember' or 'forget' beyond the basic function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool ('to retrieve context you saved earlier in the session or in previous sessions') and includes a usage rule ('omit key' to list all keys). However, it doesn't explicitly state when not to use it or name alternatives among sibling tools, such as when to use 'remember' vs. 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
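A sketch of the call arguments, pairing an example key from the schema with a hypothetical value:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```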
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, and it adds valuable context: it discloses persistence traits ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which are not inferable from the input schema. However, it lacks details on rate limits, error conditions, or exact memory constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with the core purpose: the first defines the tool, the second gives usage context, and the third provides critical persistence information. Every sentence adds value, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 2 parameters with full schema coverage, the description is mostly complete: it covers purpose, usage, and key behavioral traits (persistence rules). However, it lacks details on return values or error handling, which would be helpful for a storage tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or usage tips, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), with explicit examples of what to store ('intermediate findings, user preferences, or context across tool calls'). It distinguishes from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear context on when to use ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use or name alternatives (e.g., 'recall' for retrieval). The mention of 'session memory' implies it's for temporary storage, but no direct exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.