dogsapi
Server Details
DogsAPI MCP — wraps dogapi.dog v2 API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-dogsapi
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes (e.g., get_breed vs. list_breeds, remember vs. recall), but ask_pipeworx and discover_tools overlap as both help find or execute tools, which could cause confusion. The dog-specific tools are clear, but the utility tools introduce some ambiguity.
Naming is mixed: dog tools use verb_noun (get_breed, list_breeds), memory tools use verbs (remember, recall, forget), and Pipeworx tools use descriptive phrases (ask_pipeworx, discover_tools). This lacks a unified pattern but remains readable.
With 9 tools, the count is reasonable for a server covering dog breeds and utilities. It's slightly high for a simple dog API but manageable, as each tool serves a purpose without being overwhelming.
For dog breeds, the API surface is solid with list, get, and group operations, though it lacks update/delete operations for full CRUD. The memory and Pipeworx tools add functionality, but they split the domain, making the server feel incomplete as a pure dog API.
Available Tools
9 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
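For reference, a call to this tool over MCP is a plain JSON-RPC `tools/call` request. A minimal sketch, using one of the example questions from the description (the request `id` is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```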
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it accepts natural language questions, automatically selects tools and fills arguments, and returns results. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions that would be helpful for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core functionality in the first sentence, followed by supporting details and examples. Every sentence earns its place by either explaining the tool's value proposition, contrasting with alternatives, or providing concrete usage guidance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema and no annotations, the description does an excellent job explaining what the tool does and how to use it. The examples provide crucial context about the types of questions that work well. The only minor gap is the lack of information about return format or potential error cases, which prevents a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline would be 3. However, the description adds meaningful context beyond the schema by explaining that the question should be in 'plain English' and providing concrete examples that illustrate the expected format and scope of questions. This enhances understanding of how to use the single parameter effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resources ('from the best available data source'). It explicitly distinguishes itself from sibling tools by emphasizing that users don't need to browse tools or learn schemas, setting it apart from tools like discover_tools or list_breeds that might require more technical interaction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('just describe what you need') and when not to use alternatives ('No need to browse tools or learn schemas'). It offers clear examples that demonstrate appropriate use cases, making it easy for an agent to understand this is the tool for natural language queries rather than structured API calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
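A minimal sketch of a `tools/call` request for this tool; the query is taken from the schema's examples and the `limit` value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 5
    }
  }
}
```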
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool returns 'the most relevant tools' and suggests calling it first in large tool environments, which adds useful context. However, it lacks details on behavioral traits like rate limits, authentication needs, error handling, or whether results are paginated. The description doesn't contradict any annotations, but it's incomplete for a search tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second provides critical usage guidance. Every sentence earns its place by adding value without redundancy. It's concise yet informative, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a search function with 2 parameters), no annotations, and no output schema, the description is partially complete. It covers purpose and usage well but lacks details on behavioral aspects and output format. For a tool that returns search results, the absence of output schema means the description should ideally hint at return structure, but it doesn't. It's adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters (query and limit) thoroughly. The description adds no specific parameter semantics beyond what's in the schema, such as query examples or limit implications. With high schema coverage, the baseline is 3, and the description doesn't compensate with extra insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions.' This specifies the verb (search), resource (tool catalog), and output format (tools with names and descriptions). It also distinguishes from sibling tools like get_breed or list_facts, which appear unrelated to tool discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly indicates when to use it (when facing many tools and needing discovery) and implies when not to use it (when tools are few or already known). No alternatives are named, but the context is sufficiently clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
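A minimal sketch of a `tools/call` request; the key shown is hypothetical and would have been created by an earlier `remember` call:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "user_preference"
    }
  }
}
```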
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but doesn't clarify permissions needed, whether deletion is permanent or reversible, error handling (e.g., if the key doesn't exist), or side effects. This is a significant gap for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with zero waste—it directly states the tool's action and target. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a destructive mutation with no annotations and no output schema), the description is incomplete. It lacks crucial behavioral details like error conditions, permanence of deletion, or response format, which are essential for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents the single parameter 'key' as 'Memory key to delete'. The description adds no additional meaning beyond this, such as key format examples or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('a stored memory by key'), which is specific and unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely stores memories), though the verb 'Delete' inherently suggests a destructive operation distinct from read operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., the memory must exist), exclusions, or compare it to siblings like 'recall' or 'remember', leaving the agent to infer usage from the tool name and context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_breed (Grade: C)
Get detailed info about a dog breed by ID. Returns characteristics, temperament, origin, size, and health data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The breed ID (obtained from list_breeds) | |
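A minimal sketch of a `tools/call` request; the `id` is a placeholder, since real IDs must come from a prior `list_breeds` call:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_breed",
    "arguments": {
      "id": "<breed-id-from-list_breeds>"
    }
  }
}
```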
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves detailed breed info and lists the returned fields, but it doesn't say whether the operation is read-only, what happens on an invalid ID, or whether any rate limits apply. This leaves gaps in understanding the tool's behavior beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two efficient sentences that directly state the tool's purpose and the returned fields without any redundant information. It is well-structured and front-loaded, making it easy to understand at a glance, which is ideal for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is somewhat incomplete for a tool that returns complex breed details. It names the returned fields but not their structure or types, leaving the agent uncertain about the exact return values. For a read operation with no structured output documentation, more context is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter clearly documented as 'The breed ID (obtained from list_breeds)'. The description adds no additional semantic details beyond what the schema provides, such as format examples or constraints. Given the high schema coverage, a baseline score of 3 is appropriate as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('detailed information about a specific dog breed'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_breeds' or 'get_groups', which might provide overlapping or related information about breeds or groups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance by mentioning that the ID is 'obtained from list_breeds', implying a prerequisite. However, it lacks explicit instructions on when to use this tool versus alternatives like 'list_breeds' for browsing or 'list_facts' for general facts, and offers no context on exclusions or specific use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_groups (Grade: B)
Get all AKC dog breed groups (e.g., Sporting, Herding, Terrier). Returns group names and descriptions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
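Because the tool takes no parameters, a `tools/call` request needs only an empty arguments object; a minimal sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_groups",
    "arguments": {}
  }
}
```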
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states a read operation ('Get'), which implies the tool is likely safe and non-destructive, but it doesn't mention behavioral traits such as permissions needed, rate limits, or error handling. This leaves gaps for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose with no wasted words. It directly states what the tool does and includes helpful examples, making it appropriately sized and well-structured for its simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the purpose and resource but lacks details on behavioral aspects like response format or usage context. With no output schema, it should ideally hint at what's returned, but the simplicity keeps it from being severely incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by specifying the resource (dog breed groups) and providing examples, which clarifies the output semantics beyond the empty schema. Baseline is 4 for 0 parameters as per the rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'all dog breed groups', with specific examples (Sporting, Herding, Terrier) that help clarify the domain. It doesn't explicitly differentiate from sibling tools like 'get_breed' or 'list_breeds', but the focus on groups rather than individual breeds or facts provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives like 'list_breeds' or 'get_breed'. The description implies it's for retrieving groups, but it doesn't specify use cases, prerequisites, or exclusions, leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_breeds (Grade: B)
Search dog breeds with pagination. Returns breed names, IDs, weight ranges, life spans, and hypoallergenic status. Use get_breed for detailed info on a specific breed.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (default: 1) | |
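A minimal sketch of a paginated `tools/call` request; the page number is illustrative, and omitting `page` falls back to the default of 1:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "list_breeds",
    "arguments": {
      "page": 2
    }
  }
}
```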
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions pagination (a useful behavioral trait) but doesn't cover other important aspects like rate limits, authentication needs, error conditions, or what happens when no breeds match. For a read operation with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and efficient, communicating the core purpose without unnecessary words. It's appropriately sized and front-loaded with the main action, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with 1 parameter and 100% schema coverage, the description provides adequate basic information about what the tool returns. However, with no annotations and no output schema, it should ideally mention more about the response format (e.g., structure of breed details) and behavioral constraints to be truly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single parameter 'page' is fully documented in the schema). The description doesn't add any parameter-specific information beyond what the schema already provides. According to guidelines, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search') and resource (dog breeds, with pagination), along with specific details about what information is returned (weight ranges, life spans, hypoallergenic status). It also differentiates itself from 'get_breed' by pointing agents to that tool for detailed info on a single breed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Some guidance is provided: the description points to 'get_breed' for detailed info on a specific breed. However, it doesn't mention other alternatives such as 'get_groups', and it offers no prerequisites or exclusions beyond that single pointer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_facts (Grade: C)
Get random dog facts. Returns interesting trivia about dog behavior, history, and abilities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of facts to return (default: 10, max: 100) | |
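A minimal sketch of a `tools/call` request; the `limit` value is illustrative and can be omitted to use the default of 10:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "list_facts",
    "arguments": {
      "limit": 3
    }
  }
}
```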
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves a list, implying a read-only operation, but doesn't mention any behavioral traits such as rate limits, data freshness, or whether the facts are truly random or cached. This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two clear sentences that directly state the tool's purpose without any unnecessary words. It is front-loaded and efficiently conveys the core functionality, making it concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the returned facts look like (e.g., format, content), any limitations beyond the parameter, or how it differs from sibling tools. For a tool with no structured support, more context is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, fully documenting the 'limit' parameter with its type, default, and max value. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for adequate but not enhanced coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('random dog facts'), making the purpose immediately understandable. However, it doesn't differentiate this tool from siblings like 'get_breed' or 'list_breeds', which also retrieve dog-related information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_breed' or 'list_breeds'. It lacks context about what makes 'random dog facts' distinct from other dog-related data, leaving the agent to infer usage based on the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
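A minimal sketch of a `tools/call` request; the key is one of the example keys from the `remember` schema, and sending an empty arguments object instead would list all stored keys:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```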
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining the dual behavior (retrieve by key vs list all). It clarifies persistence across sessions and the conditional behavior based on parameter presence. It doesn't mention error handling or performance characteristics, keeping it at 4 rather than 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence states the dual functionality clearly, and the second sentence provides essential usage context. Every word earns its place, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good coverage of purpose, usage, and parameter semantics. It doesn't describe the return format or error conditions, which would be helpful given the lack of output schema, but it's reasonably complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys') and connecting the parameter to the tool's purpose ('Memory key to retrieve'). This provides valuable guidance beyond the schema's technical specification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), giving clear operational instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
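A minimal sketch of a `tools/call` request; the key is taken from the schema's examples and the value is purely illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "user_preference",
      "value": "prefers hypoallergenic breeds under 30 lbs"
    }
  }
}
```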
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses important behavioral traits: the storage is session-based, authenticated users get persistence, and anonymous sessions have a 24-hour limit. This covers key operational aspects like persistence and session handling, though it could mention limitations like storage capacity or key constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and subsequent sentences add valuable context without redundancy. Every sentence earns its place by explaining usage and behavioral details efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (storage with session handling), no annotations, and no output schema, the description is largely complete. It covers purpose, usage, and key behavioral traits. However, it lacks details on error cases (e.g., duplicate keys) or return values, which would enhance completeness for a tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so parameters are well-documented in the schema. The description does not add significant meaning beyond the schema (e.g., it doesn't explain key naming conventions or value formatting further). With high schema coverage, the baseline is 3, as the description provides minimal extra param context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'). It distinguishes from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion) by focusing on storage. The description goes beyond the name 'remember' to explain the storage mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('save intermediate findings, user preferences, or context across tool calls'), giving practical examples. However, it does not explicitly state when NOT to use it or name alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.