Fred
Server Details
FRED MCP — Federal Reserve Economic Data (St. Louis Fed)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-fred
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes (FRED operations, memory, discovery), but 'ask_pipeworx' and 'discover_tools' overlap somewhat: both can be used to find data, though ask_pipeworx is more of a direct query tool while discover_tools is for browsing the tool catalog. The memory tools (remember, recall, forget) are clearly distinct from the FRED tools.
Tool names follow a consistent snake_case pattern with a verb_noun structure (e.g., fred_search, fred_get_series, discover_tools). The exceptions are 'ask_pipeworx' (a phrase) and 'forget' (single verb), which deviate slightly but are still clear.
10 tools is a reasonable count for a server that combines memory functions and FRED data access. The scope is well-defined, and each tool serves a specific purpose. It could be slightly lean on the FRED side (e.g., missing update/delete for series), but overall appropriate.
For the FRED domain, the tools provide search, info, and data retrieval, but lack CRUD operations (e.g., no create/update/delete series). The memory tools (remember, recall, forget) form a complete set. The discovery tool (discover_tools) and ask_pipeworx are unique but the overall surface feels sufficient for common tasks.
Available Tools
10 tools
ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
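As a concrete illustration, a call to this tool over MCP's standard `tools/call` request might look like the sketch below; the question is taken from the description's own examples, and the envelope fields (`jsonrpc`, `id`) are the usual JSON-RPC boilerplate rather than anything specific to this server.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```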
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states that Pipeworx 'picks the right tool, fills the arguments, and returns the result', which discloses the autonomous behavior. However, it does not mention potential latency, fallback behavior if no tool matches, or any limits on question complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at 3 sentences with front-loaded purpose. The examples add value but could be seen as slightly excessive. Overall, it's well-structured and earns its length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no nested objects, no output schema), the description is reasonably complete. It explains the tool's core function and usage. However, it could mention that the answer might come from multiple internal tools and that results are not cached or revisitable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description need not add much. The description explains the 'question' parameter as 'your question or request in natural language', which is slightly more informative than the schema's description, but adds little extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: ask a natural language question and get an answer from the best data source. It distinguishes itself by not requiring tool or schema knowledge, unlike sibling tools like fred_search or discover_tools which are more structured.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear usage guidance: ask in plain English, no need to browse tools or learn schemas. It provides three concrete examples, which implicitly suggest when to use this tool over more specific tools like fred_series_info or fred_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
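A hypothetical `arguments` object for an MCP `tools/call` to this tool, built only from the schema above (the query is one of the schema's own examples, and `limit` is shown at its documented default):

```json
{
  "query": "analyze housing market trends",
  "limit": 20
}
```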
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must cover behavioral traits. It mentions it returns 'most relevant tools with names and descriptions', but does not disclose details like performance, ordering, or any side effects. However, since this is a search tool, side effects are minimal, so a score of 3 is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains what is returned ('names and descriptions'), and for a simple search tool, this is complete. The context of 500+ tools is addressed, and the instruction to call it first ensures proper workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add parameter-specific meaning beyond the schema, but the schema already provides good descriptions for both parameters. No additional value added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Search') and resource ('Pipeworx tool catalog'), clearly stating what the tool does. It distinguishes itself from siblings by positioning itself as a discovery tool for the catalog, which is unique among the sibling tools like fred_search or ask_pipeworx.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use it ('Call this FIRST when you have 500+ tools available'), implying it is a prerequisite for selecting other tools. Provides a clear instruction to the agent, effectively guiding its usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
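An illustrative `arguments` object; the key name is hypothetical and assumes something was previously stored under it with `remember`:

```json
{
  "key": "subject_property"
}
```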
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states deletion but omits details on irreversibility, side effects, or error handling. Agent needs more behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words, front-loaded with action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one param and no output schema, but description lacks important behavioral details like permanence or permission requirements, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already documents the key parameter. Description adds no further meaning beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Delete' and resource 'stored memory by key', clearly distinguishing from sibling tools like 'recall' and 'remember'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives mentioned, but context implies deletion is for removing specific memories; sibling names provide implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_category (A)
Browse economic data by category (housing, employment, money/banking, etc.). Returns subcategories and related series IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | FRED API key | |
| category_id | No | Category ID to browse children of (0 = root) | 0 |
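A minimal sketch of the `arguments` for browsing from the root of the category tree; `YOUR_FRED_API_KEY` is a placeholder, not a real credential:

```json
{
  "_apiKey": "YOUR_FRED_API_KEY",
  "category_id": 0
}
```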
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It implies a read-only operation (browsing) but doesn't disclose return format, pagination, or any rate limits. Since it's a simple browse operation with no destructive actions, the lack of detail is acceptable but not ideal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at two sentences, with the key information front-loaded. Every sentence provides valuable guidance, including examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 params, no output schema, no nested objects), the description is sufficient. It explains the tool's purpose and usage; it could also mention that it returns child categories or a series count, but that is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so both parameters are documented. The description adds context that category_id=0 is the root and provides example IDs, but doesn't add meaning beyond the schema's descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Browse FRED categories' and specifies the root category ID, distinguishing it from siblings like fred_get_series and fred_search. It provides specific examples of category IDs for popular topics, making the purpose very clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit usage guidance: start with category_id=0 for the root, and suggests using it for exploring available data by topic. However, it doesn't explicitly state when not to use it or mention alternatives, though siblings like fred_search are distinct.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_get_series (B)
Fetch historical data points for an economic indicator by series ID (e.g., 'MORTGAGE30US' for 30-year mortgage rate, 'HOUST' for housing starts). Returns dates and values.
| Name | Required | Description | Default |
|---|---|---|---|
| units | No | Data transformation: lin (levels), chg (change), ch1 (change from year ago), pch (% change), pc1 (% change from year ago), pca (compounded annual rate of change), cch (continuously compounded rate of change), cca (continuously compounded annual rate of change), log (natural log) | lin |
| _apiKey | Yes | FRED API key | |
| frequency | No | Frequency aggregation: d, w, bw, m, q, sa, a (optional) | |
| series_id | Yes | FRED series ID (e.g., "MORTGAGE30US", "HOUST", "CSUSHPISA") | |
| observation_end | No | End date in YYYY-MM-DD format (optional) | |
| observation_start | No | Start date in YYYY-MM-DD format (optional) | |
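An illustrative `arguments` object combining the documented parameters: the series ID and the `units`/`frequency` codes come from the schema above, while the date range and API key are placeholders chosen for the example:

```json
{
  "_apiKey": "YOUR_FRED_API_KEY",
  "series_id": "MORTGAGE30US",
  "units": "pc1",
  "frequency": "m",
  "observation_start": "2020-01-01",
  "observation_end": "2024-12-31"
}
```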
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not detail behavioral traits beyond fetching observations. Annotations are empty, so no contradiction. The description adds value by listing example series IDs, but does not disclose rate limits, data scope, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with a clear first sentence stating the purpose, followed by a list of key series IDs. It is front-loaded and efficient, though the list could be shortened or referenced via the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema, no annotations), the description is adequate but incomplete. It does not explain the output format or what observations entail, which could be critical for an agent. The list of series IDs is helpful but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for all parameters. The description does not add additional meaning beyond the schema, as it only mentions series IDs and lacks parameter details. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets observations/data points for a FRED series, which is a specific verb-resource combination. It lists key housing series IDs, distinguishing it from other FRED tools like fred_search or fred_series_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving time series data, especially housing series, but does not explicitly state when to use this tool versus alternatives like fred_series_info (which returns metadata). No exclusions or conditions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_releases (C)
Check upcoming and recent economic data releases. Returns release dates, names, and which series they update.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (1-1000) | 20 |
| offset | No | Result offset for pagination | 0 |
| _apiKey | Yes | FRED API key | |
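A minimal `arguments` sketch using the documented pagination defaults; only the API key placeholder is strictly required:

```json
{
  "_apiKey": "YOUR_FRED_API_KEY",
  "limit": 20,
  "offset": 0
}
```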
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states that the tool returns 'upcoming and recent' economic data releases but doesn't disclose pagination behavior, rate limits, data staleness, or any side effects. Minimal disclosure beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Short and direct (two sentences), but could be slightly more specific. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 3 parameters (1 required) and no output schema. Description is sufficient for basic understanding but lacks details on return format or error handling. With no output schema, agent may benefit from knowing what fields are returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameters are well-documented in schema. Description adds no extra meaning beyond schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it gets FRED data releases, specifically 'upcoming and recent releases of economic data'. The verb 'get' and resource 'releases' are clear, but it doesn't differentiate from siblings like fred_category or fred_search, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs alternatives. Sibling tools exist for categories, series info, and search, but description doesn't indicate when releases is preferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_search (A)
Search for economic data series by keyword. Returns series IDs, titles, and descriptions to identify the right indicator.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (1-1000) | 20 |
| _apiKey | Yes | FRED API key | |
| order_by | No | Order results by: search_rank, series_id, title, units, frequency, seasonal_adjustment, realtime_start, realtime_end, last_updated, observation_start, observation_end, popularity, group_popularity | search_rank |
| sort_order | No | Sort direction: asc or desc | asc (for search_rank) |
| search_text | Yes | Keywords to search for (e.g., "mortgage rate", "housing starts") | |
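An illustrative `arguments` object; the search text is one of the schema's examples, and the ordering values are drawn from the lists the schema documents:

```json
{
  "_apiKey": "YOUR_FRED_API_KEY",
  "search_text": "housing starts",
  "limit": 20,
  "order_by": "popularity",
  "sort_order": "desc"
}
```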
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It mentions the tool is for searching by keyword, which is clear, but does not disclose limitations like pagination, rate limits, or authentication requirements beyond what the schema implies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the main purpose and provide useful examples. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 params, no output schema, no annotations), the description is complete enough for a search tool. It could mention that results include series IDs for further use with fred_get_series, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds context by providing example search terms ('mortgage rate'), but does not add significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Search') and resource ('FRED series'), and the examples of data types ('housing, employment, inflation') distinguish it from sibling tools like fred_get_series and fred_series_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for discovering series IDs, but does not explicitly contrast with other FRED tools (e.g., fred_category for browsing by category). No explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_series_info (B)
Get metadata for a series: title, units, frequency, seasonal adjustment, notes, and date range. Check this before fetching historical data.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | FRED API key | |
| series_id | Yes | FRED series ID (e.g., "MORTGAGE30US") | |
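A minimal `arguments` sketch using the schema's own example series ID; the API key is a placeholder:

```json
{
  "_apiKey": "YOUR_FRED_API_KEY",
  "series_id": "MORTGAGE30US"
}
```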
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must carry the full burden. It correctly implies read-only behavior by stating 'Get metadata'. However, it does not disclose API rate limits or potential errors (e.g., an invalid series_id).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence front-loads the action and lists key metadata fields concisely. No wasted words. Could be split into two sentences for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description hints at return fields but not structure (e.g., JSON format). With only 2 simple params and a clear purpose, it is mostly adequate but leaves some ambiguity about response details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters described). Description does not add new parameter info beyond schema, but schema already describes them adequately. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves metadata (title, units, etc.) for a FRED series, distinguishing it from siblings like fred_get_series (likely returns values) and fred_search (finds series).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs. fred_get_series or fred_category. No mention of prerequisites (e.g., need an API key) beyond schema. Does not specify that it's a lightweight info call.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
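An illustrative `arguments` object retrieving a single hypothetical key; per the schema, omitting `key` entirely (an empty `arguments` object) would instead list all stored keys:

```json
{
  "key": "subject_property"
}
```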
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully describe behavior. It states the tool retrieves or lists memories, which is adequate. However, it doesn't mention side effects, persistence, or limits, which would be helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the action, and contains no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema and no output schema, the description covers the tool's function well. It could mention return format or error handling, but for a simple memory retrieval tool, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter described. The description adds value by explaining that omitting the key lists all memories, which goes beyond the schema's description of 'omit to list all keys' by clarifying the use case.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the verb 'retrieve' and resource 'memory', and distinguishes itself from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says to use it to retrieve context saved earlier, which provides clear context. However, it does not explicitly mention when not to use it or alternatives among siblings, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
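A sketch of the `arguments` object; the key is one of the schema's own examples, and the value text is purely illustrative:

```json
{
  "key": "target_ticker",
  "value": "User is analyzing AAPL; prefers quarterly data."
}
```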
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses important behavioral traits: memory persistence depends on authentication (authenticated users get persistent, anonymous sessions last 24 hours). This is valuable context beyond the basic storage action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences with no wasted words. It front-loads the purpose, then usage examples, then behavioral context. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete for a simple key-value store tool with two well-documented parameters and no output schema. It explains purpose, usage, and persistence behavior. Could optionally mention that value is overwritten on same key, but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add additional meaning beyond the schema's descriptions of 'key' and 'value'. The examples in the schema ('subject_property', etc.) already cover semantics well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Store a key-value pair in your session memory' which clearly identifies the verb (store) and resource (session memory). It distinguishes itself from siblings like 'recall' and 'forget' by mentioning 'save' and 'session memory', though not explicitly naming the siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context for usage. However, it does not explicitly state when not to use it or mention alternative tools like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.