EPA ECHO
Server Details
EPA ECHO MCP — wraps EPA ECHO Web Services (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-epa-echo
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across all 10 tools scored. Lowest: 3.4/5.
Most tools have clear, distinct purposes, especially the ECHO facility search and violation tools. However, ask_pipeworx could overlap with other tools since it claims to pick the 'right tool' automatically, which may create ambiguity about when to use it versus the specialized ECHO tools.
Tool names are mostly consistent with a 'verb_noun' pattern (e.g., echo_facility_search, echo_violations). However, ask_pipeworx, discover_tools, and the memory tools (forget, recall, remember) break the pattern slightly, mixing domains without a unified prefix.
10 tools is a reasonable count for a server covering EPA ECHO data plus memory management. It feels slightly broad (two domains: EPA data and session memory), but each tool is justified. Not overly heavy or thin.
The ECHO-related tools cover key operations: search facilities, violations, enforcement actions, compliance history. Missing update/delete operations (expected for read-only data), but no obvious gaps for querying. The memory tools provide basic CRUD. Overall adequate but not exhaustive.
Available Tools
10 tools
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
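
For concreteness, a minimal sketch of calling this tool, assuming the official MCP Python SDK (`mcp` package) and its streamable-HTTP client; the server URL below is a placeholder, since the listing omits the real one.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; not the real server URL


async def main() -> None:
    # Open the streamable-HTTP transport, then an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # ask_pipeworx takes one required argument: question.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            print(result.content)


asyncio.run(main())
```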
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains that the tool picks the right source and fills arguments automatically, which is helpful behavioral context. With no annotations provided, it carries the full burden; it effectively communicates that the tool is a query router with delegation behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, front-loading the core purpose and immediately providing examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a single parameter, the description is largely complete. It explains input format and behavior, but could briefly mention that results are returned as text or the range of data sources.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the only parameter 'question' is well-described in the schema. The description adds semantic meaning by explaining how the parameter is used (plain English, auto-routed), going beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it accepts a plain English question and returns an answer by selecting the best data source. It contrasts with siblings by emphasizing natural language queries and automated tool selection, distinguishing it from structured search tools like echo_facility_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides examples of appropriate queries and implies it should be used for natural language questions rather than structured tool calls. However, it does not explicitly state when not to use it or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
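
Reusing a session opened as in the ask_pipeworx sketch above, a catalog search might look like this; both argument names come straight from the table.

```python
from mcp import ClientSession


async def find_tools(session: ClientSession) -> None:
    # query is required; limit is optional (default 20, max 50).
    result = await session.call_tool(
        "discover_tools",
        {"query": "find trade data between countries", "limit": 5},
    )
    print(result.content)
```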
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's search behavior and that it returns tool names and descriptions. However, it does not mention any limitations, such as whether the search is case-sensitive, handles synonyms, or requires exact phrasing. It also doesn't indicate if the tool has any side effects or state changes, though for a search tool, this is less critical.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, with the most important information front-loaded. The first sentence defines the core action, the second explains the output, and the third provides usage guidance. There is no unnecessary information, though the third sentence could be slightly more precise about '500+ tools' being a general scenario.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has a simple search interface with two parameters and no output schema, the description is nearly complete. It explains the purpose, usage guidance, and parameter usage. However, it doesn't mention the format of the results (e.g., list of tool names and descriptions) or any error handling (e.g., what happens if query matches nothing). For a search tool, this is acceptable but could be slightly more thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds context by explaining how to use the 'query' parameter (natural language description) and what the 'limit' parameter controls (max tools returned), which goes beyond the schema's basic description. The example values in the query description further clarify usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to search the Pipeworx tool catalog by describing what you need. It specifies the action ('search'), the resource ('Pipeworx tool catalog'), and the expected outcome ('returns the most relevant tools'). This distinguishes it from sibling tools, which are all about specific data queries or memory functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on priority and context, differentiating it from other tools that perform specific actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
echo_compliance_history (Grade: B)
Check a facility's compliance record and enforcement timeline. Returns violation status, inspection dates, quarters in violation, and enforcement actions taken.
| Name | Required | Description | Default |
|---|---|---|---|
| registry_id | Yes | EPA Registry ID (from echo_facility_search results). | |
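
Since registry_id is expected to come from echo_facility_search results, a sketch in the same session pattern as above; the example ID in the comment is a made-up placeholder.

```python
from mcp import ClientSession


async def compliance_record(session: ClientSession, registry_id: str) -> None:
    # registry_id should come from a prior echo_facility_search call,
    # e.g. "110000000001" (a made-up placeholder, not a real facility).
    result = await session.call_tool(
        "echo_compliance_history", {"registry_id": registry_id}
    )
    print(result.content)
```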
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It lists the returned data (violation status, inspection dates, quarters in violation, enforcement actions) but does not specify whether the tool is read-only, any rate limits, or data freshness. For a data retrieval tool with no annotations, more detail is expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the purpose and listing returned data types efficiently. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description adequately covers what the tool does and what it returns. However, it lacks details on potential empty results, pagination, or filtering options that might be present in sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with one parameter (registry_id). The description adds context by noting the ID comes from echo_facility_search results, which is helpful but not essential beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves compliance and enforcement history for a specific EPA-regulated facility, listing specific data types returned (compliance status, quarters in violation, inspection dates, enforcement actions). This distinguishes it from sibling tools like echo_enforcement_actions and echo_violations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies the tool is for a specific EPA-regulated facility and mentions the required parameter (registry_id from echo_facility_search). However, it does not explicitly state when not to use it or compare with sibling tools like echo_enforcement_actions or echo_violations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
echo_enforcement_actions (Grade: A)
Retrieve enforcement cases against a facility. Returns action type, penalty amounts, dates, and settlement details.
| Name | Required | Description | Default |
|---|---|---|---|
| registry_id | Yes | EPA Registry ID (from echo_facility_search results). | |
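
The call shape mirrors echo_compliance_history; only the tool name changes. Same session pattern and placeholder ID caveat as above.

```python
from mcp import ClientSession


async def enforcement_cases(session: ClientSession, registry_id: str) -> None:
    # Returns action types, penalty amounts, dates, and settlement details.
    result = await session.call_tool(
        "echo_enforcement_actions", {"registry_id": registry_id}
    )
    print(result.content)
```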
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full burden. It discloses that the tool retrieves enforcement details and penalties, but does not mention authentication needs, rate limits, or any side effects. For a read-only tool, this is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that concisely state the purpose and list the key resource and data types. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema), the description provides essential information but lacks details on return format or pagination. For a straightforward retrieval tool, it is minimally complete but could mention if results are limited or require additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, explaining the single parameter 'registry_id' and its source. The description adds no further parameter details, but the schema already fully documents it. Baseline 3 is elevated to 4 because the tool has only one parameter and the schema description is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Retrieve'), identifies the resource ('enforcement cases against a facility'), and lists the returned data (action type, penalty amounts, dates, settlement details). It clearly differentiates from siblings like echo_compliance_history and echo_violations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after obtaining a registry_id from echo_facility_search, but does not explicitly state when to use this tool versus siblings. No alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
echo_facility_search (Grade: A)
Search EPA-regulated facilities by name, state, ZIP code, city, or industry code (e.g., "3211" for logging). Returns facility IDs, addresses, compliance status, and program affiliations.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | ZIP code. | |
| city | No | City name. | |
| limit | No | Max results to return (max 100). | 20 |
| naics | No | NAICS industry code. | |
| state | No | Two-letter state abbreviation (e.g., "CA"). | |
| facility_name | No | Facility name (partial match). | |
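
Every parameter here is optional, so a sketch combining a few filters (values chosen for illustration only), in the same session pattern as the first example:

```python
from mcp import ClientSession


async def search_facilities(session: ClientSession) -> None:
    # Combine optional filters to narrow results; limit defaults to 20.
    result = await session.call_tool(
        "echo_facility_search",
        {"state": "CA", "naics": "3211", "limit": 10},
    )
    print(result.content)  # registry IDs here feed the other echo_* tools
```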
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the types of data returned (registry IDs, addresses, compliance status, program affiliations) but does not reveal behavioral traits beyond what is implied. With no annotations, the description provides basic behavioral context but lacks details on rate limits, authentication needs, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that convey all necessary information without redundancy. It is concise and front-loaded with the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no required parameters and no output schema, the description adequately covers the search capabilities and return types. It is complete enough for an agent to understand the tool's functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the description adds little beyond summarizing the search fields. It does not elaborate on parameter semantics or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'EPA-regulated facilities', and lists specific search fields (name, state, ZIP, city, NAICS code) and return data (registry IDs, addresses, compliance status, program affiliations). It effectively distinguishes from sibling tools like echo_compliance_history or echo_enforcement_actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching facilities but provides no explicit guidance on when to use this tool versus alternatives like echo_search_by_violation. It does not specify that all parameters are optional or that at least one is recommended for narrowing results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
echo_search_by_violation (Grade: B)
Find facilities in significant non-compliance, filterable by state and/or program (water, air, waste). Returns facility IDs and violation status.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (max 100). | 20 |
| state | No | Two-letter state abbreviation (e.g., "TX"). | |
| program | No | Program filter: "CWA", "CAA", "RCRA", or "ALL". | "ALL" |
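
A hedged sketch of a filtered non-compliance search, using the program acronyms from the schema and the session pattern from the first example:

```python
from mcp import ClientSession


async def worst_offenders(session: ClientSession) -> None:
    # Clean Water Act violators in Texas; program defaults to "ALL" if omitted.
    result = await session.call_tool(
        "echo_search_by_violation",
        {"state": "TX", "program": "CWA", "limit": 10},
    )
    print(result.content)
```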
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It states the tool finds facilities in significant non-compliance and supports filtering, but does not disclose behavioral traits like data freshness, pagination, or the meaning of 'significant non-compliance'. The limit parameter is in the schema, so partial credit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loading the purpose. It is appropriately sized but could be more informative within the same length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description is somewhat complete for filtering but lacks guidance on return format, sorting, or result interpretation. It adequately covers the search purpose but leaves gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description mentions the 'state' and 'program' filters, matching the schema, but adds nothing beyond the schema's own descriptions (e.g., that 'state' is a two-letter abbreviation or that 'program' uses acronyms).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it finds facilities in significant non-compliance, specifying the resource (facilities) and action (search/find). The title echoes this, but it doesn't distinguish from sibling tools like 'echo_compliance_history' or 'echo_violations', which may overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering by state and/or program, giving usage context. However, it does not provide explicit guidance on when to use this tool vs alternatives like 'echo_violations' or 'echo_compliance_history', nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
echo_violations (Grade: A)
Get violation details for a facility, filterable by program (water, air, waste). Returns violation dates, types, and current status.
| Name | Required | Description | Default |
|---|---|---|---|
| program | No | Environmental program filter: "CWA" (water), "CAA" (air), or "RCRA" (waste). | "CWA" |
| registry_id | Yes | EPA Registry ID (from echo_facility_search results). | |
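
Because program defaults to CWA, air or waste violations require an explicit override, as in this sketch (same session pattern and placeholder registry ID as above):

```python
from mcp import ClientSession


async def air_violations(session: ClientSession, registry_id: str) -> None:
    # Override the CWA default to fetch Clean Air Act violations instead.
    result = await session.call_tool(
        "echo_violations",
        {"registry_id": registry_id, "program": "CAA"},
    )
    print(result.content)
```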
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the filter behavior but does not disclose details like pagination, rate limits, or what constitutes a 'detailed' record. A score of 3 is adequate as it covers the main function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two clear sentences that front-load the core action and list filter options succinctly. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two parameters, 100% schema coverage, and no output schema, the description is sufficient. It explains the purpose and filter options. It could mention the output type or source, but the sibling tools hint at ECHO data, so it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The tool description adds context by linking registry_id to echo_facility_search results and explaining the program parameter options (CWA, CAA, RCRA) with defaults, which goes beyond the schema's descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves violation records for a facility, with an optional filter by environmental program. This is a specific verb+resource combination that distinguishes it from sibling tools like echo_compliance_history or echo_enforcement_actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the optional filter by program, but does not explicitly state when to use this tool versus other violation-related siblings like echo_search_by_violation. However, the context suggests it is for a known facility, which is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
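
A one-line deletion sketch in the same session pattern; the key name is borrowed from remember's schema examples and is hypothetical.

```python
from mcp import ClientSession


async def drop_memory(session: ClientSession) -> None:
    # Irreversibly deletes the memory stored under this key.
    await session.call_tool("forget", {"key": "subject_property"})
```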
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It clearly states deletion, which implies irreversibility. It gives no details about side effects or confirmation, but that is acceptable for a simple key-based delete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence with no waste, front-loaded with the verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema, no nested objects), the description is complete enough; return-value details are unnecessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds no extra semantics beyond the schema's 'Memory key to delete'. The baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (delete) and the resource (stored memory by key), distinguishing it from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The implied usage is clear: call it when a memory needs deleting. There is no explicit when-not-to guidance or list of alternatives, but given the tool's simplicity that is adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
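
A sketch of both modes, again assuming the session pattern from the first example and the hypothetical key name used above:

```python
from mcp import ClientSession


async def read_memories(session: ClientSession) -> None:
    # With a key: fetch one memory. Without a key: list all stored keys.
    one = await session.call_tool("recall", {"key": "subject_property"})
    all_keys = await session.call_tool("recall", {})
    print(one.content, all_keys.content)
```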
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that memories persist across sessions ('saved earlier in the session or in previous sessions'), which is critical behavioral context not captured in any annotations. However, it does not mention whether retrieval is read-only or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the primary purpose, no wasted words. Every clause adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with a single optional parameter and no output schema, the description fully explains the two modes of operation and the persistence behavior. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter described clearly. The description restates the dual behavior (omit the key to list all) already noted in the schema, and adds context about cross-session persistence.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and resource 'stored memory', with explicit dual functionality: single key retrieval or listing all keys when omitted. This distinguishes it from siblings like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies when to use (retrieve context saved earlier) and implies when to omit key (to list all). However, it does not explicitly exclude use cases or mention alternatives beyond the sibling context implied by the tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
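
Closing the loop on the memory trio, a store sketch in the same session pattern; the key comes from the schema's examples and the value is invented for illustration.

```python
from mcp import ClientSession


async def save_finding(session: ClientSession) -> None:
    # Persistent for authenticated users; kept 24 hours for anonymous sessions.
    await session.call_tool(
        "remember",
        {"key": "subject_property", "value": "123 Main St, Springfield"},
    )
```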
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours) and the nature of the operation (store/save). This is good behavioral context beyond just 'store a key-value pair.' However, it does not mention if overwriting an existing key is allowed or if there are size limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core purpose. The first sentence states the action, the second adds usage guidance, and the third covers persistence. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 simple params, no output schema, no annotations), the description covers the key aspects: purpose, typical usage, and persistence behavior. It lacks mention of overwrite behavior or limits, but for a straightforward memory store this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters with examples. The description adds no additional parameter-level detail beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying the resource (session memory) and verb (store). It distinguishes itself from siblings 'forget' and 'recall' by focusing on writing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this to save intermediate findings, user preferences, or context across tool calls,' which provides clear when-to-use guidance. It also mentions persistence differences for authenticated vs anonymous users, but does not explicitly say when not to use it or compare to alternatives beyond the implicit contrast with recall/forget.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.