EPA Emissions
Server Details
EPA Emissions MCP — wraps EPA Envirofacts REST API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-epa-emissions
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 12 of 12 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes: memory management, general query, entity resolution, and specific emission/release queries. Some overlap exists between ghg and tri tools, but descriptions clarify different scopes (sector vs facility vs chemical vs trends).
Tool names use snake_case but mix verb-first (ask_pipeworx, compare_entities, discover_tools, forget, recall, remember, resolve_entity) with noun-first (ghg_emissions_by_sector, ghg_facility_emissions, tri_chemical_releases, tri_facility_releases, tri_trends), breaking a consistent pattern.
With 12 tools, the count is well-scoped for an emissions data server. Each tool serves a clear purpose without redundancy, and the inclusion of memory and discovery tools adds value without bloating.
The tool set covers major aspects of emissions data: GHGs by sector and facility, TRI chemicals and facilities, and trends. Minor gaps exist (e.g., no direct emissions by zip code or year-range filter) but core workflows are supported.
Available Tools
13 tools
ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
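As a sketch of how an MCP client would invoke this tool, a standard JSON-RPC tools/call request might look like the following; the question text is illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "Which Texas facilities reported the highest greenhouse gas emissions?"
    }
  }
}
```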
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool dynamically selects the best data source and fills arguments, which is important behavioral information. No annotations are provided, so the description carries the full burden. It does not mention limitations, error handling, or scope constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three sentences and a list of examples. Every sentence adds value: the first defines the core function, the second explains the automation, the third gives usage guidance. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description adequately covers what it does and how to use it. It could mention that results are returned as text, but that is not essential. Sibling tools are specific, so this tool's general nature is clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one required 'question' parameter. The description adds meaning by explaining that the question should be in plain English and that the tool will handle the rest, going beyond the schema's minimal description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool answers natural language questions by selecting the best data source and filling arguments. It gives concrete examples like 'What is the US trade deficit with China?', which differentiates it from sibling tools that are more specific (e.g., 'ghg_emissions_by_sector').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it: for any question in plain English without needing to browse tools. It implicitly contrasts with using specific tools directly, though it doesn't explicitly say when not to use it or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade A)
Compare 2–5 entities side by side in one call. type="company": revenue, net income, cash, long-term debt from SEC EDGAR. type="drug": adverse-event report count, FDA approval count, active trial count. Returns paired data + pipeworx:// resource URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
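A hypothetical company comparison, reusing tickers from the schema's own examples plus one invented addition, could be shaped like this (only the params portion of the JSON-RPC request is shown):

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT", "NVDA"]
  }
}
```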
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes data sources and specific metrics returned for each type, but does not mention authorization needs, rate limits, or read-only nature, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core function and efficiently covers the type-specific outputs, return format, and efficiency claim, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two required parameters, no output schema, and no annotations, the description explains inputs and outputs adequately, though the exact structure of 'paired data' could be more specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). The description adds significant meaning by explaining what each type returns (financial metrics for company, regulatory metrics for drug) and the format of values (tickers/CIKs vs. drug names), going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Compare 2–5 entities side by side in one call' with specific details for company and drug types, and mentions replacing 8–15 sequential agent calls, effectively distinguishing it from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use or avoid this tool, nor does it mention alternatives among siblings. It implies usage for comparing multiple entities but lacks explicit guidance on context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
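For instance, an agent searching a large catalog for emissions tooling might issue a query like the sketch below; the query string is invented.

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find toxic chemical releases by state",
    "limit": 10
  }
}
```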
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the tool returns 'the most relevant tools with names and descriptions', but doesn't detail any other behavioral aspects like whether it's read-only, side effects, or authorization needs. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences with no wasted words. The first sentence immediately states the core purpose, and the second provides critical usage guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects), the description is nearly complete. It covers purpose, usage guidance, and return content. Missing minor behavioral details like pagination or error behavior, but those are low impact for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond the schema—it reiterates the query parameter's purpose and notes the default and max for limit. No additional usage hints or examples beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and specifies the purpose: finding relevant tools among 500+ options. It distinguishes itself by telling the agent to call this FIRST, differentiating it from sibling tools that perform specific data queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on when to use it and implies it's a discovery step before using other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
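A minimal call sketch; the key name is borrowed from the remember tool's schema examples and is hypothetical here.

```json
{
  "name": "forget",
  "arguments": {
    "key": "subject_property"
  }
}
```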
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Does not state whether deletion is irreversible, what happens to related data, or any side effects. 'Delete' implies destructiveness but no further detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence. No wasted words, but it could benefit from a little more context at minimal added length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with one parameter and no output schema, so description need not be long. However, it lacks behavioral details (irreversibility, confirmation) and usage context relative to siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so schema already documents the key parameter. Description adds no extra meaning beyond 'Memory key to delete', which is redundant with schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb (Delete) and resource (stored memory by key). Distinguishes from 'recall' and 'remember' siblings by implying it is the removal action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like 'recall' or 'remember'. Does not mention any prerequisites or conditions for deletion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ghg_emissions_by_sector (Grade A)
Get greenhouse gas emissions by industry sector for a state (e.g., 'Power Plants', 'Chemicals'). Returns sector totals and breakdowns in metric tons CO2-equivalent.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20, max 100). | |
| state | Yes | Full state name (e.g., "Texas"). | |
| sector | No | Industry type filter (e.g., "Power Plants", "Petroleum and Natural Gas Systems", "Chemicals"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| state | Yes | State name searched |
| sectors | Yes | Emissions aggregated by industry sector |
| facilities | Yes | Complete facility records |
| facility_count | Yes | Total number of facilities returned |
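Putting the input and output schemas together, a sector-filtered query might be issued as below; the state and sector values come from the documented examples, and the response would carry the state, sectors, facilities, and facility_count fields listed above.

```json
{
  "name": "ghg_emissions_by_sector",
  "arguments": {
    "state": "Texas",
    "sector": "Power Plants",
    "limit": 10
  }
}
```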
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It discloses that it returns GHG emissions by sector and supports optional filtering. However, it does not mention whether the data is historical, current, or has any refresh cadence. It also doesn't specify if the tool is read-only or if any side effects exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the core purpose and optional filtering. No extraneous words. It is front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no annotations, the description carries that burden, while the published output schema fills in the return structure (sectors, facilities, facility_count). The description already names the units (metric tons CO2-equivalent), but it does not mention pagination behavior or what happens when no results are found. For a straightforward data retrieval tool with three simple parameters, the description is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all three parameters have descriptions. The description adds context by stating the tool can filter by sector type and provides example values like 'Power Plants' and 'Chemicals', which go beyond the schema's description. The 'limit' parameter's default and max are also mentioned.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves greenhouse gas emissions by industry sector for a state. The verb 'Get' combined with 'greenhouse gas emissions by industry sector' specifies the resource and action. It also distinguishes from sibling tools like ghg_facility_emissions (which focuses on facilities) and tri_chemical_releases (which is a different program).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when needing sector-level GHG data for a state, with optional filtering by sector type. However, it does not explicitly state when to use this tool versus alternatives like ghg_facility_emissions or tri_chemical_releases. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ghg_facility_emissions (Grade A)
Find greenhouse gas emissions from specific facilities by state and facility name. Returns location, type, and total CO2-equivalent emissions in metric tons.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20, max 100). | |
| state | Yes | Full state name (e.g., "Texas", "California"). | |
| facility_name | No | Facility name to search for (partial match using CONTAINING). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of facilities returned |
| state | Yes | State name searched |
| facilities | Yes | Array of facility records with emissions data |
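An illustrative call exercising the documented partial-match behavior; the facility_name value is hypothetical.

```json
{
  "name": "ghg_facility_emissions",
  "arguments": {
    "state": "California",
    "facility_name": "refinery",
    "limit": 5
  }
}
```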
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the search behavior (state required, facility name optional partial match), and the output is specified as facility details and emissions. No contradictory statements. It could mention rate limits or data freshness but is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and scope, no wasted words. Efficiently conveys the tool's function and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three simple parameters and a published output schema, the description adequately covers what the tool does and returns: location, type, and total CO2-equivalent emissions in metric tons. It could be improved by noting pagination or sorting, but that is not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining the facility_name is a partial match using CONTAINING, which is not in the schema description. It also clarifies that emissions are in 'metric tons CO2 equivalent', adding semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches GHG emissions by state and optionally facility name, returning facility details and total emissions in metric tons CO2 equivalent. This is specific and distinct from sibling tools like 'ghg_emissions_by_sector' which focuses on sector breakdowns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for state-based GHG facility searches with optional facility name filtering. While it doesn't explicitly state when not to use it or provide alternatives, the context from sibling tools helps distinguish it. No explicit exclusions or when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade A)
Send feedback to the Pipeworx team. Use for bug reports, feature requests, missing data, or praise. Describe what you tried in terms of Pipeworx tools/data — do not include the end-user's prompt verbatim. Rate-limited to 5 messages per identifier per day. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
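A plausible data_gap report might look like the following; the message text is invented, and the optional context field is omitted because its exact structure is not documented here.

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "tri_trends cannot filter by county; only state and chemical filters are available."
  }
}
```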
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries the full burden. It discloses rate limiting and that it is free. However, it does not describe what happens after sending (e.g., async, no confirmation). Still, for a feedback tool, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, and each sentence adds meaningful information without redundancy. Very concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 params, one nested) and lack of output schema, the description covers necessary behavioral context (rate limiting, usage instructions). Could mention that feedback is sent asynchronously, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage. Description adds value by elaborating on enum values (bug, feature, data_gap, praise, other) and providing guidelines for the 'message' field (be specific, 1-2 sentences, 2000 chars max). Enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Send feedback to the Pipeworx team' and lists specific use cases (bug reports, feature requests, missing data, praise). It distinguishes itself from sibling tools by being the explicit feedback channel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance (feedback of various types) and explicit instructions on what not to include (end-user prompt verbatim). Mentions rate limit (5 per day). Could further clarify when not to use (e.g., for questions), but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
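Two call sketches, shown as a JSON array purely for compactness (each element is the params of a separate call): the first retrieves one key, the second omits the key to list everything stored. The key name is borrowed from the remember tool's examples.

```json
[
  { "name": "recall", "arguments": { "key": "target_ticker" } },
  { "name": "recall", "arguments": {} }
]
```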
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description explains key behavior: omit key to list all, provide key to retrieve specific memory. Clear about read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and alternatives. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description could mention return format (e.g., value string or list of keys). However, behavior is straightforward and sibling tools cover related operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter described as 'Memory key to retrieve (omit to list all keys)'. Description adds clarity on omit behavior, but schema already covers meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves a memory by key or lists all memories when key is omitted. Distinguishes from 'remember' (store) and 'forget' (delete) sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to use for retrieving context saved earlier. Could be improved by noting when not to use (e.g., prefer 'ask_pipeworx' for general queries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
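An illustrative store operation; the key comes from the schema's examples and the value is invented.

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL - comparing its GHG facility emissions against MSFT"
  }
}
```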
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so description must fully cover behavioral traits. It explains persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is valuable. However, it does not mention any limits (e.g., max memory size, number of keys) or potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with three sentences, each adding value: purpose, usage scenarios, and persistence detail. It is front-loaded with the core action. Could be slightly more compact, but no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema), the description adequately covers purpose, usage, and persistence behavior. The lack of output schema is acceptable since return values are implicit. Minor gap: no mention of overwriting behavior or maximum key length.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters ('key' and 'value'). The description adds context by listing example keys ('subject_property', 'target_ticker') and explaining value can be 'any text', but does not add significant new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states the tool stores a key-value pair in session memory, specifying the exact resource ('session memory') and action ('store'). It clearly distinguishes from sibling tools like 'recall' and 'forget' which handle retrieval and deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides explicit use cases ('save intermediate findings, user preferences, or context across tool calls') and notes differences between authenticated and anonymous sessions. However, it does not explicitly state when not to use it or mention alternatives beyond the sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade A)
Resolve an entity to canonical IDs across Pipeworx data sources in a single call. Supports type="company" (ticker/CIK/name → SEC EDGAR identity) and type="drug" (brand or generic name → RxCUI + ingredient + brand). Returns IDs and pipeworx:// resource URIs for stable citation. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
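A sketch of a drug resolution, following the documented brand-or-generic-name path; the drug name comes from the schema's examples.

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```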
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the input types and output fields but lacks details on error handling, authentication needs, rate limits, or behavior for non-existent entities. The basic behavior is clear but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at four short sentences. It front-loads the main purpose and efficiently packs key details (supported types, input formats, output, and value proposition) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description provides a decent overview of inputs and outputs. It could be more complete by describing error cases or performance implications, but for a simple lookup tool, it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers both parameters with descriptions, achieving 100% coverage. The description adds value by explaining that 'value' can be a ticker, CIK, or name for companies, or a brand or generic name for drugs, with specific examples. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves entities to canonical IDs across Pipeworx data sources, with specific input examples (ticker, CIK, name) and outputs (canonical IDs and pipeworx:// resource URIs). It distinguishes itself from potential sibling tools by highlighting its single-call efficiency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (instead of 2-3 lookup calls) and provides clear context. While it does not explicitly state when not to use it, the uniqueness among siblings and the clear purpose make the usage straightforward.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tri_chemical_releases (Grade A)
Track toxic chemical releases by chemical name and state. Returns quantities released to air, water, and land broken down by year.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20, max 100). | |
| state | No | Two-letter state abbreviation to filter by (optional). | |
| chemical | Yes | Chemical name (e.g., "LEAD", "MERCURY", "BENZENE", "TOLUENE"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of release records returned |
| chemical | Yes | Chemical name (uppercase) |
| releases | Yes | Array of toxic release records by chemical |
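Combining the parameter table and output schema, a state-filtered query might look like this; per the output schema, the chemical field in the response comes back uppercase.

```json
{
  "name": "tri_chemical_releases",
  "arguments": {
    "chemical": "BENZENE",
    "state": "TX",
    "limit": 20
  }
}
```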
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It correctly indicates the filtering and the quantities returned by media, which is helpful. However, it does not disclose pagination behavior (a limit parameter exists but is not mentioned), ordering, or any other behavioral traits beyond basic filtering. There is no contradiction with annotations, as annotations are empty.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose and filtering, second states output. No redundancy. Front-loaded with main action. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three simple parameters, a published output schema, and no annotations, the description covers purpose, filtering, and output format. It lacks mention of the default limit or pagination behavior, but is sufficient for this level of complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds context for 'chemical' parameter with examples but does not add meaning beyond schema for 'state' or 'limit'. Acceptable given full schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool tracks toxic chemical releases by chemical name, optionally filtered by state, and returns media-specific quantities by year. This distinguishes it from siblings like tri_facility_releases (facility-focused) and tri_trends (trend-focused).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (when searching chemical releases by chemical and optionally state) and what it returns. It does not explicitly mention when not to use it or alternatives among siblings, but the sibling names (e.g., tri_facility_releases, tri_trends) provide implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tri_facility_releases (Grade A)
Search toxic chemical release facilities by state. Returns facility location, type, and chemicals released with quantities in pounds.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20, max 100). | |
| state | Yes | Two-letter state abbreviation (e.g., "TX", "CA"). | |
| facility_name | No | Facility name to search for (partial match). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of facilities returned |
| state | Yes | Two-letter state abbreviation searched |
| facilities | Yes | Array of TRI facility records |
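An example request for Ohio facilities with the optional facility_name filter left out; the response would carry the count, state, and facilities fields listed above.

```json
{
  "name": "tri_facility_releases",
  "arguments": {
    "state": "OH",
    "limit": 10
  }
}
```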
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states that the tool returns facility location, type, and chemicals released with quantities in pounds. It does not disclose whether results are paginated, how partial matching works, or any rate limits. No contradictions found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the action. Every word serves a purpose. It could be slightly more structured, but it is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, the description plus the output schema provide a basic understanding but lack details on pagination, sorting, or error conditions. The input schema covers 100% of parameters, so parameters are well-documented. Complete enough for a simple search tool, but could be more robust.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal context by noting 'partial match' for facility_name, which is already in the schema. It does not elaborate on the returned data beyond naming the fields (location, type, chemicals released, quantities in pounds). A slight improvement over the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'search', the resource (TRI facilities), and the scope (by state). It distinguishes itself from sibling tools like 'tri_chemical_releases' and 'ghg_facility_emissions' by specifying the TRI program and its facility focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding facilities in a state, and the schema shows state is required. No explicit guidance on when to use this tool vs. alternatives like 'tri_chemical_releases' or 'tri_trends'. No exclusion criteria or when-not-to-use information provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tri_trends (Grade A)
Analyze toxic release trends over time by state or chemical. Returns historical release data across years to identify patterns and changes.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Two-letter state abbreviation (e.g., "OH"). | |
| chemical | No | Chemical name (e.g., "LEAD"). | |
| end_year | No | End year for the trend range (default: most recent available). | |
| start_year | No | Start year for the trend range (default: 5 years ago). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| state | Yes | State abbreviation or null if not filtered |
| trends | Yes | Year-by-year release totals and facility counts |
| chemical | Yes | Chemical name (uppercase) or null if not filtered |
| end_year | Yes | Last year in the trend range |
| start_year | Yes | First year in the trend range |
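A sketch of a multi-year trend query; the year range is illustrative, and both filters are optional per the schema.

```json
{
  "name": "tri_trends",
  "arguments": {
    "chemical": "LEAD",
    "state": "OH",
    "start_year": 2018,
    "end_year": 2023
  }
}
```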
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns historical release data across years, and the output schema clarifies the aggregation (year-by-year totals and facility counts). There are no contradictions with annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. First sentence states purpose and scope, second sentence adds key behavioral detail. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains that the tool returns historical data across years to surface patterns, and the output schema specifies the year-by-year structure, which is adequate for a trend tool. Slightly light on error behavior, but sufficient for the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions are already clear. The description adds no additional meaning beyond the schema, which is acceptable given high coverage. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Analyze) and resource (toxic release trends), specifies the scope (state or chemical, across reporting years), and distinguishes the tool from siblings like tri_chemical_releases and tri_facility_releases, which are about specific releases rather than trends.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use when the user wants trends over time for a state or chemical, but does not explicitly state when not to use it or mention alternative tools for specific queries. Sibling tools are available for specific releases, which is clear from their names, but not directly referenced.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.