
Server Details

EPA Emissions MCP — wraps EPA Envirofacts REST API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-epa-emissions
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (grade A)

Average 4/5 across 12 of 12 tools scored. Lowest: 2.9/5.

Server Coherence (grade A)
Disambiguation: 4/5

Most tools have distinct purposes: memory management, general query, entity resolution, and specific emission/release queries. Some overlap exists between ghg and tri tools, but descriptions clarify different scopes (sector vs facility vs chemical vs trends).

Naming Consistency: 3/5

Tool names use snake_case but mix verb-first (ask_pipeworx, compare_entities, discover_tools, forget, recall, remember, resolve_entity) with noun-first (ghg_emissions_by_sector, ghg_facility_emissions, tri_chemical_releases, tri_facility_releases, tri_trends), breaking a consistent pattern.

Tool Count: 5/5

With 12 tools, the count is well-scoped for an emissions data server. Each tool serves a clear purpose without redundancy, and the inclusion of memory and discovery tools adds value without bloating.

Completeness: 4/5

The tool set covers major aspects of emissions data: GHGs by sector and facility, TRI chemicals and facilities, and trends. Minor gaps exist (e.g., no direct emissions by zip code or year-range filter) but core workflows are supported.

Available Tools (13 tools)
ask_pipeworx (grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
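With a single required parameter, the call shape is trivial. A minimal sketch of the MCP tools/call payload an agent would send (the JSON-RPC 2.0 framing is standard MCP; the transport and server URL wiring are omitted):

```python
import json

# Hypothetical MCP "tools/call" request for ask_pipeworx. Only the payload
# shape is shown; how it is delivered to the server is out of scope here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request["params"], indent=2))
```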
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool dynamically selects the best data source and fills arguments, which is important behavioral information. No annotations are provided, so the description carries the full burden. It does not mention limitations, error handling, or scope constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with three sentences and a list of examples. Every sentence adds value: the first defines the core function, the second explains the automation, the third gives usage guidance. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description adequately covers what it does and how to use it. It could mention that results are returned in text form, but that is not necessary. Sibling tools are specific, so this tool's general nature is clear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one required 'question' parameter. The description adds meaning by explaining that the question should be in plain English and that the tool will handle the rest, going beyond the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool answers natural language questions by selecting the best data source and filling arguments. It gives concrete examples like 'What is the US trade deficit with China?', which differentiates it from sibling tools that are more specific (e.g., 'ghg_emissions_by_sector').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use it: for any question in plain English without needing to browse tools. It implicitly contrasts with using specific tools directly, though it doesn't explicitly say when not to use it or list alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (grade A)

Compare 2–5 entities side by side in one call. type="company": revenue, net income, cash, long-term debt from SEC EDGAR. type="drug": adverse-event report count, FDA approval count, active trial count. Returns paired data + pipeworx:// resource URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
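The 2–5 bound on values can be checked client-side before the call. A sketch of such a pre-flight check (the helper name and error messages are illustrative, not part of the server):

```python
# Hypothetical pre-flight validation for compare_entities arguments,
# mirroring the documented constraints: type enum and 2-5 values.
def build_compare_arguments(entity_type, values):
    if entity_type not in ("company", "drug"):
        raise ValueError('type must be "company" or "drug"')
    if not 2 <= len(values) <= 5:
        raise ValueError("compare_entities accepts 2 to 5 values")
    return {"type": entity_type, "values": values}

args = build_compare_arguments("company", ["AAPL", "MSFT"])
```

Validating locally avoids spending a round trip on a request the server would reject.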
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes data sources and specific metrics returned for each type, but does not mention authorization needs, rate limits, or read-only nature, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, front-loaded with core function, efficiently covers type-specific outputs, return format, and efficiency claim—no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two required parameters, no output schema, and no annotations, the description explains inputs and outputs adequately, though the exact structure of 'paired data' could be more specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). The description adds significant meaning by explaining what each type returns (financial metrics for company, regulatory metrics for drug) and the format of values (tickers/CIKs vs. drug names), going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Compare 2–5 entities side by side in one call' with specific details for company and drug types, and mentions replacing 8–15 sequential agent calls, effectively distinguishing it from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use or avoid this tool, nor does it mention alternatives among siblings. It implies usage for comparing multiple entities but lacks explicit guidance on context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
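A sketch of how a client might normalize limit against the documented bounds before calling (the clamping policy is an assumption; the server may instead reject out-of-range limits):

```python
# Normalize discover_tools arguments against the documented default (20)
# and maximum (50). Clamping rather than erroring is a local choice here.
def discover_arguments(query, limit=None):
    if limit is None:
        limit = 20                       # documented default
    limit = max(1, min(limit, 50))       # documented max is 50
    return {"query": query, "limit": limit}
```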
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains the tool returns 'the most relevant tools with names and descriptions', but doesn't detail any other behavioral aspects like whether it's read-only, side effects, or authorization needs. The description is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at two sentences with no wasted words. The first sentence immediately states the core purpose, and the second provides critical usage guidance. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema, no nested objects), the description is nearly complete. It covers purpose, usage guidance, and return content. Missing minor behavioral details like pagination or error behavior, but those are low impact for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond the schema—it reiterates the query parameter's purpose and notes the default and max for limit. No additional usage hints or examples beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and specifies the purpose: finding relevant tools among 500+ options. It distinguishes itself by telling the agent to call this FIRST, differentiating it from sibling tools that perform specific data queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on when to use it and implies it's a discovery step before using other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (grade C)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Does not state whether deletion is irreversible, what happens to related data, or any side effects. 'Delete' implies destructiveness but no further detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence. No wasted words, though it could add useful context (e.g., irreversibility) without much extra length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is simple with one parameter and no output schema, so description need not be long. However, it lacks behavioral details (irreversibility, confirmation) and usage context relative to siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so schema already documents the key parameter. Description adds no extra meaning beyond 'Memory key to delete', which is redundant with schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb (Delete) and resource (stored memory by key). Distinguishes from 'recall' and 'remember' siblings by implying it is the removal action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like 'recall' or 'remember'. Does not mention any prerequisites or conditions for deletion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ghg_emissions_by_sector (grade A)

Get greenhouse gas emissions by industry sector for a state (e.g., 'Power Plants', 'Chemicals'). Returns sector totals and breakdowns in metric tons CO2-equivalent.

Parameters (JSON Schema)
- limit (optional): Max results (default 20, max 100).
- state (required): Full state name (e.g., "Texas").
- sector (optional): Industry type filter (e.g., "Power Plants", "Petroleum and Natural Gas Systems", "Chemicals").

Output Schema
- state (required): State name searched
- sectors (required): Emissions aggregated by industry sector
- facilities (required): Complete facility records
- facility_count (required): Total number of facilities returned
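The output schema implies a per-sector aggregation over facility records. A sketch of that shape with made-up records (the field names below are assumptions matching the documented output keys):

```python
from collections import defaultdict

# Illustrative facility records (entirely made up) rolled up into the
# documented output shape: state, sectors, facilities, facility_count.
facilities = [
    {"name": "Plant A", "sector": "Power Plants", "co2e_metric_tons": 1_200_000},
    {"name": "Plant B", "sector": "Power Plants", "co2e_metric_tons": 800_000},
    {"name": "Works C", "sector": "Chemicals", "co2e_metric_tons": 300_000},
]

sectors = defaultdict(int)
for f in facilities:
    sectors[f["sector"]] += f["co2e_metric_tons"]

result = {
    "state": "Texas",
    "sectors": dict(sectors),           # emissions aggregated by sector
    "facilities": facilities,           # complete facility records
    "facility_count": len(facilities),
}
```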
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It discloses that it returns GHG emissions by sector and supports optional filtering. However, it does not mention whether the data is historical, current, or has any refresh cadence. It also doesn't specify if the tool is read-only or if any side effects exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core purpose and optional filtering. No extraneous words. It is front-loaded with the key action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and no annotations, the description could be more complete. It does not describe the format of the returned data (e.g., total emissions per sector, units like CO2 equivalents). It also doesn't mention pagination behavior or what happens when no results are found. However, for a straightforward data retrieval tool with three simple parameters, the description is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all three parameters have descriptions. The description adds context by stating the tool can filter by sector type and provides example values like 'Power Plants' and 'Chemicals', which go beyond the schema's description. The 'limit' parameter's default and max are also mentioned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves greenhouse gas emissions by industry sector for a state. The verb 'Get' combined with 'greenhouse gas emissions by industry sector' specifies the resource and action. It also distinguishes from sibling tools like ghg_facility_emissions (which focuses on facilities) and tri_chemical_releases (which is a different program).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing sector-level GHG data for a state, with optional filtering by sector type. However, it does not explicitly state when to use this tool versus alternatives like ghg_facility_emissions or tri_chemical_releases. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ghg_facility_emissions (grade A)

Find greenhouse gas emissions from specific facilities by state and facility name. Returns location, type, and total CO2-equivalent emissions in metric tons.

Parameters (JSON Schema)
- limit (optional): Max results (default 20, max 100).
- state (required): Full state name (e.g., "Texas", "California").
- facility_name (optional): Facility name to search for (partial match using CONTAINING).

Output Schema
- count (required): Number of facilities returned
- state (required): State name searched
- facilities (required): Array of facility records with emissions data
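The facility_name parameter is documented as a partial match (CONTAINING). A rough local analogue is a case-insensitive substring test; the sample records and the exact server-side matching semantics are assumptions:

```python
# Approximate the documented CONTAINING behavior: None means no filter
# (facility_name is optional), otherwise match as a substring.
def name_matches(filter_text, record):
    if filter_text is None:
        return True
    return filter_text.lower() in record["name"].lower()

records = [
    {"name": "W.A. Parish Generating Station"},
    {"name": "Baytown Refinery"},
]
hits = [r for r in records if name_matches("parish", r)]
```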
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the search behavior (state required, facility name optional partial match), and the output is specified as facility details and emissions. No contradictory statements. It could mention rate limits or data freshness but is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and scope, no wasted words. Efficiently conveys the tool's function and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 3 parameters, the description adequately covers what the tool does and returns. It mentions returns 'facility details and total GHG emissions', which is sufficient for an agent to understand the output structure. Could be improved by noting pagination or sorting, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining the facility_name is a partial match using CONTAINING, which is not in the schema description. It also clarifies that emissions are in 'metric tons CO2 equivalent', adding semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches GHG emissions by state and optionally facility name, returning facility details and total emissions in metric tons CO2 equivalent. This is specific and distinct from sibling tools like 'ghg_emissions_by_sector' which focuses on sector breakdowns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for state-based GHG facility searches with optional facility name filtering. While it doesn't explicitly state when not to use it or provide alternatives, the context from sibling tools helps distinguish it. No explicit exclusions or when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (grade A)

Send feedback to the Pipeworx team. Use for bug reports, feature requests, missing data, or praise. Describe what you tried in terms of Pipeworx tools/data — do not include the end-user's prompt verbatim. Rate-limited to 5 messages per identifier per day. Free.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
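Both documented constraints (the type enum and the 2000-character message cap) are checkable before sending. A hypothetical pre-flight sketch; the helper name is not part of the server:

```python
# Enforce the documented pipeworx_feedback constraints locally:
# type must be one of the enum values, message is capped at 2000 chars.
VALID_TYPES = {"bug", "feature", "data_gap", "praise", "other"}

def feedback_arguments(kind, message, context=None):
    if kind not in VALID_TYPES:
        raise ValueError("type must be one of: " + ", ".join(sorted(VALID_TYPES)))
    if len(message) > 2000:
        raise ValueError("message exceeds 2000 characters")
    args = {"type": kind, "message": message}
    if context is not None:
        args["context"] = context
    return args
```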
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries the full burden. It discloses rate limiting and that it is free. However, it does not describe what happens after sending (e.g., async, no confirmation). Still, for a feedback tool, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, efficiently front-loaded with purpose. Each sentence adds meaningful information without redundancy. Very concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 params, one nested) and lack of output schema, the description covers necessary behavioral context (rate limiting, usage instructions). Could mention that feedback is sent asynchronously, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage. Description adds value by elaborating on enum values (bug, feature, data_gap, praise, other) and providing guidelines for the 'message' field (be specific, 1-2 sentences, 2000 chars max). Enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send feedback to the Pipeworx team' and lists specific use cases (bug reports, feature requests, missing data, praise). It distinguishes itself from sibling tools by being the explicit feedback channel.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear when-to-use guidance (feedback of various types) and explicit instructions on what not to include (end-user prompt verbatim). Mentions rate limit (5 per day). Could further clarify when not to use (e.g., for questions), but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description explains key behavior: omit key to list all, provide key to retrieve specific memory. Clear about read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and alternatives. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description could mention return format (e.g., value string or list of keys). However, behavior is straightforward and sibling tools cover related operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter described as 'Memory key to retrieve (omit to list all keys)'. Description adds clarity on omit behavior, but schema already covers meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves a memory by key or lists all memories when key is omitted. Distinguishes from 'remember' (store) and 'forget' (delete) sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to use for retrieving context saved earlier. Could be improved by noting when not to use (e.g., prefer 'ask_pipeworx' for general queries).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
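A minimal in-memory model of the remember/recall/forget trio, showing the key-value semantics the three descriptions imply. Session persistence, authentication, and the 24-hour anonymous TTL are not modeled:

```python
# Local sketch of the memory tools' semantics; not the server implementation.
memory = {}

def remember(key, value):
    memory[key] = value            # a later remember() overwrites the key

def recall(key=None):
    if key is None:
        return sorted(memory)      # omit key to list all stored keys
    return memory.get(key)

def forget(key):
    memory.pop(key, None)          # deletion; presumably irreversible

remember("target_ticker", "AAPL")
```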
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so description must fully cover behavioral traits. It explains persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is valuable. However, it does not mention any limits (e.g., max memory size, number of keys) or potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise with three sentences, each adding value: purpose, usage scenarios, and persistence detail. It is front-loaded with the core action. Could be slightly more compact, but no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema), the description adequately covers purpose, usage, and persistence behavior. The lack of output schema is acceptable since return values are implicit. Minor gap: no mention of overwriting behavior or maximum key length.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('key' and 'value'). The description adds context by listing example keys ('subject_property', 'target_ticker') and explaining value can be 'any text', but does not add significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states the tool stores a key-value pair in session memory, specifying the exact resource ('session memory') and action ('store'). It clearly distinguishes from sibling tools like 'recall' and 'forget' which handle retrieval and deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides explicit use cases ('save intermediate findings, user preferences, or context across tool calls') and notes differences between authenticated and anonymous sessions. However, it does not explicitly state when not to use it or mention alternatives beyond the sibling names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
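To make the evaluation concrete, here is a sketch of what a call to the remember tool might look like over MCP's JSON-RPC `tools/call` method. The envelope follows the MCP convention; the argument names ('key', 'value') and the example key come from the schema and description discussed above, but the client plumbing is hypothetical.

```python
import json

# Hypothetical MCP "tools/call" payload for the remember tool.
# Argument names ("key", "value") come from the tool's schema;
# "target_ticker" is an example key cited in the description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "target_ticker",  # example key from the description
            "value": "AAPL",         # value can be "any text"
        },
    },
}

print(json.dumps(request, indent=2))
```

Note that nothing in the schema says what happens when the same key is written twice, which is exactly the overwriting gap the Completeness score flags.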

resolve_entityA

Resolve an entity to canonical IDs across Pipeworx data sources in a single call. Supports type="company" (ticker/CIK/name → SEC EDGAR identity) and type="drug" (brand or generic name → RxCUI + ingredient + brand). Returns IDs and pipeworx:// resource URIs for stable citation. Replaces 2–3 lookup calls.

ParametersJSON Schema
NameRequiredDescriptionDefault
typeYesEntity type: "company" or "drug".
valueYesFor company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
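A sketch of how a client might construct arguments for the two documented entity types. Only the `type` and `value` fields and the example values come from the schema above; the `validate` helper is a hypothetical client-side check, not part of the server's API.

```python
# Example argument payloads for resolve_entity, using values from
# the schema above.
company_args = {"type": "company", "value": "0000320193"}  # CIK form
drug_args = {"type": "drug", "value": "metformin"}         # generic-name form

def validate(args: dict) -> bool:
    # Mirror the schema's two required fields and the documented type enum.
    return args.get("type") in {"company", "drug"} and bool(args.get("value"))

assert validate(company_args)
assert validate(drug_args)
```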
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses the input types and output fields but lacks details on error handling, authentication needs, rate limits, or behavior for non-existent entities. The basic behavior is clear but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at four short sentences. It front-loads the main purpose and efficiently packs key details (supported entity types, input formats, output, and value proposition) without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description provides a decent overview of inputs and outputs. It could be more complete by describing error cases or performance implications, but for a simple lookup tool, it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers both parameters with descriptions, achieving 100% coverage. The description adds value by explaining that 'value' can be a ticker, CIK, or name for companies, or a brand or generic name for drugs, with specific examples. This enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves entities to canonical IDs across Pipeworx data sources, with specific input examples (ticker, CIK, or name for companies; brand or generic name for drugs) and outputs (canonical IDs and pipeworx:// resource URIs). It distinguishes itself from potential sibling tools by highlighting its single-call efficiency.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (instead of 2-3 lookup calls) and provides clear context. While it does not explicitly state when not to use it, the uniqueness among siblings and the clear purpose make the usage straightforward.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tri_chemical_releasesA

Track toxic chemical releases by chemical name and state. Returns quantities released to air, water, and land broken down by year.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20, max 100).
stateNoTwo-letter state abbreviation to filter by (optional).
chemicalYesChemical name (e.g., "LEAD", "MERCURY", "BENZENE", "TOLUENE").

Output Schema

ParametersJSON Schema
NameRequiredDescription
countYesNumber of release records returned
chemicalYesChemical name (uppercase)
releasesYesArray of toxic release records by chemical
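As an illustration of the limit handling the Behavior and Completeness notes below call out, here is a hypothetical client-side helper that normalizes tri_chemical_releases arguments. The default (20) and cap (100) come from the schema above; the uppercase handling mirrors the output schema's 'Chemical name (uppercase)'. The helper itself is an assumption, not server behavior.

```python
# Hypothetical helper that normalizes tri_chemical_releases arguments,
# applying the documented default (20) and cap (100) for "limit".
def build_args(chemical, state=None, limit=None):
    args = {"chemical": chemical.upper()}  # output reports chemical uppercased
    if state is not None:
        args["state"] = state.upper()      # two-letter abbreviation
    args["limit"] = min(limit if limit is not None else 20, 100)
    return args

print(build_args("lead", state="tx"))
# → {'chemical': 'LEAD', 'state': 'TX', 'limit': 20}
```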
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It correctly indicates filtering and returns quantities by media, which is helpful. However, it does not disclose pagination behavior (a limit parameter exists but is not mentioned), ordering, or any other behavioral traits beyond basic filtering. There is no contradiction with annotations, as they are empty.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose and filtering, second states output. No redundancy. Front-loaded with main action. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters (simple), an output schema covering only top-level fields (count, chemical, releases), and no annotations, the description covers purpose, filtering, and output format. It lacks mention of the default limit or pagination behavior, but is overall sufficient for this level of complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds context for 'chemical' parameter with examples but does not add meaning beyond schema for 'state' or 'limit'. Acceptable given full schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool tracks toxic chemical releases, filtered by chemical and optionally state, and returns media-specific quantities broken down by year. This distinguishes it from siblings like tri_facility_releases (facility-focused) and tri_trends (trend-focused).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when searching chemical releases by chemical and optionally state) and what it returns. It does not explicitly mention when not to use it or alternatives among siblings, but the sibling names (e.g., tri_facility_releases, tri_trends) provide implicit differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tri_facility_releasesA

Search toxic chemical release facilities by state. Returns facility location, type, and chemicals released with quantities in pounds.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20, max 100).
stateYesTwo-letter state abbreviation (e.g., "TX", "CA").
facility_nameNoFacility name to search for (partial match).

Output Schema

ParametersJSON Schema
NameRequiredDescription
countYesNumber of facilities returned
stateYesTwo-letter state abbreviation searched
facilitiesYesArray of TRI facility records
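The output schema above documents only the top-level keys (count, state, facilities). A minimal sketch of consuming a response of that shape follows; the fields inside each facility record are illustrative assumptions, since the schema does not define them.

```python
# Hypothetical response shaped like the documented output schema
# (count, state, facilities). Per-facility fields are assumptions.
response = {
    "count": 2,
    "state": "TX",
    "facilities": [
        {"facility_name": "EXAMPLE REFINERY", "chemicals": ["BENZENE"]},
        {"facility_name": "EXAMPLE SMELTER", "chemicals": ["LEAD"]},
    ],
}

# The schema does not say whether count reflects total matches or the
# truncated page, so a client should verify before relying on it.
consistent = response["count"] == len(response["facilities"])
print(consistent)  # → True
```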
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It states that the tool returns facility location, type, and released chemicals with quantities, which is helpful but high-level. It does not disclose whether results are paginated, how partial matching works, or any rate limits. No contradictions found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is two sentences, concise and front-loaded with the action. Every word serves a purpose. Could be slightly more structured with explicit sections, but efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given an output schema covering only top-level fields (count, state, facilities) and no annotations, the description provides a basic understanding but lacks details on pagination, sorting, or error conditions. The schema covers 100% of parameters, so parameters are well-documented. Complete enough for a simple search tool, but could be more robust.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds the unit ('quantities in pounds'), which the schema lacks, while the 'partial match' behavior for facility_name appears only in the schema. It does not elaborate on the format of returned data beyond 'facility location, type, and chemicals released'. Slight improvement over the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'search', the resource 'toxic chemical release facilities', and the scope 'by state'. It distinguishes itself from sibling tools like 'tri_chemical_releases' (chemical-focused) and 'ghg_facility_emissions' (GHG program) by its facility focus. However, it relies on the reader to map 'toxic chemical release' to the TRI program rather than naming it explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for finding facilities in a state, and the schema shows state is required. No explicit guidance on when to use this tool vs. alternatives like 'tri_chemical_releases' or 'tri_trends'. No exclusion criteria or when-not-to-use information provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
