AirNow
Server Details
EPA AirNow MCP — official US real-time AQI + forecast (free key)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-airnow
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 15 of 15 tools scored.
Most tools have distinct purposes, but there is potential confusion between ask_pipeworx and discover_tools (both for information discovery) and among AQI tools (current_by_location vs current_by_zip). Descriptions are detailed enough to resolve most ambiguity.
Naming style is inconsistent: some tools follow a verb_noun pattern (compare_entities, discover_tools), others use a noun_preposition_noun pattern (current_by_location, forecast_by_zip), and a few are single verbs (forget, recall, remember). This mix reduces predictability.
15 tools is a reasonable count for a multi-purpose server covering air quality, entity research, memory, and feedback. It is not excessive and each tool seems to have a distinct role, though the inclusion of memory tools may be redundant given MCP capabilities.
The tool set covers core air quality queries, entity profiling, comparison, and validation. However, dedicated tools for stock prices, news, or specific filings are missing, though the catch-all ask_pipeworx can handle them. Minor gaps exist.
Available Tools
15 tools
ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
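As a quick sketch, the arguments an MCP client might pass when calling this tool could look like the following; the question text is one of the examples from the description above.

```json
{
  "question": "What is the current unemployment rate?"
}
```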
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains the tool routes across 300+ sources, picks the right tool, fills arguments, and returns results. However, it does not disclose limitations, error handling, or behavior for ambiguous questions, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at five sentences, front-loaded with purpose, and well-structured. Every sentence contributes useful information with no redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (routing across many sources) and simple schema (one parameter, no output schema), the description provides sufficient context: purpose, usage, examples, and scope. It could mention the response format, but overall it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with one parameter 'question' described as 'Your question or request in natural language.' The description adds value by providing examples and context on how the parameter is used, going beyond the schema's minimal description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions by automatically selecting the appropriate data source. It distinguishes itself from sibling tools (e.g., compare_entities, current_by_location) by acting as a router, explicitly mentioning that users don't need to figure out which Pipeworx pack/tool to call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call,' providing clear guidance on when to use this tool. It also implies alternatives (the sibling tools) for more specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
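A hypothetical call comparing three companies, using tickers mentioned in the description, might carry arguments like:

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}
```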
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs) and mentions return of paired data + citation URIs. Could detail error handling or rate limits, but it's fairly transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph of 4-5 sentences. It front-loads purpose and use cases, then details data sources. Every sentence adds essential context with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 params, full schema coverage, no output schema or annotations, the description covers purpose, usage, examples, and data sources adequately. It could mention return format/shape or error handling, but is still complete for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds value by explaining the type parameter enum choices and giving examples for values (tickers for company, drug names for drug). This clarifies usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side. It gives specific example user queries and differentiates from alternatives by stating it replaces many sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists use-case examples like 'compare X and Y' and 'X vs Y' and explains what each type does. However, no when-not-to-use or direct alternatives are mentioned, though context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
current_by_location
Latest observed AQI for the AirNow station nearest a lat/lon.
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | US latitude | |
| longitude | Yes | US longitude | |
| distance_miles | No | Search radius (default 25, max 250) | |
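A minimal illustrative call; the coordinates below are placeholder US values (roughly central Los Angeles), not taken from the tool docs:

```json
{
  "latitude": 34.05,
  "longitude": -118.24,
  "distance_miles": 25
}
```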
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it returns observation data from the nearest station. No annotations are provided, and the description does not cover limitations such as data freshness or station availability.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: a single sentence covers purpose and method without any fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides the essential information for a simple data retrieval tool. Could specify return format, but reasonable given lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds context that the tool finds the nearest station given lat/lon. Schema already describes parameters clearly, so the extra value is moderate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns the latest observed AQI from the nearest AirNow station given latitude/longitude. Distinguishes from sibling tools like current_by_zip which uses zip code.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly suggests usage for location-based AQI queries by providing lat/lon, but no explicit comparison to alternatives or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
current_by_zip
Latest observed AQI for a US ZIP code. Returns one record per pollutant reported at the nearest site (typically O3 + PM2.5). Includes AQI value, category (Good / Moderate / etc.), reporting area, and timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| zip_code | Yes | US 5-digit ZIP code | |
| distance_miles | No | Search radius (default 25, max 250) | |
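For illustration only (the ZIP code is a placeholder, not from the docs), a call could carry:

```json
{
  "zip_code": "90210",
  "distance_miles": 25
}
```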
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions 'nearest site,' 'typically O3 + PM2.5,' and what is included (value, category, area, timestamp). However, it does not disclose error handling, rate limits, or what happens if the ZIP code is invalid or no data exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main purpose, and every clause adds value. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no output schema, the description covers return values (AQI, category, area, timestamp, per pollutant). It is missing info on possible multiple sites or pagination but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional meaning to the parameters beyond what the schema already provides (e.g., it does not elaborate on 'distance_miles' default or behavior).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'Latest observed AQI for a US ZIP code,' specifying the verb (returns), resource (AQI), and scope (ZIP code). It implicitly distinguishes from siblings like 'current_by_location' by focusing on ZIP codes and from 'forecast_by_zip' by focusing on current observations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a US ZIP code but does not provide explicit guidance on when to use it versus alternative tools like 'current_by_location' or 'forecast_by_zip'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
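Using one of the example queries from the schema itself, a sketch of the arguments might be:

```json
{
  "query": "look up FDA drug approvals",
  "limit": 20
}
```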
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns the top-N most relevant tools with names and descriptions. While it doesn't detail error handling or rate limits, the behavior is sufficiently transparent for a discovery tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured with a clear first sentence stating the core function, followed by usage guidance and example domains. While the list of domains is lengthy, it serves as useful examples. Overall concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no output schema, two parameters) and the presence of 100% schema coverage, the description completely explains what the tool does, when to use it, and what it returns. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond the schema: it rephrases the query parameter as 'natural language' and mentions default/max for limit. The schema already provides clear descriptions, so no significant added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: finding tools by describing data or task. It lists numerous specific domains (SEC filings, FDA drugs, etc.) and distinguishes itself as a tool discovery mechanism rather than a direct answer tool, differentiating from siblings like entity_profile or current_by_location.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance: 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This clearly tells the agent when to use this tool and when not to, providing strategic context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
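A sketch of a profile request using the ticker example from the description:

```json
{
  "type": "company",
  "value": "AAPL"
}
```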
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description bears full burden. It discloses the tool returns multiple data types and uses citation URIs. It mentions input constraints (ticker/CIK only) and type limitation (company only). Lacks mention of rate limits or error handling, but still strong.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with main purpose and examples, then what it returns, then input details. No wasted words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description compensates by listing returned data categories (filings, fundamentals, patents, news, LEI). Sufficient for an aggregation tool. Could be more detailed on return structure, but adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds significant meaning: explains 'type' is only 'company' currently, gives examples for 'value' (ticker or CIK), and clarifies why names aren't supported, directing to resolve_entity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call' and lists specific data sources (SEC filings, revenue, patents, news, LEI). It distinguishes from siblings like resolve_entity by specifying that names require that tool first.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases (e.g., 'tell me about X', 'what do you know about Apple') and tells when not to use (if only a name, use resolve_entity). Also mentions it replaces calling 10+ other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forecast_by_zip
AQI forecast for a US ZIP code on a given date (defaults to today). Useful for "is tomorrow ok for outdoor activity" decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | YYYY-MM-DD (default today) | |
| zip_code | Yes | US 5-digit ZIP code | |
| distance_miles | No | Search radius (default 25) | |
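An illustrative call (the ZIP code and date are placeholders, not from the docs):

```json
{
  "zip_code": "10001",
  "date": "2026-04-01",
  "distance_miles": 25
}
```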
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must inform behavior. It mentions defaults to today and returns forecast, but lacks details on data source, rate limits, or precision. Adequate for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key purpose, no fluff. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (3 params, no nested objects, no output schema), description is largely complete. It explains purpose, default behavior, and use case. Could hint at return format but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds no additional meaning beyond what schema already provides for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it provides an AQI forecast for a US ZIP code, with a specific verb (forecast) and resource (by zip). It distinguishes itself from sibling tools like current_by_zip by being a forecast rather than current data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly mentions a use case: 'is tomorrow ok for outdoor activity' decisions. However, it does not contrast with alternatives like current_by_zip or provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
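A sketch of a deletion call, borrowing the "target_ticker" key from the remember tool's schema examples:

```json
{
  "key": "target_ticker"
}
```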
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description indicates destructive action but lacks details on side effects, return value, or error handling. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. First sentence states purpose, second provides usage guidance efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a one-parameter tool without output schema, description covers purpose and usage well. Lacks return value info but pairs with siblings for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes 'key' as 'Memory key to delete', and description adds 'by key', which is redundant. With 100% schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Delete' and resource 'previously stored memory by key', clearly distinguishing it from siblings 'remember' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (stale context, task done, clear sensitive data) and pairs with remember and recall, providing clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
observations_in_bbox
Historical AQI observations inside a bounding box for a date range. Specify pollutants as comma-separated parameter codes (e.g., "OZONE,PM25,PM10,CO,NO2,SO2"). bbox format: "minLon,minLat,maxLon,maxLat".
| Name | Required | Description | Default |
|---|---|---|---|
| bbox | Yes | minLon,minLat,maxLon,maxLat | |
| verbose | No | Include site / county / agency / full-AQS-code fields | |
| end_date | Yes | YYYY-MM-DDT23 | |
| data_type | No | "A" (AQI), "C" (concentration), or "B" (both, default) | |
| parameters | No | Comma-separated pollutants (default: OZONE,PM25,PM10) | |
| start_date | Yes | YYYY-MM-DDT00 (hour-granular ISO truncated) | |
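A hypothetical query; the bounding box and dates are placeholder values (roughly the Los Angeles area for a single day), chosen only to show the expected formats:

```json
{
  "bbox": "-118.6,33.7,-117.9,34.3",
  "start_date": "2026-04-01T00",
  "end_date": "2026-04-01T23",
  "parameters": "OZONE,PM25,PM10",
  "data_type": "B"
}
```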
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It does not disclose whether the tool is read-only, requires authentication, or has rate limits. The tool's name implies a query, but side effects and data freshness are not mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences: one for purpose, one for parameter formatting. No redundant words. Every sentence provides actionable information, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should outline return structure, pagination, or limitations. It lacks this, leaving gaps about result format and error conditions. The tool's 6 parameters and lack of output specification make completeness inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds value by providing example values and defaults for pollutants and bbox format. The data_type parameter is briefly explained. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves historical AQI observations within a bounding box for a date range. It specifies the verb (get observations), resource (AQI), and scope (spatial and temporal). The tool is differentiated from siblings like 'current_by_location' and 'forecast_by_zip' which handle current or forecast data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit formatting examples for parameters (e.g., comma-separated pollutants, bbox coordinates) and mentions defaults. However, it does not specify when to use this tool over alternatives or what constraints might apply (e.g., maximum bbox size).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
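A hypothetical bug report (the message text is invented for illustration) might be submitted as:

```json
{
  "type": "bug",
  "message": "current_by_zip returned stale observations for a ZIP code with an active monitoring site."
}
```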
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description covers behavioral traits: rate-limited to 5 per identifier per day, free, doesn't count against tool-call quota, and team reviews daily. Could mention submission confirmation or expected response time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured and front-loaded with purpose. Length is appropriate but slightly verbose; a minor trim could improve conciseness without losing key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description doesn't explain return values, but for a feedback tool, it's adequate. Covers all needed context: usage, constraints, and team process. Could mention confirmation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions. The description adds value by advising to describe issues in terms of Pipeworx tools/packs and not to paste user prompts, enhancing semantic understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: sending feedback about bugs, features, data gaps, or praise. It distinguishes itself from sibling tools by being a feedback channel, while others are query/action tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance on when to use each feedback type (bug, feature, etc.) and what to avoid (not pasting end-user prompts). Mentions rate limits and quota, providing clear usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
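A sketch of a retrieval call; per the schema, omitting "key" (an empty arguments object) would instead list all saved keys:

```json
{
  "key": "target_ticker"
}
```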
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description covers behavior: retrieval vs listing, scoping by identifier, and pairing with remember/forget. It reasonably discloses the operation's nature without hidden traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (3 sentences) with front-loaded purpose, then usage, then scoping. Every sentence adds value; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no output schema, the description explains the return types (value or list of keys) and scoping. It could mention error cases (e.g., missing key) but is generally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes the key parameter fully (100% coverage). The description adds meaning by explaining the effect of omitting the key (list all) and the purpose of retrieval.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves a stored value or lists all keys, using specific verbs (retrieve/list) and resource (saved values/keys). It distinguishes from siblings 'remember' and 'forget' by mentioning pairing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use (look up context stored earlier) and mentions alternatives (pair with remember, forget). It provides scoping info but does not explicitly state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
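A sketch using the relative-shorthand window the description recommends for typical monitoring:

```json
{
  "type": "company",
  "value": "AAPL",
  "since": "30d"
}
```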
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses fan-out to SEC EDGAR, GDELT, USPTO in parallel, return of structured changes with count and URIs, and that type is limited to 'company'. Does not cover rate limits or side effects, but adequate given read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences. First sentence defines purpose and query examples, second describes fan-out, third explains parameter format and return. No redundant text, front-loaded with key info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return structure (structured changes, count, URIs). Could be more explicit about result format, but covers essential behavior for AI agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter schema coverage is 100%, but description adds guidance: explains 'since' accepts relative shorthand and recommends '30d' for typical monitoring, which is not in schema. Value description matches schema but adds 'zero-padded CIK' example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's purpose: 'What's new with a company in the last N days/months?' and provides example user queries. It distinguishes from siblings like entity_profile by focusing on recent changes across multiple sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly specifies when to use the tool with concrete query examples (e.g., 'what's happening with X?', 'any updates on Y?') and mentions monitoring use case. However, it does not explicitly state when not to use or name alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
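A sketch built from the key and value examples in the schema (the stored value is illustrative):

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```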
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key details: key-value pair, scoping by identifier, persistence differences between authenticated/anon sessions, and 24-hour retention. No annotations to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each adding distinct value: purpose, when to use, storage details, and related tools. No redundant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-param write tool with no output schema, description adequately covers storage model, scoping, and persistence. Minor gaps: return behavior not mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both params with descriptions (100% coverage). Description adds example keys and clarifies value as arbitrary text, but adds minimal extra semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Save data' and identifies it as memory storage for reuse. Differentiates from siblings recall and forget by pairing them explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'when you discover something worth carrying forward'. Mentions pairing with recall/forget but doesn't state when NOT to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
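A sketch of a drug lookup using an example name from the schema:

```json
{
  "type": "drug",
  "value": "ozempic"
}
```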
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses output includes IDs and pipeworx:// citation URIs. Provides examples of returned values. No annotations, so description adds transparency about a non-destructive lookup. Lacks mention of error handling but adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences, each adding value: purpose, context, examples, output description, usage order, efficiency claim. Front-loaded and no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, output, and integration with other tools. No output schema, but description mentions IDs and citation URIs. Benefit of examples and explicit sequencing. Could detail return format more but sufficient for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes both params (type enum, value string). Description enhances with concrete allowed formats (ticker, CIK, name for company; brand/generic for drug) and examples, clarifying usage beyond enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'resolve entity' with specific canonical identifiers (CIK, ticker, RxCUI, LEI). Examples differentiate from siblings like entity_profile and compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use when...' for identifier lookup and 'Use this BEFORE calling other tools that need official identifiers.' Also mentions it replaces multiple lookup calls, guiding efficient usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
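A sketch using the example claim given in the schema:

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```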
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool returns a verdict (confirmed, approximately_correct, refuted, inconclusive, unsupported), extracted structured form, actual value with citation, and percent delta. It also notes it replaces 4–6 sequential calls. It does not cover rate limits or auth, but is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and usage, then details about supported claims, return values, and efficiency. Every sentence adds value, though it could be slightly more concise. Structure is logical and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description is remarkably complete. It explains the domain, supported sources, verdict types, output structure, and the efficiency improvement. No key information is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 1 parameter with 100% description coverage. The description adds value by explaining the parameter is a natural-language factual claim and gives examples. However, the schema already includes similar description, so the added value is minimal. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'fact-check, verify, validate, or confirm/refute a natural-language factual claim'. It specifies the verb (validate/verify), resource (factual claims against authoritative sources), and distinguishes from siblings by noting it replaces multiple sequential calls. The domain is explicitly scoped to company-financial claims for v1.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Use when an agent needs to check whether something a user said is true' with examples. It explains what kind of claims are supported (company-financial via SEC EDGAR+XBRL). However, it does not explicitly state when not to use this tool or mention alternatives like ask_pipeworx or resolve_entity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!