WAQI
Server Details

WAQI MCP — World Air Quality Index (free key)

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-waqi |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 15 of 15 tools scored. Lowest: 3.7/5.
Each tool has a clearly distinct purpose: ask_pipeworx for general queries, compare_entities for side-by-side comparisons, discover_tools for tool discovery, entity_profile for comprehensive company info, memory tools for storage, AQI tools for real-time air quality, resolve_entity for ID resolution, recent_changes for updates, search_stations for station lookup, validate_claim for fact-checking, and pipeworx_feedback for bug reports. No two tools overlap in functionality, ensuring an agent can uniquely select the correct tool for a given task.
Most tool names follow a verb_noun or noun_verb pattern (e.g., ask_pipeworx, compare_entities, validate_claim). There is some minor inconsistency: entity_profile is noun_noun, and recent_changes is adjective_noun, but the pattern is largely predictable. Subgroups like get_aqi_by_city/location/station are consistently named.
With 15 tools, the server is well-scoped. The count is sufficient to cover company data, AQI, memory management, and utility functions without being excessive. Each tool serves a necessary role, and the number fits within the ideal 3–15 range for focused servers.
The tool surface covers the full lifecycle for its domain: company data (entity_profile, compare_entities, recent_changes, validate_claim, resolve_entity), AQI (search_stations, get_aqi_by_city/location/station), memory (remember, recall, forget), and utility (ask_pipeworx, discover_tools, pipeworx_feedback). There are no obvious gaps—users can query, compare, validate, and manage data across all intended areas.
Available Tools
15 tools

ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
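For illustration, a minimal MCP `tools/call` request for this tool might look like the following — a standard JSON-RPC envelope, with the question taken from the description's own examples:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```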
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It explains the routing and argument filling but lacks details on limits, processing time, or how ambiguous questions are handled. The mention of '300+ other sources' adds breadth but no depth on reliability or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear opening sentence, bullet-like list of sources, and examples. However, it's slightly verbose for a one-parameter tool; the list of sources could be condensed without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a single parameter, the description covers the core functionality and scope adequately. It includes usage scope, examples, and source list. Missing details like query size limits or timeout behavior are minor issues in this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter ('question') with a description already covering it (100% coverage). The description adds examples but no new semantic constraints. With full schema coverage, a score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: answering natural-language questions by automatically selecting the right data source. It provides a vivid list of data sources (SEC EDGAR, FRED, etc.) and concrete examples, making the purpose unmistakable and distinguishing it from sibling tools like specific lookup tools or 'discover_tools'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit when-to-use guidance: when a user asks questions like 'What is X?' and you want to avoid picking the Pipeworx tool manually. It implies this is a fallback for unfamiliar queries, but doesn't explicitly state when not to use it (e.g., for known specific tools), which is a minor gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
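A sketch of a side-by-side company comparison (standard MCP `tools/call` envelope; tickers drawn from the description's examples):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "compare_entities",
    "arguments": {
      "type": "company",
      "values": ["AAPL", "MSFT", "GOOGL"]
    }
  }
}
```

For drugs, the same shape applies with `"type": "drug"` and names such as `["ozempic", "mounjaro"]`.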
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data sources (SEC EDGAR/XBRL, FAERS) and return format (paired data with citation URIs). No annotations exist, so description carries burden; could mention non-destructive nature or rate limits, but overall adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise at ~100 words, front-loaded with core purpose, uses examples and bullet-like structure. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage triggers, data sources, return format, and even replaces multiple sequential calls. No output schema, but description sufficiently explains what's returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds significant meaning beyond schema: explains that 'type' determines data pulled (revenue/net income/cash/debt vs adverse events/approvals/trials), and 'values' require tickers/CIKs for companies or drug names for drugs. Schema has 100% coverage, but description enriches context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'compare 2–5 companies (or drugs) side by side', specifies exact data sources (SEC EDGAR/XBRL, FAERS), and distinguishes from sibling tools like entity_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly gives trigger phrases ('compare X and Y', 'X vs Y', 'how do X, Y, Z stack up') and explains when to use each type. Notes it replaces 8–15 sequential agent calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
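A sketch of a discovery call using one of the schema's own example queries; the `limit` value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 10
    }
  }
}
```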
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It explains that the tool returns 'top-N most relevant tools with names + descriptions,' which implies a read-only search operation. While it doesn't explicitly state non-destructive behavior or auth needs, the nature of the tool (discovery) makes these less critical. A minor gap is the lack of explanation about ranking criteria.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is remarkably concise, consisting of two well-structured sentences that front-load the core purpose. Every sentence serves a clear function: stating the action, listing domains, and providing usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately explains the return value (tool names and descriptions). It also provides essential usage context (call first for exploration) and sufficient domain examples to cover expected use cases. The tool is simple, and the description fully equips an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. The tool description adds value by providing example queries (e.g., 'analyze housing market trends') and clarifying that the query should be a natural language description, which enhances the schema's baseline. The limit parameter is well-documented in schema, so no additional info needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It specifies the action (discover), the resource (tools), and provides concrete examples of domains (SEC filings, FDA drugs, etc.), distinguishing it from sibling tools that focus on individual entities or comparisons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs when to use this tool: 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides clear context and implicitly advises against using it when a single tool is already identified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
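A sketch of a profile lookup using the ticker from the schema's example; per the description, pass a ticker or zero-padded CIK, never a name:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "entity_profile",
    "arguments": {
      "type": "company",
      "value": "AAPL"
    }
  }
}
```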
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It clearly discloses the output contents (SEC filings, financials, patents, news, LEI) and mentions pipeworx:// citation URIs. It does not discuss rate limits or authorization, but as a read operation this is acceptable. The transparency is high but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences are slightly verbose but each serves a purpose: first sentence states overall function, second provides usage examples and alternatives, third details inputs. The key information is front-loaded, though minor redundancy exists ('Get everything about a company' vs detailed list).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description fully explains what the tool returns, covering all major data categories. It also addresses input constraints and fallback behavior (resolve_entity for names). Given the tool's moderate complexity (2 parameters, simple types), the description is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds valuable context: type is limited to 'company' for now, value expects ticker or zero-padded CIK, and names require resolve_entity first. This enriches the schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get everything about a company in one call,' a specific verb+resource statement. It lists concrete data sources (SEC filings, fundamentals, patents, news, LEI) and distinguishes itself from sibling tools by positioning as a comprehensive alternative to calling multiple specialized tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use (user queries like 'tell me about X', 'brief me on Tesla') and when not to (names not supported; directs to resolve_entity). It also explains the benefit over calling 10+ pack tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It discloses the destructive nature of deleting a memory. Lacks details on permanence or error handling, but sufficient for a simple operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences without waste. First sentence states action, second provides usage context and tool pairing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with one parameter and no output schema, the description covers purpose, usage context, and sibling relationships adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description merely restates the schema's parameter description ('Memory key to delete'). No additional meaning added beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it deletes a memory by key, with specific verb and resource. It distinguishes from siblings by mentioning 'remember' and 'recall' as related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists when to use: stale context, task completion, clearing sensitive data. Provides pairing guidance with 'remember' and 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_aqi_by_city
Real-time AQI for a city. Returns AQI value, dominant pollutant, individual pollutant readings (PM2.5, PM10, O3, NO2, SO2, CO), temperature/humidity/pressure, and station info.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (e.g., "beijing", "los-angeles", "new-delhi") | |
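A sketch of a city lookup using one of the schema's example city names:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_aqi_by_city",
    "arguments": {
      "city": "los-angeles"
    }
  }
}
```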
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the tool returns real-time data and lists outputs, but fails to mention any behavioral traits like caching, authentication needs, rate limits, or data recency guarantees. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is clear and to the point, listing all key outputs. Breaking it into multiple sentences might aid readability, but with no wasted words it earns a 4.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no output schema), the description covers the return values sufficiently. It explains what the tool provides (AQI, pollutants, weather, station info), making it complete for a simple lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the 'city' parameter. The description does not add new semantic value beyond the schema; it only restates that it's for a city and lists results. Per guidelines, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves real-time AQI for a city and lists specific returned data (AQI value, pollutants, weather, station info). It distinguishes itself from sibling tools like get_aqi_by_location and get_aqi_by_station by focusing on city name input.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies usage for city-based queries but does not mention when not to use it or provide context for choosing between the AQI tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_aqi_by_location
Real-time AQI for the WAQI station nearest a lat/lon.
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | Latitude | |
| longitude | Yes | Longitude | |
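A sketch of a coordinate lookup; the decimal-degree values below (roughly central Los Angeles) are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_aqi_by_location",
    "arguments": {
      "latitude": 34.05,
      "longitude": -118.24
    }
  }
}
```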
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description carries full burden. It states 'Real-time' indicating freshness, but does not disclose behavior if no station is found, distance limits, rate limits, or return format. The disclosure is minimal but not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, highly concise sentence that conveys all essential information. No wasted words; purpose, input, and output are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 required params, no output schema, no annotations), the description is largely sufficient. It explains input and output. A minor gap is not addressing what happens if no station is found nearby.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with basic descriptions ('Latitude', 'Longitude'). The description adds the context that these are used to find the nearest station, which is a minor improvement over the schema alone. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'Real-time AQI for the WAQI station nearest a lat/lon', specifying the resource (AQI), the data source (WAQI), the operation (nearest station), and input (lat/lon). This uniquely distinguishes it from siblings like get_aqi_by_city or get_aqi_by_station.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you have latitude/longitude coordinates, but provides no explicit guidance on when not to use this tool versus alternatives like get_aqi_by_city or get_aqi_by_station. No context about prerequisites or trade-offs is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_aqi_by_station
Real-time AQI for a specific WAQI station by UID (numeric).
| Name | Required | Description | Default |
|---|---|---|---|
| station_id | Yes | WAQI station UID (returned by search_stations) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries transparency burden. States 'Real-time AQI', but does not disclose potential rate limits, error handling, or guarantee of data availability. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with essential information: tool purpose, resource, and input type. No wasted words, front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with no output schema, the description adequately covers what the tool does and how to use the parameter. Refers to sibling tool search_stations for station discovery, completing the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers station_id with a description, and the tool description adds value by clarifying that it is a numeric UID and by linking to search_stations, enhancing understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Real-time AQI for a specific WAQI station by UID (numeric)', specifying verb (get), resource (AQI for station), and identifier. Distinguishes from siblings like get_aqi_by_city and get_aqi_by_location by emphasizing the numeric UID input.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use when a numeric station UID is known, and the schema description notes the UID comes from search_stations, providing context. However, lacks explicit when-to-use or when-not-to-use guidance compared to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
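A sketch of a bug report; the message is an invented example, and the optional `context` field is omitted since its exact structure is not documented here:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "pipeworx_feedback",
    "arguments": {
      "type": "bug",
      "message": "get_aqi_by_city returned stale readings for beijing."
    }
  }
}
```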
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses rate limits (5 per identifier per day), free usage, and that it doesn't count against the tool-call quota. It also mentions the team reads digests daily, implying response time. It lacks details on side effects or confirmation, but for a feedback tool this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that is front-loaded with purpose, then usage guidelines, then behavioral notes. Every sentence adds value, and it is concise without being terse. Slightly more structure (e.g., bullet points) could improve readability, but it is already effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description covers purpose, usage, parameters, and limitations (rate limits, quota) thoroughly. It could mention whether a confirmation is returned, but overall it provides sufficient context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds significant context: it explains the 'type' enum values with examples, clarifies that 'context' is optional and what it relates to, and provides length guidelines and specificity for 'message'. It also adds advice beyond the schema, like not pasting prompts.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for providing feedback (bugs, features, data gaps, praise) to the Pipeworx team, using specific verbs like 'Tell' and 'broken, missing, or needs to exist'. It distinguishes itself from sibling tools by being a feedback mechanism, not a data retrieval or analysis tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly defines when to use each feedback type (bug, feature, data_gap, praise) and provides clear do's and don'ts, such as describing issues in terms of tools/packs and not pasting end-user prompts. It also mentions rate limits and that it doesn't count against quota.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that the tool is scoped to the user's identifier and pairs with remember/forget, implying read-only behavior. It does not mention side effects, auth requirements beyond scoping, or rate limits, but the behavioral traits are reasonably covered for a simple retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is succinct (three sentences) with front-loaded core action. Each sentence serves a purpose: definition, usage examples and context, scoping and pairing. No wasted words, and structured for efficient reading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 1 optional parameter, no output schema, and no annotations, the description covers purpose, usage context, examples, scoping, and relationship with siblings. However, it does not specify the return format (e.g., string or list structure), which is a minor gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides 100% coverage for the single optional 'key' parameter with a clear description. The tool description repeats that omitting the key lists all saved keys, adding no new semantic information beyond what the schema offers. Thus baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as retrieving a stored value or listing all keys, with specific verb 'Retrieve' and resource 'value previously saved via remember'. It provides concrete examples (ticker, address, notes) and differentiates from siblings 'remember' and 'forget' by stating its retrieval role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'to look up context the agent stored earlier... without re-deriving it from scratch'. It also explains the effect of omitting the key argument (list all keys). However, it does not explicitly state when not to use it or suggest alternatives beyond related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
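A sketch of a 30-day monitoring query using the relative shorthand the description documents:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "recent_changes",
    "arguments": {
      "type": "company",
      "value": "MSFT",
      "since": "30d"
    }
  }
}
```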
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses that the tool fans out to three data sources, returns structured changes with counts and URIs, and explains the input format. It does not mention side effects or authentication, but the read-only nature is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with four sentences that cover purpose, usage, data sources, and return format. Every sentence adds value, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's inputs, behavior (parallel fan-out), and outputs (structured changes, count, URIs). It lacks details on pagination or limits, but given the complexity and absence of output schema, it is sufficiently complete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing a baseline of 3. The description adds value by clarifying the 'since' parameter format (ISO date or relative shorthand) and 'value' parameter (ticker or CIK), which goes beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving recent changes for a company from multiple sources (SEC EDGAR, GDELT, USPTO). It provides example user queries and distinguishes itself from sibling tools by focusing on temporal updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool with natural language examples (e.g., 'what's happening with X?') and explains parameter semantics. It does not explicitly mention when not to use it or alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
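A sketch of the save-then-retrieve pairing the description recommends, as two sequential `tools/call` requests (key and value are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```

followed later by a recall (omit `key` to list all saved keys instead):

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```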
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, but description discloses persistence behavior (authenticated persistent, anonymous 24h). However, it does not clarify if writing to an existing key overwrites or errors, which would be helpful for a write operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose, usage guidance, and scoping. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description adequately covers usage. Could mention return value (e.g., success indicator) but acceptable for a simple store tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides full description of key and value with examples, so description adds little beyond that. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool saves data for reuse, with specific examples (resolved ticker, target address) and distinguishes from siblings recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (when discovering something worth carrying forward), mentions pairing with recall and forget, and explains scoping and persistence differences.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
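A sketch of resolving a drug name to its official identifiers, using the schema's example value:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "resolve_entity",
    "arguments": {
      "type": "drug",
      "value": "ozempic"
    }
  }
}
```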
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It describes the output (IDs and pipeworx:// citation URIs) and gives examples. It lacks detail on rate limits, authentication, or error handling, but the core behavior is well communicated for a lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is five sentences, starting with the core purpose, then usage guidance, examples, workflow placement, and efficiency. No redundant information; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return format (IDs plus URIs) and provides examples. It covers essential aspects for an AI agent to use the tool correctly, though it might omit some edge cases (e.g., unsupported entity types).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds meaning by providing examples (e.g., 'Apple' → AAPL) and clarifying that 'value' can be a ticker, CIK, or name for companies, and brand or generic name for drugs. This enriches the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: 'Look up the canonical/official identifier for a company or drug.' It specifies the ID systems (CIK, ticker, RxCUI, LEI) and gives concrete examples, distinguishing it from sibling tools by explicitly saying to use it before other tools that need official identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers.' It also notes it replaces 2-3 lookup calls, helping the agent decide when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stations
Search stations by keyword (city/region name). Returns station UID, name, current AQI, and location.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | Search keyword | |
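A sketch of the search-then-fetch flow these AQI tools imply: search first, then pass a returned UID to get_aqi_by_station. The keyword is illustrative and the UID below is a placeholder, not a real station ID:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "search_stations",
    "arguments": {
      "keyword": "beijing"
    }
  }
}
```

then, with a UID taken from the response:

```json
{
  "jsonrpc": "2.0",
  "id": 13,
  "method": "tools/call",
  "params": {
    "name": "get_aqi_by_station",
    "arguments": {
      "station_id": 12345
    }
  }
}
```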
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It states inputs and outputs, implying a read operation, but does not explicitly confirm non-destructiveness, auth requirements, or response format details. For a simple search tool, this is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently states the action, parameter scope, and return fields. Every word adds value, with no redundancy or irrelevant details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits details like handling of no results, multiple results, sorting, or pagination. With no output schema, the agent must infer these. While functional, it lacks completeness for a robust user experience.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema describes 'keyword' as 'Search keyword' (100% coverage), but the description adds semantic value by clarifying that the keyword is a city or region name. This extra context helps the agent use the parameter correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches stations by keyword (city/region name) and specifies the returned fields (UID, name, AQI, location). This distinguishes it from siblings like get_aqi_by_city or get_aqi_by_station, which require exact matches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching by partial name, but does not explicitly guide the agent on when to use this tool versus alternatives (e.g., get_aqi_by_city for exact city, get_aqi_by_station for exact station ID). No when-not or alternative instructions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
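A sketch of a fact-check call using the claim from the schema's own example:

```json
{
  "jsonrpc": "2.0",
  "id": 14,
  "method": "tools/call",
  "params": {
    "name": "validate_claim",
    "arguments": {
      "claim": "Apple's FY2024 revenue was $400 billion"
    }
  }
}
```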
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses the tool's behavior: it queries SEC EDGAR + XBRL, returns a verdict with structured data and citation, and replaces multiple sequential calls. No destructive actions are mentioned, and the output format is clearly described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and informative, covering purpose, usage, scope, and output in a single paragraph. It could be slightly more structured, but it efficiently conveys all necessary information without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single required parameter and no output schema, the description provides substantial context: the claim format, sources (SEC EDGAR + XBRL), possible verdicts, and efficiency gains. It lacks details on error handling or edge cases but is mostly complete for the intended use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter, 'claim', has 100% schema description coverage. The description reiterates the schema description without adding additional meaning, which aligns with the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: fact-check, verify, validate, or confirm/refute a natural-language factual claim against authoritative sources. It specifies the supported domain (company-financial claims) and distinguishes it from other tools by its purpose and efficiency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: 'Use when an agent needs to check whether something a user said is true'. It also notes the scope limitation (v1 supports company-financial claims), implying when not to use it. However, it does not explicitly compare to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.