USGS Volcano
Server Details
USGS Volcano MCP — Volcano Hazards Program HANS-public feed (no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-usgs-volcano
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 14 of 14 tools scored. Lowest: 3.4/5.
Only 3 of 14 tools are volcano-specific; the rest are general-purpose pipeworx tools for financial, drug, and corporate data. This creates ambiguity about when to use volcano tools vs. the other tools, especially since the server name implies a focused domain.
Most tool names follow a consistent verb_noun pattern with underscores (list_volcanoes, compare_entities, etc.). A few deviate (forget, pipeworx_feedback) but overall the naming is predictable.
14 tools is a reasonable count, but the server is named 'Usgs Volcano' and only ~21% of tools relate to volcanoes. The majority are general-purpose tools, making the set feel inflated and mismatched to its stated purpose.
For a USGS Volcano server, the tool surface is severely incomplete: it lacks tools for individual volcano details, eruption history, monitoring data, and alerts beyond notices. The general tools do not fill these gaps, so an agent would often hit dead ends.
Available Tools
14 tools

ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
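As a rough sketch of how an agent would invoke this tool over the server's Streamable HTTP transport, here is an MCP `tools/call` request using one of the example questions from the description (the argument value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": { "question": "Current unemployment rate" }
  }
}
```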
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses that the tool routes across 300+ sources and returns the result, but lacks details on failure modes, latency, or ambiguity handling. Adequate, but it could improve.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Succinct and well-structured: opens with purpose, follows with usage guidelines, lists data sources, and closes with examples. Every sentence is informative without excess.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description covers purpose, usage, scope, and examples completely, leaving no significant gaps for an AI agent to resolve.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'question' parameter. Description adds value by providing example questions and explaining the breadth of query types, going beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by automatically selecting the appropriate data source, providing specific verbs and examples ('What is X?', 'Look up Y'). It distinguishes itself from sibling tools that likely require manual tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use ('when you don't want to figure out which Pipeworx pack/tool to call') and provides concrete example questions, giving clear guidance on appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
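A plausible invocation, reusing the ticker examples from the parameter table above (values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_entities",
    "arguments": {
      "type": "company",
      "values": ["AAPL", "MSFT"]
    }
  }
}
```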
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS/FDA/clinicaltrials for drugs) and the return format (paired data + citation URIs). It does not mention potential side effects like rate limits, but is adequate overall.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph, front-loaded with main purpose. Contains detailed information without excessive verbosity. Could be more structured with bullet points, but still efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description must explain return values. It mentions 'paired data + citation URIs', which is sufficient though somewhat vague. Covers essential aspects for a complex comparison tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds extra context: explains enum values for type, gives examples for values, and clarifies input format beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side. It uses specific verbs like 'compare' and identifies resources (companies/drugs). It distinguishes from sibling tools like entity_profile, which likely handles single entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage cues: 'Use when a user says "compare X and Y"' etc. Explains the two types. Lacks explicit when-not-to-use, but context with siblings implies alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
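A sketch of a discovery call, borrowing the example query from the schema and the documented default limit (both values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "analyze housing market trends",
      "limit": 20
    }
  }
}
```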
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns 'top-N most relevant tools with names + descriptions,' which is accurate and sufficient. It does not mention any destructive behavior or side effects, which is consistent with a read-only discovery operation. The description is transparent about its function and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single coherent paragraph that front-loads the purpose ('Find tools...'). It includes a list of example domains and a usage instruction. While it is somewhat lengthy due to the examples, each sentence serves a purpose and contributes to clarity. Minor redundancy could be trimmed, but overall it is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and the absence of an output schema, the description adequately explains what the tool does and what it returns (names and descriptions of tools). It does not detail the format of the output or mention edge cases like no results, but the examples and usage instruction provide sufficient context for an agent to use it correctly. The definition is complete for a discovery tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for both 'query' and 'limit'. The description adds value beyond the schema by explaining that 'limit' controls the number of tools returned (top-N) and providing example queries. This helps the agent understand how to use the parameters effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It provides a comprehensive list of example domains (SEC filings, financials, etc.) and explicitly distinguishes itself from sibling tools by instructing the agent to call this first to discover available options, rather than directly performing a specific action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This gives clear when-to-use guidance. While it does not explicitly list when not to use it, the context implies that if the agent already knows the specific tool, this discovery step is unnecessary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
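A minimal example call, assuming the ticker form of the value parameter as shown in the table:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "entity_profile",
    "arguments": { "type": "company", "value": "AAPL" }
  }
}
```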
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description clearly states what the tool returns (SEC filings, financials, patents, news, LEI) and mentions pipeworx:// citation URIs. However, it omits any mention of being read-only or potential side effects. Still strong for a read-oriented tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single dense paragraph that front-loads the purpose, then covers use cases, return contents, and parameter details in logical order. Every sentence earns its place; nothing redundant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema but moderate complexity (aggregates multiple data sources), the description fully enumerates what is returned and how to cite it. It also addresses the type limitation and parameter constraints, making it complete for an AI agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters have schema descriptions (100% coverage), but the tool description adds meaningful extra context: for type, it notes only 'company' is supported and future plans; for value, it explains ticker vs CIK and explicitly warns that names aren't accepted—this guides correct usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a clear statement: 'Get everything about a company in one call.' It then lists concrete user queries ('tell me about X', 'research Microsoft') and explicitly contrasts with sibling tools like resolve_entity, making the purpose unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use (company profile queries) and when-not-to-use (names not supported, must use resolve_entity). It also implies that for granular data you'd call individual pack tools, giving complete guidance on appropriate invocation context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly states the action (delete a memory by key), which implies irreversibility. It could be improved by noting the behavior when the key is missing, but it is overall transparent for a simple deletion.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first defines action clearly, second provides context and usage guidance. No wasted words, front-loaded with core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description fully explains purpose, when to use, and relation to siblings. No gaps given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description adds no extra meaning beyond schema's description of 'key' as 'Memory key to delete'. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Delete' and resource 'memory by key', clearly distinguishing from sibling tools like 'remember' and 'recall' which store and retrieve respectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when context is stale, task is done, or clearing sensitive data. Also recommends pairing with 'remember' and 'recall', providing clear guidance on alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_elevated (A)
List only the volcanoes currently above Normal/Green status (i.e., elevated unrest or eruption). Smallest practical fingerprint of "what's happening right now."
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, but the description transparently indicates this is a read-only listing tool (inferred from 'list'). It discloses the filtering criterion and scope, which is adequate for a non-destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are concise, front-loaded with the main verb and resource, and every word adds value. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no output schema, and a simple filtered list purpose, the description fully covers what the tool does and its unique value among siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description adds no param info, but the baseline is 4 as schema coverage is 100% and there is nothing to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it lists volcanoes above Normal/Green status, using specific verbs 'list' and 'elevated' with a clear subject. It differentiates from sibling 'list_volcanoes' by focusing on elevated unrest or eruption.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It implies usage for a quick snapshot of active volcanoes via 'smallest practical fingerprint of what's happening right now.' While it doesn't explicitly state when not to use or name alternatives, the mention of 'currently' suggests real-time monitoring context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_notices (B)
Recent USGS volcano notices and reports — observatory updates, VAN (Volcano Activity Notice) and VONA (Volcano Observatory Notice for Aviation) text. Filter by volcano slug.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Cap notices returned (default 50) | |
| volcano_slug | No | USGS volcano slug (e.g., "kilauea", "shishaldin"). Optional — omit for all recent. | |
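A sketch of a filtered call, using the kilauea slug from the parameter description; omitting volcano_slug would return recent notices across all volcanoes:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_notices",
    "arguments": { "volcano_slug": "kilauea" }
  }
}
```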
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description should disclose behavioral traits. It mentions only recency and the slug filter, and gives no details on ordering, pagination, authentication, or whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence, front-loading key information. It could benefit from slight structuring (e.g., bullet points) but is not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with 2 optional parameters and no output schema, the description covers the main purpose and filter. However, it lacks details on default ordering, recency semantics, and error handling, making it minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description mostly reiterates the schema with 'Filter by volcano slug'. It adds minimal value beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (list/reports) and resource (USGS volcano notices), specifies types (VAN, VONA), and distinguishes from sibling 'list_volcanoes' through the mention of volcano slug filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching recent volcano notices, but does not explicitly state when to use or not use this tool, nor does it mention alternatives beyond the implied contrast with list_volcanoes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_volcanoes (A)
List every US volcano monitored by USGS with current alert level (Normal/Advisory/Watch/Warning) and aviation color code (Green/Yellow/Orange/Red). Filter optionally by observatory.
| Name | Required | Description | Default |
|---|---|---|---|
| observatory | No | Observatory short code (AVO=Alaska, CVO=Cascades, YVO=Yellowstone, HVO=Hawaiian, CalVO=California). Optional. | |
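An illustrative call filtered to the Alaska Volcano Observatory; dropping the observatory argument would list all monitored volcanoes:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_volcanoes",
    "arguments": { "observatory": "AVO" }
  }
}
```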
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the read-only nature ('List'), the scope (US volcanoes monitored by USGS), and the returned fields (alert level, color code). No hidden side effects are implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that efficiently conveys the tool's purpose, output, and filtering option without any unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with one optional parameter and no output schema, the description sufficiently explains the return content (alert levels and color codes). It could mention pagination, but given the likely small dataset, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter. The description adds value by listing the observatory short codes and explaining the mapping, which goes beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists US volcanoes monitored by USGS, specifying the returned fields (alert level, aviation color code) and optional filtering by observatory. This distinguishes it from siblings like 'list_elevated' and 'list_notices'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions optional filtering by observatory, implying usage context, but does not explicitly state when to use this tool versus alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
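A hypothetical feedback submission flagging the eruption-history gap noted in the assessment above; the message text is invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "pipeworx_feedback",
    "arguments": {
      "type": "data_gap",
      "message": "The volcano pack has no tool for per-volcano eruption history."
    }
  }
}
```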
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full weight. It discloses behavioral traits: rate limit (5 per identifier per day), that it's free and doesn't count against quota, and that the team reads digests daily affecting roadmap. This is comprehensive for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (~100 words) and front-loaded with purpose and use cases. Every sentence adds value: purpose, when to use, how to describe, administrative notes. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no output schema), the description covers all aspects: purpose, usage, constraints (rate limit, quota), and expected input format. It is complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description adds extra guidance beyond schema: for 'message' it suggests length (1-2 sentences, 2000 chars max) and specificity; for 'type' it reiterates meaning. This adds value, justifying above the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Tell the Pipeworx team something is broken, missing, or needs to exist.' It enumerates specific use cases (bug, feature, data_gap, praise) and differentiates itself from sibling tools like ask_pipeworx by focusing on feedback rather than data queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage guidance is provided: 'Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise).' It also includes negative guidance ('don't paste the end-user's prompt') and constraints (rate-limited to 5 per identifier per day).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
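A sketch of the omit-key form, which lists all saved keys; passing "key": "target_ticker" instead would return that single stored value:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```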
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: it retrieves or lists values, is scoped to the caller's identifier, and pairs with remember and forget. It does not cover rate limits or response format, but is transparent about the core operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each essential: function, usage examples, scoping/pairing. Front-loaded with primary action, no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Completeness is high given no output schema: explains purpose, usage, scoping, and sibling tools. Lacks return format details, but agent can infer from context. Adequate for decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions. Description adds value by clarifying the optional key parameter and its effect (omit to list all keys), and provides use context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a value saved via remember or lists all saved keys if the key argument is omitted. It specifies the verb (retrieve/list) and resource (saved values), distinguishing it from siblings like remember and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit examples of when to use (e.g., user's target ticker, address, prior research notes) and explains scoping to an identifier. Lacks explicit when-not or alternative tool mention, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
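An illustrative 30-day monitoring call, combining the shorthand window and ticker examples from the table:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recent_changes",
    "arguments": {
      "type": "company",
      "since": "30d",
      "value": "AAPL"
    }
  }
}
```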
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool fans out to three external sources (SEC EDGAR, GDELT, USPTO) in parallel, explains input formats for 'since', and describes the return value (structured changes, count, citation URIs). It does not mention rate limits or authentication, but for a read-only aggregator this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately sized and front-loaded with purpose and examples. It packs essential information (sources, parameter format, return type) without excessive fluff. Could be slightly tighter but overall well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema and three parameters, the description covers input, behavior, and output reasonably well. It explains the parallel fan-out and return format. Minor gaps: no mention of error handling or limitations, but still fairly complete given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds extra context: for 'since' it recommends '30d' or '1m' for typical monitoring; for 'value' it gives examples of tickers and CIK; for 'type' it notes only 'company' is supported. This meaningfully supplements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving recent changes for a company, with specific example queries like 'what's happening with X?' and 'news on Apple this month'. It distinguishes itself from sibling tools like 'entity_profile' and 'compare_entities' by focusing on temporal updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance with multiple example user queries. It does not explicitly state when not to use or list alternative tools, but the examples and context make usage clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
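A minimal save example using the key example from the schema; a later recall with the same key retrieves the value, and forget deletes it:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```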
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description discloses the key behaviors: scoping by identifier and retention periods for authenticated vs anonymous users. It does not mention overwriting behavior or storage limits, but covers the essential aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five concise sentences, each adding value: purpose, usage instructions, storage mechanism, persistence, and sibling references. No redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple write tool with two parameters and no output schema, the description covers purpose, usage, storage semantics, persistence, and sibling relationships. No missing critical context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3. The description reinforces the purpose of key-value pairs but does not add significant new parameter-level information beyond the schema examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool saves data for reuse across conversations or sessions, specifying a key-value pair storage. It distinguishes from siblings by explicitly pairing with recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases (e.g., remembering resolved tickers, user preferences) and persistence details (24-hour for anonymous, persistent for authenticated). Lacks explicit 'when not to use' but examples guide appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
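A sketch of a name-based lookup, mirroring the "Apple" → AAPL / CIK example in the description:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "resolve_entity",
    "arguments": { "type": "company", "value": "Apple" }
  }
}
```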
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns IDs and citation URIs, lists the ID types, and gives examples. It implies a read-only lookup with no side effects, which is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence contributes useful information, including examples and a note about replacing multiple calls. No redundant or superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema and lack of output schema, the description sufficiently explains the tool's purpose, input parameters, return content (IDs and URIs), and usage context. It could mention handling edge cases like ambiguity, but overall it is complete enough for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. The tool description adds value by explaining acceptable input values (ticker, CIK, name; brand or generic name) with concrete examples, going beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Look up' and the resource 'canonical/official identifier' for companies or drugs. It distinguishes itself from sibling tools by specifying that it provides identifiers (CIK, ticker, RxCUI, LEI) needed by other tools and that it replaces multiple lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use the tool ('when a user mentions a name and you need the CIK...') and provides ordering guidance ('Use this BEFORE calling other tools that need official identifiers'). No explicit when-not-to-use, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
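An illustrative fact-check call, borrowing the example claim from the parameter table:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "validate_claim",
    "arguments": { "claim": "Apple's FY2024 revenue was $400 billion" }
  }
}
```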
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility. It details the output (verdict types, extracted form, actual value with citation, percent delta) and notes version limitations (v1 supports only company-financial claims). This sufficiently sets agent expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loading the core purpose, then providing usage guidance, version scope, and output summary in a clear, well-structured paragraph with no superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no nested objects, no output schema), the description covers purpose, usage, output format, and limitations. It is sufficiently complete for an agent to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the 'claim' parameter with an example. The description adds crucial context about the domain of claims it can handle (company-financial) and the underlying data source, which goes beyond the schema's generic description. This compensates for the schema's limited scope.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking natural-language factual claims against authoritative sources. It specifies the domain (company-financial claims for US public companies) and the data source (SEC EDGAR+XBRL), making it highly distinguishable from sibling tools like ask_pipeworx or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage scenarios with example queries ('Is it true that…?', 'Verify the claim that…') and indicates it replaces multiple sequential calls, but does not explicitly state when not to use it or compare with specific siblings. Nonetheless, the guidance is clear and practical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.