Nps
Server Details
NPS MCP — US National Park Service (free key, generous limits)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-nps |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- **Full call logging** – Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- **Tool access control** – Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- **Managed credentials** – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- **Usage analytics** – See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score 4.1/5 across all 16 tools scored; lowest tool score: 3.1/5.
The tool set mixes two unrelated domains (NPS parks and Pipeworx company data), and within the Pipeworx tools there is overlap between ask_pipeworx, entity_profile, recent_changes, and validate_claim, making it hard for an agent to choose the right tool.
NPS tools follow a 'list_X'/'get_X' pattern, but Pipeworx tools use varied styles like 'ask_pipeworx', 'discover_tools', 'entity_profile', and 'recent_changes'. The mix of verb_noun, noun_noun, and adjective_noun is inconsistent.
With 16 tools covering two distinct domains plus memory management, the count is borderline but not excessive. However, the server name 'Nps' suggests it should be solely NPS, making the Pipeworx tools feel out of scope.
For NPS, the tools cover parks, alerts, campgrounds, and activities reasonably well. For company data, there are many tools but some common needs like stock price are missing. The memory tools are a bonus but the overall domain is unclear.
Available Tools
16 tools

ask_pipeworx (grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
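For illustration, a minimal `tools/call` payload for this tool might look like the following; the question is one of the examples from the description:

```json
{
  "name": "ask_pipeworx",
  "arguments": {
    "question": "What is the current US unemployment rate?"
  }
}
```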
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility. It names many data sources (SEC EDGAR, FRED, etc.) and states it picks the right tool and returns results. However, it does not disclose potential delays, failure modes, or authentication requirements, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately long but each sentence serves a purpose: defining the tool, specifying usage, listing examples, and noting the breadth of sources. It is front-loaded with the core function. Minor redundancy could be trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a general QA tool with a single parameter and no output schema, the description covers the essential aspects: purpose, usage examples, and scope of data sources. It lacks details on error handling or performance but is largely sufficient given the tool's nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'question' has a schema description that is identical to the tool description's mention ('Your question or request in natural language'). The description adds significant value by providing concrete examples of questions, enhancing the agent's understanding of valid inputs beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering natural-language questions by automatically selecting the right data source. It effectively distinguishes itself from sibling tools, which are more specific (e.g., list_parks, compare_entities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises when to use the tool: when users ask questions like 'What is X?', 'Look up Y', etc., and you don't want to manually pick a specific tool. It provides implicit differentiation from siblings but lacks explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
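A sketch of a company comparison, using the example tickers from the `values` schema (illustrative only):

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT"]
  }
}
```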
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data sources (SEC EDGAR/XBRL for companies; FAERS, FDA, and clinical-trial data for drugs) and mentions the return of paired data with citation URIs. No annotations are provided, so the description carries the full burden; it lacks detail on potential rate limits, though destructive behavior is not a concern for a read-only comparison tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph front-loading purpose, with specific examples and data sources. Slightly dense but efficient; could benefit from bullet points for readability, but remains concise and scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description must explain return format. Mentions 'paired data + pipeworx:// citation URIs' but is vague about structure and content of the paired data. For a compare tool with up to 5 entities, more detail on output would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds significant meaning beyond schema: explains 'type' enum with specific data fields pulled for each, and specifies value formats (tickers/CIKs vs drug names). Schema coverage is 100%, but description greatly enhances usability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool compares 2–5 companies or drugs side-by-side, lists concrete use cases and data sources for each type, and distinguishes from sibling tools like entity_profile (single entity).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists natural language triggers ('compare X and Y', 'X vs Y', 'how do X, Y, Z stack up') and states it replaces 8-15 sequential calls, but does not explicitly mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
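An illustrative discovery call; the query string is taken from the schema's own examples, and `limit` is an arbitrary value within the documented range:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}
```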
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description does not disclose behavioral traits like auth needs, side effects, or read-only nature beyond implicit search.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph but somewhat verbose with long list of domains; could be more concise while retaining key info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage timing, query format, limit details; no output schema but description is adequate for a meta-tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds value beyond schema: explains limit default/max, gives detailed query examples; schema coverage is 100% but description enriches understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states tool finds tools by describing data or task, provides specific domains, distinguishes from sibling tools which are specific entity lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST' when browsing among many tools, and lists when to use (browse, search, look up) with examples.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
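For example, a profile lookup by ticker, as the description suggests (values illustrative):

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}
```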
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses the full scope of returned data (filings, fundamentals, patents, news, LEI) and citation format (pipeworx:// URIs). It implies read-only behavior and does not mention destructive actions, which is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that efficiently packs purpose, use cases, data contents, and parameter tips. It's not overly long, but could be broken into bullet points for readability. Still, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains all returned data categories and the citation format. It also covers prerequisites (type/value, name resolution). For a complex aggregation tool, this is comprehensive and leaves no major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameters are 100% schema-covered, but the description adds critical context: value accepts ticker or CIK (not names), and type currently only supports 'company'. This guidance prevents misuse, especially the note to use resolve_entity for names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the purpose: 'Get everything about a company in one call.' It lists specific data returned (SEC filings, fundamentals, patents, news, LEI) and distinguishes itself from siblings like resolve_entity by describing when to use it instead of multiple separate tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage examples ('tell me about X', 'research Microsoft') and explains when it's appropriate (avoid calling 10+ tools). It also mentions an alternative (resolve_entity for names). Could be more explicit about exclusion cases, but guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
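A minimal example, assuming a key previously saved via remember (the key name is hypothetical):

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```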
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry behavioral burden. It states deletion but does not disclose behavior on nonexistent keys, idempotency, or undo capabilities. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first for purpose, second for usage guidance. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (1 param, no output schema). Description covers purpose, when to use, and related tools. Could mention error handling or return value, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds 'stored memory by key' which aligns with schema. No additional format or usage details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('previously stored memory by key'). It distinguishes itself from sibling tools 'remember' and 'recall' by being the deletion counterpart.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific use cases: when context is stale, task is done, or to clear sensitive data. Also mentions pairing with 'remember' and 'recall'. Lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_park (grade A)
Fetch a single park by parkCode. Returns full description, designation, activities, topics, address, contacts, operating hours, entrance fees, and image gallery.
| Name | Required | Description | Default |
|---|---|---|---|
| park_code | Yes | 4-letter NPS parkCode (e.g., "yose", "grca", "yell") | |
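For example, fetching Yosemite using one of the parkCodes from the schema:

```json
{
  "name": "get_park",
  "arguments": {
    "park_code": "yose"
  }
}
```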
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present; description adequately states it is a read operation returning data, but does not mention error handling or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys purpose and output; could be slightly more structured but is concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple one-parameter tool; lists return fields sufficiently, though missing potential error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameter documentation; description adds context about return fields but not new parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool fetches a single park by parkCode and enumerates the returned fields, distinguishing it from sibling list_parks which likely returns a list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly indicates use when a single park's details are needed, but lacks explicit guidance on when not to use or comparison with list_parks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_alerts (grade A)
Current park alerts — closures, road conditions, weather warnings, danger advisories. Filter by parkCode or state.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-500 | 50 |
| query | No | Free-text alert search | |
| state | No | 2-letter state code | |
| park_code | No | parkCode(s), comma-separated | |
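An illustrative call filtering alerts by state (all arguments are optional; values are examples):

```json
{
  "name": "list_alerts",
  "arguments": {
    "state": "CA",
    "limit": 50
  }
}
```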
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states it lists alerts with filters, but does not disclose behavioral traits like pagination, rate limits, or side effects. For a read-only list tool, this is adequate but not high.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff. The first sentence establishes purpose and context; the second specifies filter options. Every word is earned.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and the description does not mention what fields or structure the alerts have. However, for a simple list tool with clear schema, the description provides core context but lacks detail on return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds context by listing alert types, but does not add significant semantic details beyond what the schema already provides for each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists current park alerts and enumerates types (closures, road conditions, weather warnings, danger advisories). It mentions filtering by parkCode or state, distinguishing it from siblings like list_parks or list_campgrounds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving alerts but lacks explicit guidance on when to use versus alternatives. It provides only filtering hints without when-not or alternative tool mentions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campgrounds (grade B)
Campgrounds inside a park or state. Returns name, description, RV/tent capacity, amenities, reservation info, GPS.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-500 | 25 |
| query | No | Free-text query (optional) | |
| state | No | 2-letter state code (optional) | |
| park_code | No | parkCode (optional) | |
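A sketch of a per-park campground query (values illustrative):

```json
{
  "name": "list_campgrounds",
  "arguments": {
    "park_code": "yose",
    "limit": 10
  }
}
```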
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations; description does not disclose behavioral traits such as pagination, idempotency, or data freshness. Merely states output fields, which is minimal for behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with scope 'Campgrounds inside a park or state', then lists return information concisely. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given moderate complexity (4 optional params), no output schema, and sibling tools, description omits details like return structure (array?), pagination, and relations to other tools. Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes all 4 parameters (100% coverage). Description adds no extra semantic meaning beyond summarizing return fields, so meets baseline but does not enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists campgrounds within a park or state, and lists returned fields. Distinguishes from sibling list_parks by specifying campground-specific attributes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use vs alternatives (e.g., list_parks, get_park), nor any when-not conditions. Implicitly can be used to find campgrounds but no explicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_parks (grade A)
List or search NPS parks. Filter by free-text query, state code (2-letter, comma-separated), or parkCode list. Returns full name, description, designation, states, GPS, address, official URL.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Results (1-500) | 25 |
| query | No | Free-text search | |
| start | No | 0-based offset | 0 |
| state | No | 2-letter state code(s), comma-separated (e.g., "CA,NV") | |
| park_code | No | Specific parkCode(s), comma-separated (e.g., "yose,grca") | |
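For example, listing parks across two states, using the schema's example state codes:

```json
{
  "name": "list_parks",
  "arguments": {
    "state": "CA,NV",
    "limit": 25,
    "start": 0
  }
}
```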
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description does not explicitly state that the tool is read-only or safe, though 'list' implies it. Could benefit from mentioning no side effects or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and filters, followed by return fields. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks mention of pagination (limit, start) and that results are a list. No output schema, so description should cover more behavioral aspects. Adequate but has gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds minor clarification (e.g., state codes are 2-letter, comma-separated), but mostly repeats schema. No significant added meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists or searches NPS parks with specific filters and return fields. It distinguishes from siblings like get_park (single park) and list_alerts (alerts).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for listing/searching parks with filters, but does not explicitly exclude alternatives or mention when to use other tools. Context from sibling names helps, but description lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_things_to_do (grade A)
Activities recommended by the park service — hikes, ranger programs, scenic drives, ranger-led activities. Filter by parkCode or state.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-500 | 25 |
| query | No | Free-text query (optional) | |
| state | No | 2-letter state code (optional) | |
| park_code | No | parkCode (optional) | |
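An illustrative activities query scoped to one park (the query term is hypothetical):

```json
{
  "name": "list_things_to_do",
  "arguments": {
    "park_code": "grca",
    "query": "hikes"
  }
}
```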
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention that the tool is read-only, nor does it cover side effects, authentication needs, or output behavior beyond listing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose. It is concise with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with 4 optional parameters and no output schema, the description provides the essential purpose and filtering hints but lacks information on return format, pagination, or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is already documented. The description adds minimal extra meaning by connecting parkCode and state to filtering activities, but this is not substantial.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists activities (hikes, ranger programs, scenic drives) from the park service, and distinguishes itself from sibling tools like list_parks or list_alerts by specifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering by parkCode or state, giving clear usage context. However, it does not explicitly state when to prefer this tool over alternatives such as list_campgrounds or list_alerts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
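A sketch of a data_gap report; the message reflects the stock-price gap noted in the quality summary above, and the shape of `context` is an assumption since the schema does not define its fields:

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "No tool exposes current stock price for public companies.",
    "context": { "tool": "compare_entities" }
  }
}
```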
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the team reads digests daily, signals affect roadmap, and the tool is rate-limited and free. Missing details about post-submission actions, but still informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: purpose first, then usage guidance, then behavioral notes. No redundant sentences; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a simple tool, the description covers purpose, usage, constraints, and behavioral context. It could mention the result after submission (e.g., confirmation), but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. The description adds value by elaborating on enum meanings (bug, feature, etc.) and providing guidance on writing the message (specific tool, error, etc.). It also explains the context object.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: providing feedback about broken items, missing features, or praise. It lists specific categories (bug, feature, data_gap, praise) and distinguishes from sibling tools, which are primarily data retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use each feedback type and what not to do (e.g., don't paste user prompts). It also includes rate limits (5 per identifier per day) and notes that it's free and doesn't count against quota.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
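For example, retrieving a single key (hypothetical name); pass empty arguments instead to list all saved keys:

```json
{
  "name": "recall",
  "arguments": {
    "key": "target_ticker"
  }
}
```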
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It discloses scoping (by identifier) and the fact that it retrieves previously stored values, which is sufficient for a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each essential: purpose, usage context, and pairing. Front-loaded with the main action, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return value (saved value or list of keys) and scoping, but could improve by specifying the format or type of the returned value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the description adds value by explaining the behavior when the key is omitted (list all keys) and the context of retrieval from remember, beyond the schema's brief description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's action (retrieve/list) and resource (saved values/keys), and distinguishes it from siblings like remember (save) and forget (delete) by naming them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use the tool (look up prior context) and how to list all keys (omit key argument). It also pairs with remember and forget, though it doesn't specify when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
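A sketch of a 30-day monitoring call, combining the example ticker with the suggested relative window:

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}
```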
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description details behavior: fans out to SEC EDGAR, GDELT, USPTO in parallel; explains since format (ISO or relative); and notes return structure (changes, count, URIs). Does not discuss rate limits or what happens on empty results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured and front-loaded with purpose and examples. A bit lengthy but every sentence adds value. Could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description covers return type (structured changes, total_changes, citation URIs). Explains parallel fan-out to multiple sources. Adequate for tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage. Description adds value by explaining since format with examples and defaults, value formats (ticker or CIK), and that type is only company. Adds clarity beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'What's new with a company in the last N days/months?' with specific verb-resource (fans out to sources). It distinguishes from siblings by focusing on recent changes, but does not explicitly compare to similar tools like entity_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides multiple example queries showing when to use ('what's happening with X?', 'any updates on Y?') and suggests typical monitoring windows. Lacks explicit when-not or alternative tool guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
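For illustration, saving a resolved ticker under one of the schema's example keys:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```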
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key-value storage, scoping by identifier, persistence duration (24h anonymous, persistent for authenticated), and suggests pairing. No side effects mentioned but adequate for simple store.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph of 4 sentences, front-loaded with purpose. All sentences are informative; slight room for tightening but well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a simple key-value store: covers purpose, usage, behavioral details (persistence, scoping), and pairing with sibling tools. No missing critical info.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for key and value. Description adds usage context but no additional parameter-level meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool saves data for reuse, with specific examples (ticker, address, preference, subject). Differentiates from siblings by mentioning recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use when discovering something worth carrying forward, with concrete examples. Mentions pairing with recall/forget, but lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
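An example drug resolution, using a name from the schema (illustrative):

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```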
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the tool returns IDs and pipeworx:// citation URIs, and includes examples. However, it does not mention error handling or what happens if an entity is not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences), front-loaded with the core purpose, and each sentence contributes meaningful information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description adequately explains return values (IDs + citation URIs). The tool is simple with two parameters, and the description covers key behavioral aspects. Missing error handling details but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers both parameters with descriptions. The description adds value by illustrating usage patterns and providing concrete mappings (e.g., 'Apple' → AAPL). This enhances the agent's understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves entity names to official identifiers (CIK, ticker, RxCUI, LEI). It gives concrete examples and distinguishes itself from sibling tools by specifying its role as a preparatory step for other tools that require identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('when a user mentions a name and you need the CIK...'), provides usage order ('Use this BEFORE calling other tools'), and hints at alternatives by noting it replaces multiple lookup calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
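A minimal call using the schema's own example claim:

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```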
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses key behavioral traits: it supports company-financial claims via SEC EDGAR + XBRL, returns a verdict with structured details, and replaces multiple sequential calls. It doesn't cover all behaviors (e.g., rate limits), but it is transparent about its capabilities and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the core purpose. It packs useful information without verbosity, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 required parameter, no output schema, no annotations), the description covers input format, domain, and return details adequately. It is complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'claim' with 100% schema coverage. The description adds value by providing example claim formats and explaining that the tool consolidates multiple steps. This goes beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking, verifying, and validating natural-language claims. It provides specific examples of usage and distinguishes itself from siblings by focusing on claim validation against authoritative sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool ('Use when an agent needs to check whether something a user said is true…') and specifies the domain (company-financial claims). It doesn't explicitly state when not to use it or provide alternatives, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.