NWS MCP — US National Weather Service (no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-nws
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: A

Average 4.2/5 across 15 of 15 tools scored. Lowest: 3.6/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but 'ask_pipeworx' acts as a catch-all that could overlap with specific data tools, causing some ambiguity. Descriptions help clarify when to use each.

Naming Consistency: 3/5

Naming is mixed: some tools use verb_noun (get_alerts, resolve_entity) while others use plain verbs (ask_pipeworx, forget) or nouns (entity_profile). No single pattern dominates.

Tool Count: 5/5

15 tools is well within the ideal range. The server covers weather and comprehensive data services without being overwhelming.

Completeness: 5/5

The tool set covers weather forecasting, alerts, observations, data discovery, entity resolution, validation, memory, and feedback. No obvious gaps for the intended domain.

Available Tools (15)

ask_pipeworx: A

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters:
- question (required): Your question or request in natural language
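
To make the calling convention concrete, here is a minimal sketch of invoking this tool over the server's Streamable HTTP transport. It assumes the official MCP Python SDK (the `mcp` package) and a placeholder endpoint URL, since the listing above does not publish one:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder: the listing above does not publish the server URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # Open the Streamable HTTP transport, then an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One natural-language question; the server picks the data source.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the current US unemployment rate?"},
            )
            print(result.content)

asyncio.run(main())
```

The later snippets on this page reuse this `session` and omit the connection setup.
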
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It explains that the tool routes queries across many sources and fills arguments, but it does not disclose error handling, response format, rate limits, or potential failure modes. This is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) and well-structured: purpose first, then usage context, then examples. No superfluous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no output schema, the description is fairly complete, covering functionality, use cases, and examples. It lacks details on limitations or response structure, but overall is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the description adds value by providing example queries that illustrate the scope of the 'question' parameter, going beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers natural-language questions by automatically selecting the right data source, with a specific verb and resource. It distinguishes itself from sibling tools (e.g., compare_entities, discover_tools) by emphasizing it is a general-purpose query router.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists when to use the tool (e.g., 'What is X?', 'Look up Y') and that it alleviates the need to pick a specific tool. However, it does not provide when-not-to-use guidance or alternatives, which would be beneficial.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities: A

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters:
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
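
A sketch of both modes, assuming an initialized `session` from the ask_pipeworx example above; the tickers and drug names are illustrative:

```python
# type="company": financials from SEC EDGAR/XBRL for 2-5 tickers/CIKs.
companies = await session.call_tool(
    "compare_entities",
    {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]},
)

# type="drug": FAERS adverse events, FDA approvals, active trial counts.
drugs = await session.call_tool(
    "compare_entities",
    {"type": "drug", "values": ["ozempic", "mounjaro"]},
)
```
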
Behavior: 4/5

With no annotations, the description must disclose behaviors. It does so by naming data sources (SEC EDGAR/XBRL, FAERS, FDA) and the output format (paired data + citation URIs). It implies a read-only, batch operation. The description could be clearer about side effects (none), but the read-only nature is evident from the wording.

Conciseness: 5/5

The description is efficient: a single paragraph of about 70 words. It front-loads the core purpose, provides usage triggers, breaks out the two modes with clear examples, and ends with efficiency benefits. Every sentence adds value.

Completeness: 4/5

Given the absence of an output schema, the description adequately explains the return value structure ('paired data + citation URIs') and the data points for each entity type. It covers input validation (2–5 values required) and data sources. A mention of error handling or edge cases would push it to 5, but it is complete enough for confident tool selection.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds significant value by explaining how each 'type' value affects data sources and output fields (e.g., revenue and net income for companies; adverse events and approvals for drugs). It also gives example values like tickers, which are not in the schema description.

Purpose: 5/5

The description opens with a clear verb and resource: 'Compare 2–5 companies (or drugs) side by side in one call.' It specifies the action, the entity types, and the count. This immediately differentiates it from single-entity tools like entity_profile.

Usage Guidelines: 4/5

The description provides explicit trigger phrases ('compare X and Y', 'X vs Y', etc.) and use cases (tables/rankings of financial or drug data). It notes that this tool replaces 8–15 sequential calls, implicitly steering agents away from multiple single-entity calls. However, it does not explicitly state when not to use it (e.g., for a single entity).

discover_tools: A

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters:
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
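
A sketch of a discovery call, assuming the same `session`; the query string is illustrative:

```python
# Describe the task in natural language; cap the result list at 10 tools.
tools = await session.call_tool(
    "discover_tools",
    {"query": "analyze housing market trends", "limit": 10},
)
```
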
Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It only states that it returns relevant tool names and descriptions, but does not mention safety (e.g., read-only, no side effects) or any constraints, leaving agents uncertain about potential impacts.

Conciseness: 4/5

The description is slightly long because it lists many domains, but each one clarifies a use case. It is front-loaded and conveys purpose efficiently; minor redundancy could be trimmed.

Completeness: 3/5

There is no output schema, but the description mentions that names and descriptions are returned. It is missing details on pagination, sorting, and whether the tools come from the current server only. Adequate for a simple list tool, but not fully complete.

Parameters: 3/5

Schema coverage is 100%, with descriptions for both parameters. The description adds no meaning beyond listing example queries, so the baseline of 3 applies.

Purpose: 5/5

The description explicitly states that it finds tools from a description of the data or task, lists many example domains (SEC filings, financials, etc.), and distinguishes itself from sibling tools by advising agents to call it first when exploring options.

Usage Guidelines: 4/5

The description clearly says when to use the tool (browse, search, discover) and provides specific examples. It advises calling it first, but does not explicitly exclude scenarios where it should not be used.

entity_profile: A

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters:
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
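
A sketch, assuming the same `session`; note that the value must already be a ticker or zero-padded CIK:

```python
# Names are not accepted here; resolve them with resolve_entity first.
profile = await session.call_tool(
    "entity_profile",
    {"type": "company", "value": "AAPL"},
)
```
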
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses the return content (SEC filings, revenue, patents, news, LEI) and mentions citation URIs (pipeworx://). It does not mention side effects, rate limits, or authentication needs, but for a read-only aggregation tool, the disclosed information is adequate.

Conciseness: 4/5

The description is a single paragraph that front-loads the core purpose immediately. It contains some redundancy in the usage examples, but overall it is efficient and well-structured for an AI agent.

Completeness: 4/5

Given no output schema, the description does a good job explaining the returned data types and sources. It covers all the key aspects a user would expect. Minor gaps include no mention of pagination or response size limits, but for a summary tool this is acceptable.

Parameters: 5/5

Schema coverage is 100%, with descriptions for both parameters. The description adds significant value beyond the schema: it explains that type is currently limited to 'company', that value can be a ticker or zero-padded CIK, and explicitly warns that names are not supported (pointing to resolve_entity).

Purpose: 5/5

The description clearly states 'Get everything about a company in one call' and lists specific data types (SEC filings, fundamentals, patents, news, LEI). It distinguishes itself from sibling tools by noting it aggregates data that would otherwise require 10+ separate pack tools.

Usage Guidelines: 5/5

It explicitly provides example user queries ('tell me about X', 'give me a profile of Acme', etc.) and says when not to use it (names not supported; use resolve_entity first). This clearly guides the agent on when to invoke this tool versus alternatives.

forget: A

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters:
- key (required): Memory key to delete

Behavior: 2/5

No annotations exist, so the description carries the full burden. It only says 'delete' but does not disclose whether the deletion is permanent, what happens if the key doesn't exist, or any side effects. This lack of detail for a destructive operation limits transparency.

Conciseness: 5/5

The description is two sentences with no fluff. The purpose is front-loaded and every word serves a clear function, making it concise and well-structured.

Completeness: 4/5

Given the tool's simplicity (one parameter, no output schema), the description covers the essential context: what it does and when to use it. It could mention that the key must exist, but overall it is sufficient for a minimal tool.

Parameters: 3/5

Schema coverage is 100%, with the one parameter 'key' described as 'Memory key to delete'. The description does not add any meaning beyond the schema, so it meets the baseline but does not enhance understanding.

Purpose: 5/5

The description clearly states the verb 'delete' and the resource 'memory by key'. It distinguishes itself from the sibling tools 'remember' and 'recall' by naming them, making the purpose unambiguous.

Usage Guidelines: 4/5

The description provides explicit when-to-use scenarios: 'when context is stale, the task is done, or you want to clear sensitive data'. It also suggests pairing with 'remember' and 'recall', offering contextual guidance, though it doesn't explicitly state when not to use it.

get_alerts: A

Active NWS watches / warnings / advisories. Filter by US state (2-letter code), point (lat,lon), severity, or status. Returns event name, severity, urgency, headline, description, affected areas, and effective/expires times.

Parameters:
- area (optional): 2-letter US state/territory code (e.g., "CA", "TX")
- event (optional): Restrict to a specific event type (e.g., "Tornado Warning")
- limit (optional): Cap alerts returned (default 50, max 500)
- urgency (optional): Immediate | Expected | Future | Past | Unknown
- latitude (optional): Latitude (use with longitude for point query)
- severity (optional): Extreme | Severe | Moderate | Minor | Unknown
- longitude (optional): Longitude (use with latitude for point query)
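
A sketch of the two filter styles, assuming the same `session`; the coordinates are illustrative (Houston, TX):

```python
# Statewide filter, narrowed by severity and capped at 25 alerts.
tx_alerts = await session.call_tool(
    "get_alerts",
    {"area": "TX", "severity": "Severe", "limit": 25},
)

# Point query: latitude and longitude must be supplied together.
point_alerts = await session.call_tool(
    "get_alerts",
    {"latitude": 29.76, "longitude": -95.37},
)
```
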
Behavior: 3/5

With no annotations, the description carries the full burden. It lists the return fields but doesn't mention pagination, rate limits, or whether the data is read-only. It's adequate but not exhaustive.

Conciseness: 5/5

Two sentences, no fluff. Front-loaded with purpose and immediately followed by filter details and return fields.

Completeness: 4/5

There is no output schema, but the description covers the return fields. For a list tool with 7 optional parameters, it adequately explains filtering and output. Minor gap: it doesn't mention the default limit.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description adds context about filtering by state, point, severity, or status, but that is already in the schema. No additional parameter semantics beyond grouping.

Purpose: 5/5

The description clearly states it retrieves active NWS watches/warnings/advisories, with specific filter options. It is distinct from sibling tools like get_forecast and get_observation.

Usage Guidelines: 4/5

The description implies when to use it: for active alerts, with filters. It doesn't explicitly state when not to use it or list alternatives, but the scope is well-defined.

get_forecast: A

Get the 7-day NWS forecast for a US lat/lon. Returns named periods (e.g., "Tonight", "Wednesday") with high/low temperature, wind, and a short text forecast. US locations only.

Parameters:
- latitude (required): US latitude
- longitude (required): US longitude
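
A sketch, assuming the same `session`; the coordinates are illustrative (San Francisco):

```python
# Seven-day forecast as named periods; US coordinates only.
forecast = await session.call_tool(
    "get_forecast",
    {"latitude": 37.77, "longitude": -122.42},
)
```
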
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses the return structure (named periods, high/low, wind, text) and the geographic restriction. It does not cover auth, rate limits, or error behavior, but for a read-only forecast tool this is sufficient.

Conciseness: 5/5

Two sentences, no fluff. The first states the purpose and key constraint, the second details the return format. Every word adds value.

Completeness: 5/5

There is no output schema, so the description must explain the return. It adequately describes the named periods, the key data fields (high/low, wind, text), and the time horizon (7-day). No obvious missing information for a weather forecast tool.

Parameters: 3/5

Schema coverage is 100%, with brief descriptions ('US latitude', 'US longitude'). The description reinforces 'US lat/lon' but adds no new detail beyond the schema, so the baseline of 3 applies.

Purpose: 5/5

The description clearly states the verb 'Get' and the resource '7-day NWS forecast' for a US lat/lon, and specifies the return content (named periods with temperature, wind, text). It is distinct from siblings like get_hourly_forecast by mentioning '7-day' and 'named periods'.

Usage Guidelines: 3/5

It provides the constraint 'US locations only' but does not guide when to use this tool versus siblings like get_hourly_forecast or get_observation. No explicit when-not-to-use guidance or alternatives.

get_hourly_forecast: A

Get the hourly NWS forecast for a US lat/lon (~168 hours). Useful for short-term planning, severe-weather windows, or precipitation timing.

Parameters:
- latitude (required): US latitude
- longitude (required): US longitude
- max_hours (optional): Cap hours returned (default 24)
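
A sketch, assuming the same `session`; the coordinates are illustrative (Denver):

```python
# Request 48 hours instead of the default 24 (up to ~168 available).
hourly = await session.call_tool(
    "get_hourly_forecast",
    {"latitude": 39.74, "longitude": -104.99, "max_hours": 48},
)
```
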
Behavior: 3/5

No annotations are provided, so the description must carry the full burden. It mentions the hourly nature and the 168-hour window, but does not disclose traits like data freshness, error handling, or rate limits. Adequate for a simple read operation.

Conciseness: 5/5

Two concise sentences: the first states the purpose, the second gives use cases. No wasted words, front-loaded with key information.

Completeness: 3/5

Completeness is adequate for a simple tool with a well-documented schema. There is no output schema, so the return format is not described, but the description gives enough context for an agent to infer the result is a list of hourly forecasts.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds no meaning beyond what the schema already provides for latitude, longitude, and max_hours.

Purpose: 5/5

It clearly states 'Get the hourly NWS forecast for a US lat/lon (~168 hours)': a specific verb, resource, and scope. It distinguishes itself from the sibling get_forecast implicitly by mentioning hourly granularity.

Usage Guidelines: 4/5

It provides explicit use cases: 'short-term planning, severe-weather windows, or precipitation timing'. There is no guidance on when not to use it or on alternatives, but the use cases are clear.

get_observation: A

Latest observation from a specific NWS weather station. Returns temperature, humidity, wind, visibility, pressure, and present-weather codes.

Parameters:
- station_id (required): 4-character NWS / ICAO station ID (e.g., "KSFO", "KJFK", "KDEN")
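
A sketch, assuming the same `session`:

```python
# Latest surface observation at Denver International (ICAO KDEN).
obs = await session.call_tool("get_observation", {"station_id": "KDEN"})
```
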
Behavior: 4/5

With no annotations, the description provides reasonable transparency by listing the return fields (temperature, humidity, etc.) and indicating a read operation, though it omits details on error handling and data freshness.

Conciseness: 5/5

The description is a single, well-structured sentence that conveys all necessary information without redundancy.

Completeness: 4/5

Given no output schema, the description partially compensates by naming the returned fields, but it lacks details on data structure and units, which would improve completeness for an agent.

Parameters: 3/5

The description does not add meaning beyond the schema for the single parameter station_id, which already has a thorough description. Schema coverage is 100%, so the baseline applies.

Purpose: 5/5

The description clearly states 'Latest observation from a specific NWS weather station' and lists the data returned, making it easy to distinguish from sibling tools like get_forecast or get_alerts.

Usage Guidelines: 3/5

The description implies usage for current weather conditions but lacks explicit when-to-use or when-not-to-use guidance, especially with siblings like get_forecast and get_hourly_forecast available.

pipeworx_feedback: A

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters:
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1–2 sentences typical, 2000 chars max.
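
A sketch of filing a data-gap report, assuming the same `session`; the message and context values are illustrative:

```python
# Rate-limited to 5 submissions per identifier per day.
await session.call_tool(
    "pipeworx_feedback",
    {
        "type": "data_gap",
        "message": "get_alerts has no filter for NWS marine zones.",
        "context": "get_alerts / weather pack",
    },
)
```
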
Behavior: 5/5

No annotations are provided, so the description fully covers the behavioral traits: rate-limited to 5 per identifier per day, free, doesn't count against the quota, and the team reads digests daily. No contradictions.

Conciseness: 5/5

The description is concise with no redundant information. It is front-loaded with purpose, uses a clear structure, and every sentence adds value.

Completeness: 5/5

Given no output schema, the description provides complete context: purpose, usage guidance, behavioral traits, and parameter hints. It covers all the information an agent needs to use the tool correctly.

Parameters: 4/5

Schema coverage is 100% with good descriptions, but the description adds useful guidance (e.g., 'Be specific', '1–2 sentences typical', '2000 chars max') that aids usage beyond the schema.

Purpose: 5/5

The description clearly states the tool is for providing feedback about bugs, missing features, data gaps, or praise. It lists the specific types and explains each, distinguishing it from sibling tools like ask_pipeworx or discover_tools.

Usage Guidelines: 5/5

It explicitly states when to use the tool for each feedback type (bug, feature, data_gap, praise) and what not to do (don't paste the end-user prompt; describe the issue in terms of tools/packs). It includes rate limit and quota information.

recall: A

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters:
- key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses retrieval vs. listing, scoping by identifier, and pairing with remember/forget. It is missing details on error behavior (e.g., a missing key).

Conciseness: 4/5

The description is concise at four sentences, front-loaded with purpose. Each sentence adds value, though the phrasing could be slightly tighter.

Completeness: 4/5

Given the simple input (one optional parameter) and no output schema, the description is mostly complete. It covers purpose, usage, scoping, and pairing, but lacks return format details.

Parameters: 4/5

The schema has 100% coverage (the key parameter is described). The tool description adds value by explaining the omit behavior (list all keys) and scoping, complementing the schema.

Purpose: 5/5

The description clearly states the tool retrieves a value saved via 'remember' or lists all keys, distinguishing it from the siblings 'remember' (save) and 'forget' (delete).

Usage Guidelines: 4/5

It provides clear context for when to use it (look up stored context) and implies alternatives (remember/forget), but lacks explicit when-not-to-use guidance or direct sibling comparisons.

recent_changes: A

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters:
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
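
A sketch showing the relative `since` shorthand, assuming the same `session`:

```python
# "30d" = last 30 days; an ISO date like "2026-04-01" also works.
changes = await session.call_tool(
    "recent_changes",
    {"type": "company", "value": "MSFT", "since": "30d"},
)
```
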
Behavior: 3/5

No annotations exist, so the description carries the full burden. It discloses the parallel fan-out to SEC, GDELT, and USPTO and the output structure (changes, count, URIs). However, it does not explicitly state that the tool is read-only, or mention error handling or rate limits.

Conciseness: 4/5

The description is moderately sized (~130 words) and well-structured: opening purpose, usage examples, then internal behavior and return details. It is concise and front-loaded, though it could be slightly trimmed.

Completeness: 4/5

Given no output schema, the description explains the return values and mentions citation URIs. It covers the fan-out logic and parameter formats. It lacks details on result limits, pagination, and error cases, but these are not critical for basic usage.

Parameters: 5/5

Schema coverage is 100%, and the description adds significant value: for 'since', it explains both ISO and relative formats with examples; for 'value', it clarifies ticker vs. CIK; for 'type', it confirms only 'company' is supported, providing context beyond enums.

Purpose: 5/5

The description clearly states the tool's purpose: retrieving recent changes for a company. It provides multiple example user queries and distinguishes itself from siblings like entity_profile by focusing on temporal changes.

Usage Guidelines: 4/5

The description explicitly lists use cases and example queries (e.g., 'what's happening with X?', 'news on Apple this month'). It does not specify when not to use the tool or mention alternatives, but the context is clear enough.

remember: A

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters:
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
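
A sketch of the full remember/recall/forget cycle, assuming the same `session`; the key and value are illustrative:

```python
# Save a fact worth carrying forward across calls or sessions.
await session.call_tool(
    "remember", {"key": "target_ticker", "value": "AAPL"}
)

# Read it back later, or omit the key to list every saved key.
saved = await session.call_tool("recall", {"key": "target_ticker"})
all_keys = await session.call_tool("recall", {})

# Clean up once the context is stale or the task is done.
await session.call_tool("forget", {"key": "target_ticker"})
```
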
Behavior: 4/5

No annotations are provided; the description reveals persistence details (authenticated vs. anonymous, 24-hour retention) and scoping by identifier, which is adequate for transparency.

Conciseness: 5/5

A short, front-loaded paragraph with every sentence adding value; no wasted words. Purpose, usage, and details are logically ordered.

Completeness: 4/5

Given the simple parameters, no output schema, and no annotations, the description covers purpose, usage, persistence, and pairing. It is missing potential limits (size, count) but is acceptable.

Parameters: 3/5

The schema already describes both parameters fully (100% coverage). The description adds example values and reinforces meaning but does not significantly extend beyond the schema.

Purpose: 5/5

The description clearly states the action ('save') and the resource ('data the agent will need to reuse later'), and distinguishes itself from the siblings 'recall' and 'forget' by explicitly mentioning them.

Usage Guidelines: 4/5

It provides clear when-to-use guidance with examples ('discover something worth carrying forward') and pairs with recall/forget, though no explicit 'when not to use' is given.

resolve_entity: A

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters:
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
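
A sketch of the name-to-identifier step, assuming the same `session`:

```python
# Resolve a plain name to official IDs (ticker, CIK, LEI, or RxCUI).
ids = await session.call_tool(
    "resolve_entity", {"type": "company", "value": "Apple"}
)
# The resolved ticker/CIK can then feed entity_profile, compare_entities,
# or recent_changes, which all require official identifiers.
```
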
Behavior: 4/5

The description discloses the return values (IDs + citation URIs) and the ID systems covered. Without annotations, it carries the full burden; it is transparent about what the tool does but does not mention error conditions or rate limits.

Conciseness: 4/5

The description is concise and front-loaded with the core purpose, and the examples are helpful. A minor reduction in redundancy (e.g., 'Replaces 2–3 lookup calls') could improve conciseness, but overall it is well-structured.

Completeness: 4/5

Given no output schema, the description adequately explains the return format (IDs and URIs). It covers the main use case and context, though it lacks details on potential multiple matches or errors. For a lookup tool, this is sufficient.

Parameters: 4/5

Schema coverage is 100%, with both parameters described. The description enhances understanding with examples and explains how to use the 'value' parameter for each type, adding significant value beyond the schema.

Purpose: 5/5

The description clearly states the tool's purpose: resolving entity names to canonical identifiers (CIK, ticker, RxCUI, LEI). It provides specific examples and distinguishes itself from similar tools by noting it replaces multiple lookup calls.

Usage Guidelines: 4/5

It explicitly advises using this tool before others that need official identifiers, providing clear usage context. However, it does not state when not to use it or mention alternatives among siblings like entity_profile.

validate_claim: A

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters:
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
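
A sketch, assuming the same `session`; the claim text is illustrative:

```python
# Returns a verdict such as confirmed, approximately_correct, or refuted.
verdict = await session.call_tool(
    "validate_claim",
    {"claim": "Apple's FY2024 revenue was $400 billion"},
)
```
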
Behavior: 4/5

With no annotations, the description fully discloses the return format (verdict, extracted form, actual value with citation, percent delta) and the domain limitation (v1 supports company-financial claims). No destructive behavior is implied, and the read-only nature is clear. It could mention safety or auth, but overall it is adequate.

Conciseness: 4/5

The description is a single paragraph of five to six sentences, front-loaded with purpose. It is concise but could benefit from a more scannable structure (e.g., bullet points for return values). No unnecessary content.

Completeness: 4/5

Given the simple input schema (one parameter, no output schema, no annotations), the description covers purpose, usage, domain, and return format adequately. Missing details like error handling are minor for this tool's complexity.

Parameters: 4/5

There is only one parameter, 'claim', with 100% schema coverage. The description adds examples and clarifies the claim scope beyond the schema's generic description, providing strong semantic guidance.

Purpose: 5/5

The description explicitly states 'Fact-check, verify, validate, or confirm/refute a natural-language factual claim', providing a specific verb and resource (authoritative sources). It gives example queries and distinguishes itself from siblings like ask_pipeworx by focusing on structured claim validation.

Usage Guidelines: 4/5

It clearly says 'Use when an agent needs to check whether something a user said is true', with example phrasing. It does not explicitly list when not to use it or alternatives, but the domain restriction (US company financial claims) implicitly guides usage, and the note about replacing 4–6 calls adds efficiency context.
