
Meteostat

Server Details

Meteostat MCP — historical weather from 11k+ stations (no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-meteostat
GitHub Stars: 0
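
The server advertises Streamable HTTP transport. Below is a minimal connection sketch using the official MCP Python SDK; the endpoint URL is a placeholder, since the URL field above is blank on this listing:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: substitute the actual Streamable HTTP URL
# shown on this server's page.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # streamablehttp_client yields read/write streams plus a session-id getter.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The per-tool sketches below reuse an already-initialized `session` of this kind.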

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 13 of 13 tools scored. Lowest: 3.6/5.

Server Coherence: C

Disambiguation: 2/5

Several tools have overlapping purposes, particularly ask_pipeworx, compare_entities, entity_profile, and validate_claim, which all deal with company data. The descriptions attempt to disambiguate, but the boundaries are unclear, leading to potential misselection by an agent.

Naming Consistency: 2/5

Tool names follow no consistent pattern: some use verb_noun (ask_pipeworx, compare_entities), some use get_ prefix (get_daily_history, get_monthly_normals), others are single verbs (forget, recall, remember). This mix of styles makes it hard to predict tool names.

Tool Count: 2/5

The server name 'Meteostat' suggests a weather focus, but out of 13 tools, only 2 are weather-specific (get_daily_history, get_monthly_normals). The remaining 11 tools belong to a broader 'Pipeworx' platform, making the tool count inappropriate for the apparent weather domain.

Completeness: 2/5

For the weather domain, the server is severely incomplete: it lacks current conditions, forecasts, hourly data, and station search. The Pipeworx tools cover many data sources, but as a set they do not form a coherent whole, leaving obvious gaps in both weather and general data retrieval.

Available Tools (13 tools)
ask_pipeworx (A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
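
To make the call shape concrete, a minimal sketch (assuming an initialized ClientSession as in the connection example near the top of this page; the question text is illustrative):

```python
from mcp import ClientSession

async def ask(session: ClientSession) -> None:
    # Single free-form question; Pipeworx routes it to the right source.
    result = await session.call_tool(
        "ask_pipeworx",
        {"question": "What is the current US unemployment rate?"},
    )
    for block in result.content:
        print(block)
```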
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains that the tool routes to 300+ sources and fills arguments, but does not disclose potential errors, rate limits, or whether it may ask clarifying questions. This is a minor gap for a complex routing tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loads the purpose, and uses bullet-like examples without waste. Every sentence contributes value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description covers the tool's core function and examples but lacks details on output format, error handling, or potential ambiguity in questions. Slightly incomplete for a complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter. The description adds value beyond the schema by explaining the routing behavior and providing examples, though the schema already describes the parameter well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers natural-language questions by automatically selecting the right data source. It lists examples and a wide range of covered sources, distinguishing it from sibling tools like 'compare_entities' or 'get_daily_history' which are more specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use it: when the user asks questions like 'What is X?' and the agent doesn't want to figure out which specific tool to call. It implies alternatives exist for known tools, providing clear usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
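
A hedged sketch of a company comparison (tickers are illustrative; session as in the connection example above):

```python
from mcp import ClientSession

async def compare_companies(session: ClientSession) -> None:
    # type="company" pulls revenue, net income, cash, and long-term debt
    # from SEC EDGAR/XBRL for each ticker; 2-5 values are allowed.
    result = await session.call_tool(
        "compare_entities",
        {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]},
    )
    print(result.content)
```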
Behavior: 3/5

No annotations provided, so the description carries the full burden. It explains the data sources (SEC EDGAR/XBRL, FAERS) and the return type (paired data + citations) but does not discuss failure modes, rate limits, or other behavioral traits.

Conciseness: 4/5

The description is front-loaded with purpose and use cases. While somewhat lengthy, every sentence adds value. It could be slightly more concise but is still well structured.

Completeness: 5/5

There is no output schema, but the description explains the return data for each type (revenue, net income, etc.) and mentions citation URIs. It covers both entity types thoroughly, making it complete for the tool's complexity.

Parameters: 4/5

Schema coverage is 100%, for a baseline of 3. The description adds valuable context: it specifies that 'values' takes tickers/CIKs for companies and names for drugs, and explains the 'type' parameter's role. This goes beyond the schema alone.

Purpose: 5/5

Clearly states that the tool compares 2-5 companies or drugs, listing specific data points (revenue, net income, etc.) for each type. Distinguishes itself from siblings by noting it replaces 8-15 sequential agent calls, indicating efficiency.

Usage Guidelines: 4/5

Explicitly states when to use it (e.g., the user says 'compare X and Y', 'X vs Y', or wants tables/rankings) and gives examples. It does not directly mention when not to use it, but context makes that clear.

discover_tools (A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
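
A minimal discovery sketch (the query text is illustrative):

```python
from mcp import ClientSession

async def find_weather_tools(session: ClientSession) -> None:
    # limit is optional (default 20, max 50); query is plain English.
    result = await session.call_tool(
        "discover_tools",
        {"query": "historical weather for a station", "limit": 5},
    )
    print(result.content)
```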
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states that it returns the 'top-N most relevant tools with names + descriptions' and implies no destructive actions. However, it does not disclose potential side effects like rate limits or whether the tool list is static. Good but not exhaustive.

Conciseness: 4/5

The description is front-loaded with the main purpose and usage context. The list of domains is somewhat lengthy but provides concrete examples. Overall, it is efficient and well structured.

Completeness: 4/5

Given no output schema, the description explains the return (tool names + descriptions) and provides critical usage guidance ('Call this FIRST'). It is fairly complete for a discovery tool, though it could mention whether the tool list is comprehensive or whether there are limits on the number of tools.

Parameters: 3/5

Schema description coverage is 100%, so this baselines at 3. The description adds little beyond the schema: it mentions 'natural language' for query and 'top-N' for limit, but the schema already provides clear descriptions. No significant enhancement.

Purpose: 5/5

The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It lists many specific domains (SEC filings, financials, etc.) and distinguishes itself from sibling tools by being a discovery mechanism rather than a domain-specific query tool.

Usage Guidelines: 4/5

The description explicitly says 'Use when you need to browse, search, look up, or discover what tools exist' and instructs 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This provides clear context for when to use it, though it could state more explicitly when not to.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
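
A sketch of a profile lookup (the ticker reuses the description's own example):

```python
from mcp import ClientSession

async def profile_apple(session: ClientSession) -> None:
    # Only type="company" is supported today; value must be a ticker or
    # zero-padded CIK, never a plain name (resolve_entity handles names).
    result = await session.call_tool(
        "entity_profile",
        {"type": "company", "value": "AAPL"},
    )
    print(result.content)
```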
Behavior: 4/5

Discloses the return types (SEC filings, fundamentals, patents, news, LEI) and citation URIs, with no contradictory information. It lacks details on aggregation latency or potential restrictions, but there are no annotations to supplement.

Conciseness: 5/5

A single paragraph with front-loaded purpose, then usage, then returns, then parameter hints. Every sentence serves a purpose; no redundancy.

Completeness: 4/5

Lists all major return categories and sources, including the LEI and citations. It could mention geographic scope (US-focused) but is comprehensive for an aggregation tool. With no output schema, this is adequate.

Parameters: 5/5

Adds significant value beyond the schema: it clarifies that 'company' is the only supported type, specifies the zero-padded CIK format, and warns that names are not supported. It also explains that value can be a ticker or a CIK.

Purpose: 5/5

Clearly states 'Get everything about a company in one call' with concrete examples (ticker/CIK). Distinguishes itself from siblings like resolve_entity and compare_entities by positioning itself as an aggregator.

Usage Guidelines: 5/5

Explicitly gives when-to-use scenarios ('tell me about X') and when-not-to (names not supported; use resolve_entity instead). It names an alternative tool and the context for reaching for it.

forget (A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete
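
Since forget is documented as pairing with remember and recall, here is a minimal lifecycle sketch covering all three (the key and value are illustrative):

```python
from mcp import ClientSession

async def memory_lifecycle(session: ClientSession) -> None:
    # Store a scratch value, read it back, then delete it.
    await session.call_tool(
        "remember", {"key": "target_ticker", "value": "AAPL"}
    )
    stored = await session.call_tool("recall", {"key": "target_ticker"})
    print(stored.content)
    # Clean up once the task is done.
    await session.call_tool("forget", {"key": "target_ticker"})
```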
Behavior: 4/5

Declares the deletion behavior, which implies a destructive action. No annotations exist, so the description carries the burden. It does not discuss irreversibility or side effects, but the simplicity of the operation makes this adequate.

Conciseness: 5/5

Short, front-loaded with the action, and free of wasted words. Efficient and to the point.

Completeness: 5/5

Given the simple single-parameter schema, no output schema, and clear sibling context, the description fully covers what an agent needs to use the tool correctly.

Parameters: 3/5

The input schema has 100% coverage with a clear description for the 'key' parameter. The description adds no extra semantics beyond 'by key', meeting the baseline for high schema coverage.

Purpose: 5/5

Clearly states 'Delete a previously stored memory by key' with a specific verb and resource. Distinguishes itself from the siblings 'remember' and 'recall', which store and retrieve respectively.

Usage Guidelines: 5/5

Explicitly provides when-to-use scenarios: when context is stale, the task is done, or to clear sensitive data. Also suggests pairing with 'remember' and 'recall', giving clear guidance on tool relationships.

get_daily_history (A)

Daily historical weather for a Meteostat station between two dates. Returns date-keyed temperature (avg/min/max), precipitation, snow, wind, pressure, sun hours. Station IDs are numeric — find them at meteostat.net (URL suffix).

Parameters (JSON Schema)
- end_date (required): YYYY-MM-DD inclusive
- start_date (required): YYYY-MM-DD inclusive
- station_id (required): Meteostat numeric station ID (e.g., "72494")
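
A sketch of a one-month query (the station ID reuses the schema's example; the dates are illustrative):

```python
from mcp import ClientSession

async def daily_history(session: ClientSession) -> None:
    # Dates must be YYYY-MM-DD; both endpoints are inclusive.
    result = await session.call_tool(
        "get_daily_history",
        {
            "station_id": "72494",
            "start_date": "2024-01-01",
            "end_date": "2024-01-31",
        },
    )
    print(result.content)
```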
Behavior: 3/5

With no annotations, the description carries the full burden. It discloses the returned data fields but does not mention rate limits, error handling, or data completeness. It is adequate but not comprehensive.

Conciseness: 5/5

Concise and front-loaded with purpose and output, followed by brief station-ID context. Every sentence adds value with no wasted words.

Completeness: 4/5

For a simple three-parameter query tool with no output schema, the description adequately explains the return structure (date-keyed) and lists the fields. It could mention potential data gaps or format details, but is largely complete.

Parameters: 3/5

Schema coverage is 100% with clear parameter descriptions. The description adds minimal extra value by noting that station IDs are numeric and where to find them, which is helpful but does not go significantly beyond the schema.

Purpose: 5/5

Clearly states that the tool returns daily historical weather for a Meteostat station between two dates, listing specific metrics. The specific verb-plus-resource combination distinguishes it from siblings like get_monthly_normals by focusing on daily data.

Usage Guidelines: 2/5

The description does not explain when to use this tool versus alternatives such as get_monthly_normals. It lacks explicit context on appropriate usage scenarios or prerequisites, offering no selection guidance.

get_monthly_normals (A)

Monthly climate normals for a station — long-run averages of temperature, precipitation, and pressure by calendar month. Useful for "what's normal in May here" baselines.

Parameters (JSON Schema)
- station_id (required): Meteostat numeric station ID
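
A minimal sketch (the station ID reuses the example from get_daily_history above):

```python
from mcp import ClientSession

async def monthly_normals(session: ClientSession) -> None:
    # Single required parameter; returns long-run averages by calendar month.
    result = await session.call_tool(
        "get_monthly_normals", {"station_id": "72494"}
    )
    print(result.content)
```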
Behavior: 3/5

No annotations are provided, so the description must cover behavior. It indicates a read operation returning averages, but does not disclose the normals period, error handling, or authentication needs. Adequate but limited.

Conciseness: 5/5

Two sentences with no wasted words: the first defines the purpose, the second gives a concrete usage example. Efficiently structured.

Completeness: 4/5

For a simple single-parameter tool with no output schema and no annotations, the description adequately states what it returns and provides a usage example. It could mention the underlying period for the normals, but is still fairly complete.

Parameters: 3/5

The schema's 100% coverage describes station_id as a 'Meteostat numeric station ID'. The description adds no further meaning beyond the schema, so the baseline score of 3 is appropriate.

Purpose: 5/5

Clearly states that the tool returns monthly climate normals (long-run averages) for a station, specifying the variables (temperature, precipitation, pressure) and the grouping by calendar month. It is distinguished from the sibling get_daily_history, which returns daily data.

Usage Guidelines: 4/5

Provides an explicit use case ('what's normal in May here' baselines) but does not mention when not to use it or alternatives like get_daily_history. Clear context but no exclusions.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
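
A sketch of a data-gap report (the message is illustrative, and the shape of the optional context object is a guess at the documented "structured context"):

```python
from mcp import ClientSession

async def report_gap(session: ClientSession) -> None:
    # type must be one of: bug, feature, data_gap, praise, other.
    # Rate-limited to 5 submissions per identifier per day.
    result = await session.call_tool(
        "pipeworx_feedback",
        {
            "type": "data_gap",
            "message": "get_daily_history lacks hourly resolution.",
            # Assumed shape for the optional structured context.
            "context": {"tool": "get_daily_history"},
        },
    )
    print(result.content)
```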
Behavior: 5/5

With no annotations, the description fully carries the burden. It discloses the rate limit (5 per identifier per day), zero cost, no quota impact, and that feedback affects the roadmap. This gives the agent sufficient behavioral context.

Conciseness: 4/5

Front-loaded with the primary purpose in the first sentence. While longer than minimal, every sentence adds unique value (usage, constraints, formatting tips). No redundancy.

Completeness: 5/5

Given that the tool has three parameters (including a nested object) and no output schema, the description covers purpose, usage, constraints, parameter guidance, and expected impact. No critical information is missing for an agent to use it correctly.

Parameters: 4/5

The schema has 100% description coverage, so the baseline is 3. The description adds value by specifying how to frame the message (be specific, reference tools/packs) and explaining the enum values in broader context. This extra guidance justifies a 4.

Purpose: 5/5

Clearly states the tool's purpose: to report issues ('broken, missing, or needs to exist') to the Pipeworx team. It specifies the types of feedback (bug, feature, data_gap, praise), making it distinct from sibling tools like 'ask_pipeworx' or 'discover_tools'.

Usage Guidelines: 5/5

Explicitly defines when to use the tool: for wrong data (bug), missing tools (feature/data_gap), or praise. It also advises against pasting user prompts and notes the rate limit, providing clear decision criteria.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
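
Beyond the lifecycle sketch under forget, recall's list-all mode is worth showing (omit the key argument entirely):

```python
from mcp import ClientSession

async def list_saved_keys(session: ClientSession) -> None:
    # Omitting "key" switches recall into list-all-keys mode.
    result = await session.call_tool("recall", {})
    print(result.content)
```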
Behavior: 5/5

Since no annotations are provided, the description fully discloses behavior. It states that this is a read operation (retrieve/list), mentions the scope, and notes the pairing with other tools. No destructive behavior is implied.

Conciseness: 5/5

Concise, with the main purpose front-loaded. Every sentence adds value without redundancy.

Completeness: 5/5

Given the simplicity and the presence of sibling tools (remember, forget), the description is complete. It explains relationships, scope, and usage pattern effectively. No output schema exists, but the return value is implied sufficiently.

Parameters: 4/5

The input schema covers the single parameter fully. The description adds value by explaining what happens when key is omitted (all keys are listed) and provides context about scoping beyond the schema description.

Purpose: 5/5

Clearly states the tool's function: 'Retrieve a value... or list all saved keys'. It specifies the verb (retrieve/list) and resource (saved values/keys), and differentiates itself from the siblings 'remember' and 'forget'.

Usage Guidelines: 5/5

Provides explicit guidance: 'look up context the agent stored earlier... without re-deriving it from scratch'. It implies when to use the alternative tools (save with remember, delete with forget) and explains the scope ('scoped to your identifier').

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
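
A monitoring sketch (the ticker and window are illustrative; "30d" follows the schema's own recommendation):

```python
from mcp import ClientSession

async def whats_new(session: ClientSession) -> None:
    # since accepts an ISO date ("2026-04-01") or relative shorthand
    # ("7d", "30d", "3m", "1y").
    result = await session.call_tool(
        "recent_changes",
        {"type": "company", "value": "MSFT", "since": "30d"},
    )
    print(result.content)
```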
Behavior: 4/5

With no annotations, the description fully handles transparency. It discloses the parallel fan-out to three sources (SEC EDGAR, GDELT, USPTO), the supported 'since' formats (ISO date or relative shorthand), and the return structure (structured changes, total_changes count, citation URIs). It lacks details on limitations or rate limits.

Conciseness: 4/5

Concise, covering purpose, usage, behavior, and parameters in a few sentences. No redundant information, though it could benefit from slightly better structuring (e.g., separate lines for clarity).

Completeness: 3/5

Given the complexity (three parameters, no output schema, no annotations) and the presence of sibling tools, the description could be more complete. It omits the distinction from entity_profile and does not describe the output structure in detail, but it covers essential usage and parameter guidance adequately.

Parameters: 4/5

Schema coverage is 100% with descriptive parameter text. The description adds value by explaining the 'since' format more concretely (ISO vs. relative, recommending '30d' or '1m'), providing example inputs for value (ticker, CIK), and clarifying the type restriction (only 'company' is supported).

Purpose: 5/5

Clearly states that the tool retrieves recent changes for a company over a time window, with specific example queries. It distinguishes itself from sibling tools like entity_profile and compare_entities by focusing on temporal updates.

Usage Guidelines: 4/5

Provides explicit use cases with example user queries ('what's happening with X?', 'any updates on Y?', etc.). It does not explicitly state when not to use the tool, but the context is clear and covers common monitoring scenarios.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
Behavior: 5/5

No annotations are provided, but the description fully discloses the storage behavior: a key-value pair scoped by identifier, with persistence differing between authenticated (permanent) and anonymous (24-hour) sessions.

Conciseness: 5/5

Front-loaded with purpose; each sentence adds value, with no unnecessary words.

Completeness: 5/5

Given that the tool is a simple key-value store with 100% schema coverage and no output schema, the description fully covers the needed context: purpose, usage, storage behavior, and pairing with sibling tools.

Parameters: 4/5

The schema has 100% description coverage for both parameters. The description adds examples and clarifies that the value can be any text, enhancing understanding beyond the schema.

Purpose: 5/5

Clearly states the action ('Save data') and the resource (data reused across conversations and sessions). It distinguishes itself from siblings like recall and forget by specifying the pairing with them.

Usage Guidelines: 4/5

Provides explicit when-to-use guidance: 'when you discover something worth carrying forward'. It mentions pairing with recall and forget but does not explicitly state when not to use it.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
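
A name-to-identifier sketch (the input value reuses the description's own example):

```python
from mcp import ClientSession

async def resolve_apple(session: ClientSession) -> None:
    # Names resolve to the canonical IDs other tools require
    # (CIK, ticker, LEI for companies; RxCUI for drugs).
    result = await session.call_tool(
        "resolve_entity", {"type": "company", "value": "Apple"}
    )
    print(result.content)
```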
Behavior: 4/5

With no annotations provided, the description carries the full burden of disclosure. It states that the tool returns IDs plus 'pipeworx:// citation URIs' and notes that it 'Replaces 2–3 lookup calls,' hinting at internal aggregation. However, it does not mention limitations, error conditions (e.g., an unresolved entity), or performance characteristics, which would elevate transparency further.

Conciseness: 5/5

Concise and well structured: it opens with the core purpose, then provides usage guidance, concrete examples, a behavioral note about replacing multiple lookups, and a sequencing instruction. Every sentence earns its place without redundancy.

Completeness: 4/5

Given there is no output schema, the description adequately covers what the tool returns (identifiers and citation URIs) and which ID systems are involved, and it hints at the internal optimization (replacing 2-3 calls). It could also mention what happens if the entity is not found or how errors are reported, but it is sufficient for an agent to use the tool correctly.

Parameters: 4/5

Schema description coverage is 100%, so the schema already documents both parameters. The description adds value by explaining the context of the returned IDs and giving input examples (e.g., 'company: ticker (AAPL), CIK (0000320193), or name'). It clarifies the expected input formats better than the schema alone, making the tool more usable.

Purpose: 5/5

Clearly states the tool's function: look up canonical identifiers for companies or drugs. It specifies the ID types (CIK, ticker, RxCUI, LEI) and provides concrete examples like 'Apple' → AAPL / CIK 0000320193 and 'Ozempic' → RxCUI 1991306. This specificity, and the differentiation from sibling tools (e.g., entity_profile), makes the purpose unmistakable.

Usage Guidelines: 4/5

Explicitly advises when to use the tool: 'Use when a user mentions a name and you need the … ID systems that other tools require as input' and 'Use this BEFORE calling other tools that need official identifiers.' Strong contextual guidance, though it does not explicitly state when not to use the tool (e.g., when the identifier is already known).

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
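
A fact-check sketch (the claim reuses the schema's example):

```python
from mcp import ClientSession

async def check_claim(session: ClientSession) -> None:
    # v1 covers company financials for public US companies; the verdict is
    # confirmed / approximately_correct / refuted / inconclusive / unsupported.
    result = await session.call_tool(
        "validate_claim",
        {"claim": "Apple's FY2024 revenue was $400 billion"},
    )
    print(result.content)
```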
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the return verdicts (confirmed, refuted, etc.) and mentions performance (replaces 4-6 sequential calls). However, it does not discuss authentication, rate limits, or potential destructive effects.

Conciseness: 5/5

Front-loaded with the core action and free of extraneous words. Every sentence adds value.

Completeness: 4/5

Despite lacking an output schema, the description explains the return types and the tool's specialization. It is sufficient for a single-parameter tool but could mention error handling or out-of-scope claims.

Parameters: 4/5

Schema coverage is 100%, with the 'claim' parameter described. The description adds context by clarifying the natural-language input, giving example inputs, and restricting the scope to company-financial claims, going beyond the schema.

Purpose: 5/5

Uses specific verbs (fact-check, verify, validate, confirm/refute) and clearly identifies the resource (a natural-language factual claim checked against authoritative sources). It distinguishes itself from sibling tools by noting that it replaces sequential calls and by specifying its scope (company-financial claims via SEC EDGAR + XBRL).

Usage Guidelines: 4/5

Explicitly states when to use the tool, with examples like 'Is it true that…?' and 'Verify the claim that…'. It provides context for typical use cases but does not explicitly mention when not to use it or alternative tools.
