Server Details

NASA POWER MCP — Prediction of Worldwide Energy Resources

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-nasa-power
GitHub Stars: 0

Tool Descriptions: B

Average 4.2/5 across 14 of 14 tools scored. Lowest: 2.1/5.

Server Coherence: B
Disambiguation: 3/5

Many tools have overlapping purposes, e.g., ask_pipeworx can handle many tasks and subsumes others like compare_entities and entity_profile. Clarity is somewhat reduced, though detailed descriptions help distinguish specific use cases.

Naming Consistency: 2/5

Naming follows no consistent pattern: some are verb_noun (ask_pipeworx, compare_entities), others are noun-based (climatology, entity_profile), or single verbs (forget, recall). This mix can confuse an agent trying to predict tool names.

Tool Count: 4/5

With 14 tools covering diverse domains (weather, finance, memory, discovery), the count is well-scoped. The presence of a catch-all (ask_pipeworx) slightly reduces the need for additional tools, but the count remains reasonable.

Completeness: 3/5

The tool set covers many common tasks (company profiles, weather data, fact-checking) but relies heavily on ask_pipeworx for direct data access. Missing direct tools for specific sources like SEC filings or news, though some are integrated into other tools.

Available Tools

14 tools
ask_pipeworx (A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters
question (required): Your question or request in natural language
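
As a rough illustration, a call passes the single documented parameter; the question text below is one of the examples from the description:

```python
# Illustrative ask_pipeworx arguments; the question is reused from the description's examples.
args = {"question": "What is the US trade deficit with China?"}
```
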
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses routing across 300+ sources and automatic argument filling. Lacks mention of error handling or behavior when no source matches, but overall gives good behavioral overview.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

All sentences add value: purpose, use-case, examples. Well-structured, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with one param and no output schema. Description covers purpose, usage, and data sources comprehensively. Could mention output format, but not essential for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema describes 'question' as natural language; description adds concrete examples ('What is the US trade deficit with China?'), adding value beyond schema. Baseline 3 due to 100% schema coverage, but examples elevate it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool answers natural-language questions by automatically selecting the right data source. Lists numerous examples and distinguishes from siblings (specific data tools) by being a meta-routing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: when user asks certain types of questions and you don't want to pick the specific tool. Implies alternatives are the sibling tools, and provides usage examples.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

climatology (C)

Long-term monthly climatology averages for a coordinate.

Parameters
latitude (required)
community (optional)
longitude (required)
parameters (optional)
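
The schema carries no parameter descriptions, so the sketch below is a guess that borrows point_data's documented conventions (community codes, comma-separated POWER parameter codes); the coordinates are purely illustrative:

```python
# Hypothetical climatology arguments; community and parameters follow point_data's conventions.
args = {
    "latitude": 36.6,                 # assumed decimal degrees, as in point_data
    "longitude": -97.5,
    "community": "AG",                # AG | RE | SB, per point_data's community codes
    "parameters": "T2M,PRECTOTCORR",  # comma-separated POWER codes, per point_data
}
```
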
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states the tool's output type. It omits any behavioral traits such as data source, time aggregation, error handling, or required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is too brief; it under-specifies essential details. True conciseness would front-load key info while still providing necessary context, which is absent here.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, no output schema, and no annotations, the description is grossly incomplete. It does not cover parameter roles, return format, or any usage constraints, making it insufficient for reliable tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description fails to explain any parameters (latitude, longitude, community, parameters). No meaning is added beyond the schema field names, leaving agents guessing about required values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'long-term monthly climatology averages for a coordinate,' which is a specific verb+resource. It differentiates from siblings like point_data and regional_data by focusing on climatology averages, but doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No information is provided about when to use this tool versus alternatives, nor any indications of prerequisites or limitations. The agent receives no guidance on appropriate contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters
type (required): Entity type: "company" or "drug".
values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
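
A minimal company comparison, using the tickers from the values example above:

```python
# Compare two companies side by side; values taken from the schema's own example.
args = {"type": "company", "values": ["AAPL", "MSFT"]}
```
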
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes data sources (SEC EDGAR/XBRL for companies, FAERS/FDA/ClinicalTrials for drugs) and return type (paired data + citation URIs). No annotations provided, so description carries full burden. Lacks details on response structure or potential errors, but is adequate for basic understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise at 5-6 sentences, each adding value. Front-loaded with purpose, then use cases, data sources, return info, and efficiency note. No redundant or missing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers usage, data sources, and constraints (2-5 items). With no output schema, the return description ('paired data + citation URIs') is somewhat vague but acceptable given complexity. Sibling tools provide context for single-entity queries.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters (type, values) with descriptions. The description adds significant meaning: explains how to format values based on type (tickers/CIKs for company, names for drug) and what data each type retrieves. This goes beyond the schema enum and array descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool compares 2-5 companies or drugs side by side, with specific verbs and resources. It distinguishes from sibling entity_profile by focusing on multiple entities. Use cases are listed (e.g., 'compare X and Y', 'X vs Y').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: when user says 'compare X and Y' etc., and notes it replaces 8-15 sequential calls. Does not explicitly state when not to use, but the context is clear enough for an agent to infer alternatives like entity_profile for single entities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
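
A sketch of a discovery call, reusing one of the documented query examples and an explicit limit within the stated maximum:

```python
# Find up to 10 tools relevant to a housing-market task.
args = {"query": "analyze housing market trends", "limit": 10}
```
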
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes return type (names+descriptions) but doesn't mention if it's read-only or any side effects. No annotations provided, so description carries burden; it's mostly transparent but missing explicit safety note.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three focused sentences: purpose, usage guidance, and behavior. No wasted words, front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately explains what is returned (top-N tools with names+descriptions). Complete for a discovery tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with descriptions for both 'query' and 'limit'. Description adds context about returning top-N, reinforcing limit parameter's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it finds tools by describing data/task, and returns top-N relevant tools. Distinct from sibling tools which are domain-specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (browse/search/discover) and lists example domains. Recommends calling it FIRST when many tools available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters
type (required): Entity type. Only "company" supported today; person/place coming soon.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
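
A minimal profile request by ticker, as the description suggests:

```python
# Full company profile for Apple; "company" is the only type supported today.
args = {"type": "company", "value": "AAPL"}
```
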
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses returned data types (SEC filings, fundamentals, patents, news, LEI) and citation URIs. Could explicitly state read-only nature, but implied by 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is multi-sentence but each sentence adds unique value (purpose, examples, data sources, param guidance, alternative). Front-loaded with core purpose. Could be slightly more terse but appropriate for complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists return types comprehensively. It covers input handling, sources, and usage context. Agent can invoke correctly without ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are described in schema (100% coverage). Description adds: type limited to 'company' (with note on future support), value can be ticker or CIK, and guidance to use resolve_entity for names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get everything about a company in one call' and gives example phrasings. It distinguishes from sibling tools like resolve_entity (name resolution) and compare_entities (comparison).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: when user asks 'tell me about X', or when multiple tools would be needed. Also specifies when not: names not supported, use resolve_entity first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters
key (required): Memory key to delete
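
An illustrative delete, using a key name of the kind shown in remember's examples:

```python
# Remove a stored memory; assumes "target_ticker" was saved earlier via remember.
args = {"key": "target_ticker"}
```
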
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, but description clearly indicates destructive action (delete). Could mention irreversibility or no confirmation, but adequate for simple case.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose and usage guidance. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple tool: 1 param, no output schema, no nested objects. Description covers purpose, usage, and sibling context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameter (key) with description. Description adds no extra meaning beyond schema. Baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Delete a previously stored memory by key' with specific verb and resource. Distinguishes from siblings 'remember' and 'recall' which handle storing and retrieving.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when context is stale, the task is done, or you want to clear sensitive data...'. Also suggests pairing with remember and recall.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters
type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
context (optional): Optional structured context: which tool, pack, or vertical this relates to.
message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
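
A hypothetical data_gap report; the message text is invented and the exact shape of context is not documented:

```python
# Example feedback payload; the context keys are an assumption, not a documented format.
args = {
    "type": "data_gap",
    "message": "point_data exposes no snow-depth parameter; please add one if POWER provides it.",
    "context": {"tool": "point_data"},
}
```
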
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses rate limiting, daily human review, and roadmap impact despite lacking annotations. No contradictions. Doesn't describe after-submission behavior but that's minor for a feedback tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, efficient, front-loaded with purpose then specifics. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description covers usage limits, content expectations, and team handling. Adequately complete for a feedback tool; minor gap on response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already covers all parameters (100% coverage). Description adds value by explaining 'message' formatting guidelines and that 'context' is optional, slightly improving over schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the tool as a feedback mechanism for Pipeworx team, specifying types of feedback (bug, feature, data_gap, praise). Distinct from sibling tools that focus on data retrieval or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (tool returns wrong data, tool missing, positive experience) and provides instructions on how to format feedback. Also mentions rate limits (5 per day) and cost (free).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

point_data (A)

Time-series observations for a single coordinate. Temporal granularity controlled by the dates supplied — both must be YYYYMMDD; use daily by default.

Parameters
end (required): End date YYYYMMDD
start (required): Start date YYYYMMDD
latitude (required): Latitude in degrees (-90 to 90)
temporal (optional): hourly | daily (default) | monthly
community (optional): AG (agriculture, default) | RE (renewable energy) | SB (sustainable buildings)
longitude (required): Longitude in degrees (-180 to 180)
parameters (optional): Comma-separated POWER parameter codes (default T2M,T2M_MAX,T2M_MIN,PRECTOTCORR,ALLSKY_SFC_SW_DWN,RH2M,WS10M)
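
Putting the parameter list together, a one-month daily query for a single point might look like this (coordinates are illustrative):

```python
# Daily observations for January 2024 at one coordinate; omit "parameters" to get the defaults.
args = {
    "start": "20240101",              # YYYYMMDD
    "end": "20240131",
    "latitude": 36.6,                 # -90 to 90
    "longitude": -97.5,               # -180 to 180
    "temporal": "daily",              # hourly | daily | monthly
    "community": "AG",                # AG | RE | SB
    "parameters": "T2M,PRECTOTCORR",  # comma-separated POWER codes
}
```
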
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that temporal granularity is controlled by dates and daily by default, but it does not mention data source, limitations, or what the returned observations contain (e.g., default parameters, units).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the main purpose, and contains no unnecessary words or information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters and no output schema, the description is incomplete. It does not explain what the time-series observations include (e.g., default parameter values, number of data points, handling of missing data) or how the output is structured.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining that dates control granularity and have YYYYMMDD format (though schema already describes format), but it does not mention other optional parameters like temporal, community, or parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'Time-series observations for a single coordinate', which is a specific verb+resource. This distinguishes it from siblings like 'regional_data' (for areas) and 'climatology' (for long-term averages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for time-series data at a point and explains date format and default granularity, but it does not explicitly state when to use this tool over alternatives like 'regional_data' or 'climatology'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters
key (optional): Memory key to retrieve (omit to list all keys)
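
Two illustrative calls: retrieve one key, or omit the argument entirely to list everything stored:

```python
# Look up a specific memory; pass an empty argument object to list all saved keys instead.
args = {"key": "target_ticker"}
list_all_args = {}
```
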
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. Discloses scoping by identifier and pairing with remember/forget. Does not detail error handling or output format, but sufficient for a read-only retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with action, no redundant information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With one optional parameter, no output schema, and no annotations, the description covers usage, scoping, and pairing with sibling tools adequately. Complete for its simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes the key parameter completely (100% coverage). Description adds context: omitting key lists all keys, and explains the role of the key in retrieving stored information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves a value saved via remember or lists all keys, with specific verb 'retrieve' and resource 'memory value'. It distinguishes from sibling tools 'remember' and 'forget' by mentioning pairing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (look up stored context) and when not to re-derive. Mentions scoping and pairing. Slightly less explicit about not using when irrelevant or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters
type (required): Entity type. Only "company" supported today.
since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
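
A typical monitoring call, combining the documented relative shorthand with a ticker:

```python
# What changed at Apple in the last 30 days.
args = {"type": "company", "value": "AAPL", "since": "30d"}
```
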
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: it fans out to multiple sources (SEC EDGAR, GDELT, USPTO) and returns structured results with citations. It does not mention side effects, but as a query tool, the read-only nature is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: starts with purpose, provides usage examples, explains the fan-out mechanism, parameter details, and return format. Every sentence adds value, and there is no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately explains the return format (structured changes, count, URIs) and the data sources. It could mention potential limitations (e.g., max date range) or error handling, but it is sufficient for a straightforward tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, but the description adds valuable context: explains 'since' accepts ISO dates or relative shorthand with examples, specifies that 'type' only supports 'company', and clarifies 'value' accepts ticker or CIK. This enriches the schema without repetition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to retrieve recent changes for a company within a specified time window. It provides explicit use case queries ('What's happening with X?', 'any updates on Y?') and distinguishes itself from siblings by focusing on monitoring changes across multiple sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear examples of when to use the tool, such as monitoring updates or briefing on recent activity. It does not explicitly state when not to use it or compare to sibling tools, but the context is sufficient for typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

regional_data (B)

Bounding-box query — daily or monthly data over a rectangular region. Bbox area max ~10° × 10°.

Parameters
end (required): YYYYMMDD
start (required): YYYYMMDD
temporal (optional): daily (default) | monthly
community (optional)
parameters (optional)
latitude_max (required)
latitude_min (required)
longitude_max (required)
longitude_min (required)
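
A sketch of a bounding-box query that stays within the ~10° × 10° limit; the coordinates and dates are illustrative:

```python
# Daily data for January 2024 over a 5 x 5 degree box (illustrative values).
args = {
    "start": "20240101",     # YYYYMMDD
    "end": "20240131",
    "latitude_min": 35.0,
    "latitude_max": 40.0,
    "longitude_min": -100.0,
    "longitude_max": -95.0,
    "temporal": "daily",     # daily (default) | monthly
}
```
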
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden for behavioral disclosure. It mentions a bounding box area constraint but does not disclose the nature of returned data, error handling, rate limits, authentication needs, or side effects. This is insufficient for a query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core purpose and a key constraint. It is front-loaded and contains no superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and 9 parameters with low coverage, the description provides the essential spatial and temporal context but omits details on community/parameters and result format. It is adequate for basic usage but incomplete for complex queries.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning for spatial parameters by indicating they form a bounding box, and for temporal parameters by implying they set the date range. However, with only 33% schema description coverage and no explanation for 'community' or 'parameters', it only partially compensates for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a bounding-box query for daily or monthly data over a rectangular region, with a specific area limit. This distinguishes it from siblings like point_data or climatology by specifying spatial aggregation and temporal granularity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives, nor any conditions for use or exclusion criteria. It only states what it does, missing information about prerequisites or when to prefer other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
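
An illustrative store, following the key and value examples in the schema:

```python
# Save a resolved ticker so later steps can recall it instead of re-resolving it.
args = {"key": "target_ticker", "value": "AAPL"}
```
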
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It discloses memory persistence: authenticated users get permanent, anonymous sessions 24 hours. Does not mention deletion constraints or side effects. Adequate for a store operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each adding distinct value. Front-loaded with main purpose. No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, description does not explain return value, but as a store operation, return is likely minimal. All essential aspects (purpose, usage, persistence) covered. Slightly missing side effects or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions. Description adds example keys and confirms value is text, but schema already provides type. No significant additional meaning beyond schema examples. Baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Save data the agent will need to reuse later', specifying verb (save) and resource (data). Distinguishes from sibling tools recall and forget by mentioning pairing. Provides scope: key-value per agent identifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when you discover something worth carrying forward... so you don't have to look it up again'. Suggests pairing with recall and forget but doesn't exclude when not to use (e.g., sensitive data). Good directional guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters
type (required): Entity type: "company" or "drug".
value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
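
A minimal resolution call based on the description's Apple example:

```python
# Resolve a company name to its official identifiers (ticker, CIK, LEI).
args = {"type": "company", "value": "Apple"}
```
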
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It discloses that the tool returns IDs plus citation URIs, but does not mention any behavioral aspects like side effects, rate limits, or authentication requirements. For a read-only lookup tool, this is minimally adequate but could be more transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences), starts with the core purpose, includes concrete examples, and ends with usage guidance. Every sentence adds value, and the structure is clear and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description adequately explains return values (IDs plus pipeworx:// citation URIs) and covers input examples and usage order. It is complete enough for an agent to understand how and when to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description does not add new parameter details beyond what the schema already provides. The schema already explains the 'value' parameter's allowed formats. Since the schema is comprehensive, the description doesn't need to add more, but baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to look up canonical/official identifiers for companies or drugs, listing specific ID systems (CIK, ticker, RxCUI, LEI) and the contexts where it should be used. It differentiates from sibling tools by advising to use it before other tools that need official identifiers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: use when a user mentions a name and you need identifiers that other tools require. It also states to use it before calling other tools and mentions it replaces 2-3 lookup calls. However, it does not explicitly mention when not to use it or provide alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters
claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
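
A sketch of a fact-check call, reusing the example claim from the schema:

```python
# Verify a company-financial claim against SEC EDGAR/XBRL data.
args = {"claim": "Apple's FY2024 revenue was $400 billion"}
```
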
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the return format (verdict, structured form, actual value with citation, percent delta) and mentions it replaces multiple calls. Lacks details on rate limits or error handling but is fairly transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured. Each sentence adds value: it states purpose, provides usage examples, specifies domain, describes output, and highlights efficiency gains. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter and no output schema, the description is complete. It explains the input, output, domain constraints, and even mentions the underlying data source (SEC EDGAR + XBRL).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the single parameter 'claim'. The description adds meaning by explaining the type of input (natural-language factual claim) and providing examples, going beyond the schema's basic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fact-check, verify, validate factual claims. It uses specific verbs and resources, and distinguishes itself from siblings by noting it replaces multiple sequential calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: when to use (checking truth of claims) and gives example queries. It also specifies the domain (company-financial claims for US public companies) but does not explicitly state when not to use or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
