
Server Details

PVGIS MCP — EU Joint Research Centre PV system modeler

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-pvgis
GitHub Stars: 0
Server Listing: mcp-pvgis

Tool Descriptions: B

Average 4.1/5 across 14 of 14 tools scored. Lowest: 1.5/5.

Server Coherence: B

Disambiguation: 3/5

Tools like ask_pipeworx, compare_entities, validate_claim, and entity_profile have overlapping capabilities, since ask_pipeworx is a catch-all query tool, but the descriptions clarify intended use cases. Still, an agent might be unsure whether to use ask_pipeworx or a more specific tool for a given task.

Naming Consistency: 3/5

Tool names use a mix of verb_noun (ask_pipeworx, compare_entities, resolve_entity) and noun_only (monthly_radiation, tmy) patterns. Some are compound phrases (recent_changes). There is no uniform convention, which reduces predictability.

Tool Count: 4/5

14 tools is a reasonable number for a server covering multiple domains (solar data, general data query, auxiliary ops). It is neither too few nor too many, though the inclusion of generic memory tools might feel slightly extraneous.

Completeness: 3/5

The solar domain is fairly complete with radiation, PV performance, and TMY data. The data query side has many tools but is fronted by ask_pipeworx, which covers many sources. However, there is no tool for editing/deleting data, and the memory tools are generic. Overall coverage is good but not exhaustive.

Available Tools (14)
ask_pipeworx (Grade A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
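
To make the call shape concrete, here is a minimal sketch using the official MCP Python SDK over Streamable HTTP (the transport listed above). The server URL is a placeholder, since the listing does not show it, and the question is illustrative:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; real URL not shown in the listing

async def main() -> None:
    # Streamable HTTP transport, matching the Transport field above.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # ask_pipeworx takes a single natural-language question and
            # routes it to an appropriate data source on the server side.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the current US unemployment rate?"},
            )
            print(result.content)

asyncio.run(main())

The sketches for the remaining tools below reuse an initialized session from this example.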
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the core behavior: auto-selecting the tool, filling arguments, returning results. It could mention potential limitations (e.g., no side effects, rate limits) but is sufficiently transparent for a query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with purpose and usage. It includes examples and a list of data sources. It could be slightly more concise, but every sentence is informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple query tool with one parameter and no output schema, the description provides enough context: what it does, when to use it, and examples. It doesn't detail return format or error handling, but given the tool's simplicity, it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single 'question' parameter with a brief description. The tool's description adds value by giving example questions and clarifying the natural language style, enriching beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers natural-language questions by automatically picking the right data source. It distinguishes from sibling tools by emphasizing it routes across 300+ sources, so the agent doesn't need to select a specific pack/tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when a user asks…' and provides concrete examples (e.g., 'What is X?', 'Find Z'). It implies when not to use—when you already know which specific Pipeworx tool to call. This is clear and helpful for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (Grade A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
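
A minimal sketch of a drug-type comparison, reusing an initialized session from the ask_pipeworx example above (values are illustrative):

# type="drug" switches the data sources to FAERS adverse-event counts,
# FDA approval counts, and active trial counts for each of the 2-5 names.
result = await session.call_tool(
    "compare_entities",
    {"type": "drug", "values": ["ozempic", "mounjaro"]},
)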
Behavior: 4/5

With no annotations provided, the description carries the full behavioral burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs), specific fields retrieved, and output format (paired data with citation URIs). It also notes efficiency gains, but lacks details on error handling or idempotency. Overall, good transparency given the constraints.

Conciseness: 4/5

The description is a single dense paragraph of about 100 words, front-loaded with the main purpose. It covers use cases, data sources, and return format efficiently without wasted words. Minor improvement could be better structuring for readability, but it is concise and informative.

Completeness: 4/5

Given the tool's complexity (two entity types, different data sources, paired citations), the description provides sufficient context. It explains what data is pulled for each type and mentions citation URIs. However, it omits potential limitations (e.g., geographic scope, output structure details). No output schema exists, but the description compensates moderately.

Parameters: 3/5

Schema description coverage is 100%, so the schema already defines 'type' and 'values' with min/max and enum. The description adds value by explaining the distinction between company and drug types and the data each retrieves, but this is largely contextual rather than essential parameter semantics. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's function: comparing 2–5 companies or drugs side by side. It lists specific use-case phrases and distinguishes between company and drug types with different data sources. It also highlights that it replaces multiple sequential calls, reinforcing its purpose against sibling 'entity_profile'.

Usage Guidelines: 4/5

The description explicitly specifies when to use the tool, including user queries like 'compare X and Y' and 'X vs Y'. It differentiates between company and drug types. However, it does not explicitly mention when not to use it or name alternative tools, though the context implies it is preferred over multiple single-entity calls.

discover_tools (Grade A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
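
A sketch of a discovery call, with the same session assumption as above (query and limit are illustrative):

# Returns the top-N most relevant tools as names plus descriptions.
result = await session.call_tool(
    "discover_tools",
    {"query": "analyze housing market trends", "limit": 10},
)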
Behavior: 4/5

Description discloses that the tool returns the top-N most relevant tools with names and descriptions. It does not contradict any annotations (none provided) and no further behavioral details are needed given the simple search nature. Could mention pagination or error handling, but current level is sufficient.

Conciseness: 5/5

Description is front-loaded with purpose, concise (2 sentences), and includes a list of domains which is information-dense without being verbose. Every part of the description earns its place.

Completeness: 4/5

Given the tool's simplicity and that no output schema is provided, the description covers purpose, when to use, and parameter details. It could mention what happens if no results are found, but that is a minor gap. Overall, it is complete enough for an agent to invoke correctly.

Parameters: 5/5

Schema coverage is 100% (query and limit described). The description adds substantial meaning beyond the schema: query is described as a 'Natural language description' with examples, and limit has a default and max. This fully clarifies parameter usage.

Purpose: 5/5

Description begins with a clear verb+resource: 'Find tools by describing the data or task.' It lists many specific domains (SEC filings, financials, etc.) and distinguishes itself from sibling tools by stating it is for browsing/searching/discovering, while siblings are specific tools for specific tasks.

Usage Guidelines: 5/5

Explicitly states when to use: 'Use when you need to browse, search, look up, or discover what tools exist...' and provides a clear directive: 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' This guides the agent on prioritization and context.

entity_profile (Grade A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
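
A sketch using the ticker form; a zero-padded CIK such as "0000320193" would work equally (session as above):

# Names are rejected here; run resolve_entity first if you only have a name.
result = await session.call_tool(
    "entity_profile",
    {"type": "company", "value": "AAPL"},
)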
Behavior: 4/5

No annotations provided, so the description carries full burden. It specifies the tool returns recent SEC filings, fundamentals, patents, news, and LEI with citation URIs. However, it does not explicitly state it is read-only, mention rate limits, or clarify how recent 'recent' is.

Conciseness: 4/5

The description is a single dense paragraph but front-loads the purpose and usage. It is efficient but could benefit from better structuring into clear sections for improved readability.

Completeness: 4/5

The description covers the tool's aggregated output well, including specific data types and citation format. However, it lacks details on pagination, limits, or whether all data sources are always included, which would make it more complete for this complex tool.

Parameters: 4/5

Schema coverage is 100%, so baseline 3. The description adds value by clarifying that 'names not supported' and directing to resolve_entity, which goes beyond the schema's parameter descriptions.

Purpose: 5/5

The description clearly states the tool's purpose: 'Get everything about a company in one call.' It lists specific data sources (SEC filings, fundamentals, patents, news, LEI) and distinguishes from siblings like resolve_entity.

Usage Guidelines: 5/5

Explicit usage guidance: 'Use when a user asks "tell me about X"...' and provides example queries. Also notes when not to use: 'Names not supported — use resolve_entity first,' directly referencing an alternative sibling tool.

forget (Grade A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete
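
Since forget is documented as pairing with remember and recall, here is a sketch of the whole lifecycle under the same session assumption as above (key and value are illustrative):

# Store a value, read it back, list all keys, then delete the entry.
await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})
saved = await session.call_tool("recall", {"key": "target_ticker"})
all_keys = await session.call_tool("recall", {})  # omitting key lists all keys
await session.call_tool("forget", {"key": "target_ticker"})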
Behavior: 3/5

The description states the deletion action but lacks detail on error handling, permission requirements, or whether deletion is permanent. Without annotations, more behavioral context would be helpful.

Conciseness: 5/5

Two succinct sentences: one for purpose, one for usage. No redundant words, highly efficient.

Completeness: 4/5

For a simple one-parameter tool without output schema, the description is nearly complete. Could mention return value or success behavior, but overall adequate.

Parameters: 3/5

Schema coverage is 100% and the description doesn't add information beyond the schema's 'Memory key to delete.' Baseline score of 3 is appropriate.

Purpose: 5/5

The description explicitly states 'Delete a previously stored memory by key,' which is a specific verb and resource. It clearly distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).

Usage Guidelines: 5/5

Provides explicit when-to-use scenarios: 'when context is stale, the task is done, or you want to clear sensitive data.' Also references related tools 'remember' and 'recall' for pairing.

monthly_radiation (Grade B)

Long-term monthly average global / diffuse / direct irradiation on horizontal / inclined plane.

Parameters (JSON Schema)
- angle (optional): Specific tilt angle (default optimum)
- optrad (optional): Include optimum-angle irradiation (default true)
- endyear (optional): Latest year
- horirrad (optional): Include horizontal irradiation (default true)
- latitude (required)
- longitude (required)
- startyear (optional): Earliest year, defaults to climate database start
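
A sketch of a typical call (session as above; the coordinates are illustrative). Leaving out angle falls back to the optimum tilt noted in the schema:

# Long-term monthly averages for a site, on the horizontal plane
# (horirrad defaults to true) and at an explicit 30-degree tilt.
result = await session.call_tool(
    "monthly_radiation",
    {"latitude": 45.0, "longitude": 8.0, "angle": 30},
)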
Behavior: 2/5

Annotations are empty. The description does not disclose behavioral traits such as data source, accuracy, rate limits, or side effects. It implies a read operation but does not state it.

Conciseness: 4/5

Single sentence, front-loaded with key purpose. No wasted words, but could benefit from more structure (e.g., bullet points for output types).

Completeness: 2/5

No output schema; description is brief. Does not explain return format, units, or how to interpret the monthly averages. Insufficient for a tool with 7 parameters and complex outputs.

Parameters: 3/5

Schema description coverage is 71%. The description adds 'global / diffuse / direct irradiation' which are not in parameter descriptions, but does not explain parameters like 'angle' or 'optrad'. Minimal added value.

Purpose: 5/5

The description clearly states 'Long-term monthly average global / diffuse / direct irradiation on horizontal / inclined plane', specifying the resource, verb, and scope. It distinguishes from siblings like 'tmy' and 'pv_performance'.

Usage Guidelines: 2/5

No usage context provided; no mention of when to use this tool vs alternatives like 'tmy' or 'pv_performance'. The description does not guide the agent on selection.

pipeworx_feedback (Grade A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
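
A sketch of a bug report under the same session assumption; the message is illustrative and the shape of context is an assumption based on its one-line description:

# Rate-limited to 5 per identifier per day; does not count against quota.
result = await session.call_tool(
    "pipeworx_feedback",
    {
        "type": "bug",
        "message": "monthly_radiation returned empty data for valid coordinates.",
        "context": {"tool": "monthly_radiation"},  # structured shape assumed
    },
)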
Behavior: 4/5

Discloses rate limits (5 per identifier per day), cost (free, not counted against quota), and impact (team reads digests daily, affects roadmap). With no annotations, the description carries the burden and handles it well.

Conciseness: 4/5

Description is front-loaded with purpose, followed by usage guidance and behavioral notes. Each sentence adds value, though slightly verbose. Still well-structured and efficient.

Completeness: 5/5

Comprehensively covers the tool's behavior, usage, and constraints. No output schema, but not needed for a feedback submission tool. Context signals indicate moderate complexity, and the description fully addresses it.

Parameters: 4/5

Schema has 100% coverage with descriptions. The description adds context for the 'type' enum and reinforces usage for 'message'. Provides examples of what to include, adding value beyond the schema.

Purpose: 5/5

The description clearly states the tool's purpose: to send feedback to the Pipeworx team. It lists specific feedback types (bug, feature, data_gap, praise) and distinguishes from sibling tools by focusing on feedback rather than queries or data retrieval.

Usage Guidelines: 4/5

Provides explicit when-to-use scenarios (bug, feature, data_gap, praise) and instructions on how to describe issues in terms of tools/packs. Does not explicitly state when not to use, but the positive guidance is strong and sufficient.

pv_performance (Grade A)

Model annual + monthly PV system output. Returns kWh/year estimates and monthly breakdowns.

Parameters (JSON Schema)
- loss (optional): System losses (%) — default 14
- angle (optional): Tilt angle in degrees from horizontal. Default 35.
- aspect (optional): Azimuth: 0=south, 90=west, -90=east, 180=north. Default 0.
- latitude (required): Latitude in degrees
- longitude (required): Longitude in degrees
- peakpower (optional): Nominal peak power (kWp). Default 1.
- pvtechchoice (optional): crystSi | CIS | CdTe | Unknown. Default crystSi.
- mountingplace (optional): free | building (BIPV). Default free.
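
A sketch modeling a hypothetical 5 kWp south-facing rooftop system (session as above). Every optional field is spelled out here; omitting them applies the schema defaults:

result = await session.call_tool(
    "pv_performance",
    {
        "latitude": 52.5,           # site coordinates (illustrative)
        "longitude": 13.4,
        "peakpower": 5.0,           # kWp; default is 1
        "angle": 35,                # tilt from horizontal; default 35
        "aspect": 0,                # 0 = south; default 0
        "loss": 14,                 # system losses in %; default 14
        "pvtechchoice": "crystSi",  # default crystSi
        "mountingplace": "free",    # default free
    },
)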
Behavior: 3/5

With no annotations, the description must convey behavior. It states the tool returns estimates and monthly breakdowns, which is useful but lacks details on side effects, permission requirements, or what happens with invalid inputs. It is adequate but not rich.

Conciseness: 5/5

The description is two sentences long, with no redundancy or filler. It efficiently conveys the core purpose and outputs. Every word earns its place.

Completeness: 3/5

Given 8 parameters and no output schema, the description is minimal. It explains what the tool does and what it returns but does not detail how parameters influence results or provide examples. It is functional but could be more comprehensive.

Parameters: 3/5

The input schema has 100% coverage with descriptions for all 8 parameters. The tool description adds no additional parameter meaning beyond what the schema already provides. Thus, it meets the baseline of 3.

Purpose: 5/5

The description clearly states the verb 'Model', the resource 'PV system output', and the scope 'annual + monthly'. It returns specific estimates, making the purpose unambiguous. None of the sibling tools appear to overlap with this PV modeling function.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or when it is inappropriate to use. The context is purely implied by the task.

recall (Grade A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

No annotations present, so description carries full burden. Discloses scope (per identifier), listing behavior when key omitted, and pairing with other tools. Does not cover error cases or rate limits, but for a simple read operation this is adequate.

Conciseness: 5/5

Three tightly written sentences, each adding value. No redundancy, front-loaded with action.

Completeness: 5/5

Given the tool's simplicity (1 optional param, no output schema), the description fully explains input, behavior, scope, and related tools. No gaps.

Parameters: 4/5

Schema covers 100% of parameters; the description adds the 'list all keys' behavior for the omitted key case, which is beyond the schema description. Provides practical usage context.

Purpose: 5/5

Description clearly states the tool retrieves a saved value or lists all keys, with specific verb-resource relationship. It distinguishes from sibling tools remember and forget by naming them.

Usage Guidelines: 4/5

Explicitly describes when to use (look up stored context) and pairs with remember/forget. Lacks explicit 'do not use when' but the positive guidance is clear and context-rich.

recent_changes (Grade A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
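
A sketch of a monitoring call (session as above; the ticker is illustrative):

# "30d" uses the relative shorthand; an ISO date such as "2026-04-01"
# is also accepted for since.
result = await session.call_tool(
    "recent_changes",
    {"type": "company", "value": "MSFT", "since": "30d"},
)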
Behavior: 4/5

No annotations exist, so the description must provide behavioral context. It discloses that the tool fans out to three sources in parallel, the format of 'since', and the return structure. However, it omits potential limitations like rate limits or result freshness.

Conciseness: 5/5

The description is concise, packing necessary information into a single paragraph without redundancy. It starts with a clear purpose statement and efficiently covers usage, parameters, and output.

Completeness: 4/5

Given no output schema, the description explains return types (structured changes, count, URIs) and data sources. It could elaborate on the structure of 'changes', but overall it provides sufficient context for an agent to understand the tool's behavior.

Parameters: 4/5

The input schema covers all parameters with descriptions, but the tool adds value by explaining the 'since' format (ISO vs relative shorthand) and giving examples for 'value'. This goes beyond the schema's basic definitions.

Purpose: 5/5

The description clearly states the tool's purpose: to find recent changes for a company. It provides specific example queries and distinguishes itself from siblings like 'entity_profile' by focusing on time-bound updates across multiple sources.

Usage Guidelines: 4/5

The description explicitly suggests when to use the tool with example user queries. It does not discuss when not to use it or compare to sibling tools directly, but the context makes its niche clear.

remember (Grade A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
Behavior: 5/5

Discloses key-value scoping, persistence differences for authenticated vs. anonymous users, and 24-hour retention for anonymous sessions, despite no annotations provided.

Conciseness: 4/5

Description is front-loaded with purpose and guidelines, but slightly verbose; could be tightened without losing clarity.

Completeness: 3/5

No output schema exists and the description omits what the tool returns (e.g., success confirmation), leaving the agent uninformed about the response format.

Parameters: 3/5

Schema coverage is 100%, so baseline is 3. Description rephrases key and value roles but adds little beyond examples; does not introduce new constraints or format requirements.

Purpose: 5/5

The description clearly states the tool saves data for later reuse, provides concrete examples (resolved ticker, target address), and distinguishes from sibling tools like recall and forget.

Usage Guidelines: 5/5

Explicitly states when to use ('when you discover something worth carrying forward') and pairs with alternatives ('Pair with recall to retrieve later, forget to delete').

resolve_entity (Grade A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
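
A sketch of the name-to-identifier step the description says to run first (session as above):

# Resolves a plain name to the IDs other tools require (ticker, CIK,
# RxCUI, LEI); the result can then feed entity_profile or compare_entities.
ids = await session.call_tool(
    "resolve_entity",
    {"type": "company", "value": "Apple"},
)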
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses the tool returns IDs and citation URIs, and is a read-only lookup. Could mention missing identifier handling, but overall sufficient.

Conciseness: 5/5

Four sentences, front-loaded with main purpose, no redundant content. Every sentence provides value: purpose, examples, workflow, and benefits.

Completeness: 4/5

Given a simple tool with 2 params and no output schema, the description covers purpose, input usage, return values, and workflow context. Could add edge cases, but is complete enough for an agent to use correctly.

Parameters: 4/5

Schema coverage is 100%, but the description enriches parameter meaning with examples (e.g., 'For company: ticker (AAPL), CIK...') and clarifies usage beyond schema descriptions.

Purpose: 5/5

Clearly states 'Look up the canonical/official identifier for a company or drug,' specifying verb and resource. Provides examples and distinguishes from siblings by noting it replaces multiple lookup calls.

Usage Guidelines: 5/5

Explicitly says 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers,' offering clear contextual guidance and workflow placement.

tmy (Grade D)

Typical Meteorological Year — hourly synthetic year representative of the climate.

Parameters (JSON Schema)
- endyear (optional)
- latitude (required)
- longitude (required)
- startyear (optional)
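
Because the parameter descriptions are empty, here is only a plausible call, with the year window assumed by analogy with monthly_radiation (session as above; all values illustrative):

# latitude/longitude select the site; startyear/endyear presumably bound
# the climate-database years used to build the synthetic hourly year.
result = await session.call_tool(
    "tmy",
    {"latitude": 45.0, "longitude": 8.0, "startyear": 2005, "endyear": 2020},
)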
Behavior: 1/5

No annotations exist, and the description fails to disclose any behavioral traits—such as whether this is a read operation, side effects, or return format. The agent receives no information about what invoking this tool entails.

Conciseness: 2/5

The description is short but wastes space on a textbook definition rather than the tool's action. It could be more concise and front-loaded with the verb and resource.

Completeness: 1/5

Given the lack of annotations, output schema, and parameter descriptions, this single-sentence definition is entirely insufficient. The tool has 4 inputs and no return documentation, rendering it nearly unusable without external knowledge.

Parameters: 1/5

Schema description coverage is 0%, and the description does not mention any of the four parameters (latitude, longitude, startyear, endyear). The agent has no clue what values to provide or how they affect results.

Purpose: 2/5

The description defines 'Typical Meteorological Year' as a concept but does not state what the tool does (no verb). It reads as a definition, not an action, leaving the agent unclear whether the tool generates, retrieves, or computes the data.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like monthly_radiation or pv_performance. The description provides no context for appropriate usage or exclusion.

validate_claim (Grade A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
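
A sketch with a v1-supported company-financial claim (session as above):

# Response carries a verdict (confirmed / approximately_correct / refuted /
# inconclusive / unsupported), the actual cited value, and a percent delta.
result = await session.call_tool(
    "validate_claim",
    {"claim": "Apple's FY2024 revenue was $400 billion"},
)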
Behavior: 4/5

With no annotations, the description fully discloses behavioral traits: it returns a verdict with five possible outcomes, includes structured data, actual value with citation, and percent delta. It also notes version limitations (v1). However, it does not mention authentication, rate limits, or side effects, though none seem relevant for a read-only validation tool.

Conciseness: 4/5

The description is well-structured with a clear lead sentence, usage guidance, and return details. It is concise yet informative, though the initial list of synonyms ('fact-check, verify, validate, or confirm/refute') is slightly redundant.

Completeness: 5/5

For a tool with a single required parameter and no output schema, the description provides complete context: what it does, when to use, domain limitations, and what the response contains. It effectively replaces the need for additional documentation.

Parameters: 4/5

Schema coverage is 100% (claim parameter documented). The description adds value by providing example claims ('Apple's FY2024 revenue was $400 billion') and clarifying acceptable phrasing, which goes beyond the schema's brief description.

Purpose: 5/5

The description clearly states the tool's purpose: fact-checking or validating natural-language factual claims. It uses strong action verbs like 'fact-check, verify, validate, confirm/refute' and distinguishes itself from sibling tools by specifying its specialized scope (company-financial claims) and efficiency gains.

Usage Guidelines: 4/5

The description provides explicit use-case triggers ('Use when an agent needs to check whether something a user said is true') and scope limitations ('v1 supports company-financial claims... public US companies'). It does not explicitly list alternatives or when not to use, but the scope naturally excludes other claim types.
