Glama

Server Details

DefiLlama MCP — DeFi analytics from DefiLlama (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-defillama
GitHub Stars: 0

Tool Descriptions — Grade: A

Average 4.1/5 across 17 of 17 tools scored. Lowest: 2.9/5.

Server Coherence — Grade: C

Disambiguation: 2/5

There is significant overlap among tools, e.g., multiple tools for TVL (defi_chain_tvl, defi_tvl_protocols, defi_protocol_detail) and multiple entity lookup tools (compare_entities, entity_profile, resolve_entity). The presence of both DeFi-specific and general Pipeworx tools blurs the server's focus.

Naming Consistency: 2/5

Tool names are inconsistent: some use 'defi_' prefix (defi_chain_tvl), some use plain verbs (compare_entities, resolve_entity), and others are generic (ask_pipeworx, discover_tools). No consistent verb_noun or other pattern, making it hard to predict tool names.

Tool Count: 3/5

At 17 tools, the count is slightly above the ideal range but still manageable. However, the mix of DeFi and general query tools suggests the server could be split into two smaller, more focused servers.

Completeness: 3/5

The DeFi coverage includes TVL, fees, yields, and stablecoins, but lacks write operations (no create/update/delete). The Pipeworx tools add entity resolution and fact-checking, which extend beyond the apparent DeFi domain, creating a sense of incompleteness for a purely DeFi server.

Available Tools

17 tools
ask_pipeworx — Grade: A

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
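As a sketch of what a call to this tool looks like on the wire (assuming a standard MCP tools/call JSON-RPC request; the exact framing and request id are handled by your client library):

```python
import json

# Hypothetical JSON-RPC 2.0 envelope for an MCP tools/call request.
# The method name and params shape follow the MCP spec; the id and
# transport framing come from your client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the current US unemployment rate?",
        },
    },
}

wire = json.dumps(request)
```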
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Clearly states it picks the right tool and fills arguments, implying it is a high-level orchestrator. Does not disclose potential failure modes or latency, but sufficient given simplicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with core purpose in first sentence, examples add value. Could trim slightly, but no wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (one param, no output schema), description adequately explains tool's role. No need for deeper detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and description adds context that the parameter is a natural language request, but adds little beyond schema (which already has 'Your question or request in natural language'). Baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with the strong verb phrase 'Answer a natural-language question' and clarifies that it routes to the best available data source, clearly distinguishing it from sibling tools, which are specific data tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to use when you have a natural language question, and that it handles tool selection automatically. Provides examples, but no explicit when-not-to-use or alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities — Grade: A

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
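A minimal sketch of building and sanity-checking arguments for this tool before calling it (the 2–5 bound and the two entity types come from the description; the helper name is ours):

```python
def build_compare_args(entity_type, values):
    """Validate arguments for compare_entities.

    The tool description requires type "company" or "drug"
    and between 2 and 5 values.
    """
    if entity_type not in ("company", "drug"):
        raise ValueError('type must be "company" or "drug"')
    if not 2 <= len(values) <= 5:
        raise ValueError("values must contain 2-5 entries")
    return {"type": entity_type, "values": list(values)}

args = build_compare_args("company", ["AAPL", "MSFT"])
```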
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. It discloses the specific data returned for each entity type, the source (SEC EDGAR), and that it returns paired data and URIs. It does not mention whether it's read-only or any side effects, but for a comparison tool, this is reasonably transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The first sentence states the core purpose; the rest provides type-specific details and output characteristics. No extraneous words, well front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of comparing multiple entities with different data points per type, the description covers the main aspects: entity types, data fields, source, and output format (paired data + URIs). It also mentions efficiency gains. Lack of output schema is compensated by description. Could add error handling details, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, and the description adds substantial value beyond schema by explaining the meaning of 'values' for each type (e.g., tickers/CIKs for company, drug names) and providing examples. This enriches the parameter semantics significantly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares 2-5 entities side by side, with specific data for 'company' (revenue, net income, cash, long-term debt from SEC EDGAR) and 'drug' (adverse-event report count, FDA approval count, active trial count). It distinguishes from siblings by mentioning it replaces 8-15 sequential agent calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use (comparing 2-5 entities of a type) and implies it's for batch operations, but does not explicitly state when not to use or provide alternative tools for single entity queries. The mention of replacing sequential calls provides additional guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_chain_tvl — Grade: A

Compare total value locked across all blockchains. Returns TVL per chain to identify where capital is concentrated.

Parameters (JSON Schema)

No parameters

Output Schema

- chains (required)
- total_chains (required): Total number of chains tracked
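Given the output shape above, a caller might rank chains by TVL to see where capital is concentrated. The sample payload below is made up for illustration; the schema does not specify the per-chain entry fields:

```python
# Hypothetical result matching the output schema above.
result = {
    "total_chains": 3,
    "chains": [
        {"name": "Ethereum", "tvl": 50_000_000_000},
        {"name": "Tron", "tvl": 8_000_000_000},
        {"name": "Solana", "tvl": 9_500_000_000},
    ],
}

# Sort descending by TVL; the first entry is the dominant chain.
ranked = sorted(result["chains"], key=lambda c: c["tvl"], reverse=True)
top_chain = ranked[0]["name"]
```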
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. It states no parameters needed, implying a simple call. However, it doesn't disclose return format, freshness, or pagination. Adequate for a parameterless tool, but could add context like 'returns a map of chain names to TVL values'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with purpose. Zero wasted words. Perfectly concise for the simplicity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no annotations, the description is adequate for a simple list call. It's complete enough for the agent to select and invoke. Could optionally mention return format or caching, but not required for this level of complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description adds value by confirming no input is needed, which is helpful. Baseline for 0 params is 4, as per guidelines.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what it does with a specific verb ('Compare') and resource ('total value locked across all blockchains'). It distinguishes itself from siblings like 'defi_protocol_detail' or 'defi_tvl_protocols' by targeting chains, not protocols.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly says it's for chain-level TVL, not protocol-specific data. No explicit when-not or alternatives, but the sibling context and 'no parameters' hint at simplicity. Lacks explicit exclusion of protocol-level queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_protocol_detail — Grade: A

Get detailed metrics for a specific DeFi protocol including TVL history over time, breakdown by blockchain, and token information.

Parameters (JSON Schema)
- protocol (required): Protocol slug (e.g., "aave", "uniswap", "lido", "makerdao")

Output Schema

- url (optional): Official protocol URL
- mcap (optional): Market capitalization in USD
- name (required): Protocol name
- slug (required): Protocol slug identifier
- chain (optional): Primary blockchain
- chains (required): List of supported chains
- symbol (optional): Token symbol
- category (optional): Protocol category
- chain_tvls (optional): TVL breakdown by blockchain
- current_tvl (optional): Current total value locked in USD
- description (optional): Protocol description
- tvl_history_last_30 (required): Last 30 days of TVL history
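With tvl_history_last_30 in hand, a caller could derive a 30-day TVL change. The per-entry shape below is an assumption, since the schema only says "last 30 days of TVL history":

```python
# Hypothetical per-day entries (dates and values are made up).
history = [
    {"date": "2024-05-01", "tvl": 10_000_000_000},
    {"date": "2024-05-15", "tvl": 10_500_000_000},
    {"date": "2024-05-30", "tvl": 11_000_000_000},
]

# Percentage change from the oldest to the newest entry.
first, last = history[0]["tvl"], history[-1]["tvl"]
pct_change = (last - first) / first * 100
```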
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It states it returns TVL history, chain breakdowns, and token info, which implies a read-only operation. However, it does not disclose potential rate limits, data freshness, or any side effects. Given no annotations, a score of 3 is appropriate as the description adds some behavioral context but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose and what it returns. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has only one parameter and no annotations. The description covers the return value ('TVL history, chain breakdowns, and token info'), which is adequate for a simple retrieval tool. It could mention that the result is a JSON object, but the description is mostly complete given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single parameter 'protocol' is described with examples). The description does not add much beyond the schema for this parameter, but the schema already provides sufficient semantics. Since coverage is high, baseline is 3; the description's mention of 'specific DeFi protocol' reinforces the parameter's purpose, earning a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'get' and clearly identifies the resource ('detailed TVL history, chain breakdowns, and token info for a specific DeFi protocol'). It distinguishes itself from siblings like defi_tvl_protocols (which lists protocols) and defi_chain_tvl (which focuses on chains) by emphasizing per-protocol details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use: when you need detailed info about a single protocol. It does not explicitly state when not to use it or mention alternatives, but the sibling list provides context. A brief exclusion would improve clarity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_protocol_fees — Grade: C

Check daily fees and revenue for a DeFi protocol. Returns fee trends to analyze profitability and income changes over time.

Parameters (JSON Schema)
- protocol (required): Protocol slug (e.g., "aave", "uniswap", "lido")
- data_type (optional, default "dailyFees"): Type of data: "dailyFees" or "dailyRevenue"

Output Schema

- name (required): Protocol name
- protocol (required): Protocol slug
- total_7d (optional): Total fees/revenue last 7 days in USD
- data_type (required): Type of fee data returned
- total_24h (optional): Total fees/revenue in last 24 hours in USD
- total_30d (optional): Total fees/revenue last 30 days in USD
- total_all_time (optional): Total fees/revenue all time in USD
- total_48h_to_24h (optional): Total fees/revenue 48h-24h ago in USD
- daily_chart_last_30 (required): Last 30 days of daily data
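Since data_type defaults to "dailyFees", a thin wrapper might validate and fill it in explicitly before the call (the helper name is ours, a sketch rather than part of the server):

```python
VALID_DATA_TYPES = ("dailyFees", "dailyRevenue")

def build_fees_args(protocol, data_type="dailyFees"):
    """Build arguments for defi_protocol_fees, applying the documented default."""
    if data_type not in VALID_DATA_TYPES:
        raise ValueError(f"data_type must be one of {VALID_DATA_TYPES}")
    return {"protocol": protocol, "data_type": data_type}

args = build_fees_args("aave")
```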
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions 'returns daily fee/revenue figures' but does not disclose any behavioral traits like rate limits, data freshness, or whether historical data is available. The tool name implies a focus on fees, but the description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short (two sentences) and front-loaded with the key action. However, it could be more concise by avoiding redundancy (e.g., 'fee or revenue data' and 'daily fee/revenue figures' say similar things).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 parameters with full schema coverage, the description should still clarify the return format or data structure. It does not, leaving the agent unsure what the output looks like. The description is complete enough for a simple tool but lacks behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema; it simply restates 'fee or revenue data' and 'daily fee/revenue figures'. The schema already explains protocol slug and data_type values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets fee/revenue data for a DeFi protocol and specifies daily figures. This distinguishes it from siblings like defi_tvl_protocols (TVL) and defi_yields (yields). However, it does not explicitly differentiate from potential fee-related siblings not present.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus others like defi_protocol_detail or defi_tvl_protocols. The description lacks context on prerequisites or scenarios, leaving the agent to infer from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_stablecoins — Grade: A

Get stablecoin market cap and distribution across blockchains. Returns supply per chain to track adoption and network preferences.

Parameters (JSON Schema)

No parameters

Output Schema

- showing (required): Number of stablecoins returned (max 30)
- stablecoins (required)
- total_stablecoins (required): Total stablecoins tracked
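The per-stablecoin entry shape isn't specified in the schema, so the aggregation below assumes (for illustration only) a chains map of chain name to circulating supply:

```python
# Hypothetical result; the schema leaves the stablecoins item shape open.
result = {
    "showing": 2,
    "total_stablecoins": 2,
    "stablecoins": [
        {"name": "USDT", "chains": {"Ethereum": 40e9, "Tron": 50e9}},
        {"name": "USDC", "chains": {"Ethereum": 25e9, "Solana": 3e9}},
    ],
}

# Sum circulating supply per chain across all returned stablecoins.
supply_by_chain = {}
for coin in result["stablecoins"]:
    for chain, supply in coin["chains"].items():
        supply_by_chain[chain] = supply_by_chain.get(chain, 0) + supply
```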
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It accurately indicates this is a read-only operation with no parameters, providing sufficient transparency for a simple data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (two sentences) and front-loads the purpose. Every sentence is necessary and adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no annotations, the description adequately explains what the tool does. It might benefit from mentioning the data format or time range, but it's complete for a simple query.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so coverage is trivially complete. Since there are no parameters to explain, nothing further is needed from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves stablecoin market data with specific categories (market cap and chain distribution), which distinguishes it from sibling tools like defi_chain_tvl or defi_protocol_detail.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool vs alternatives, though the absence of parameters implies it's for a broad overview. No guidance on exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_tvl_protocols — Grade: A

Search DeFi protocols by name or category to compare their total value locked. Returns protocol name, TVL amount, blockchain, and category.

Parameters (JSON Schema)
- search (optional): Filter protocols by name (case-insensitive substring match)
- category (optional): Filter by category (e.g., "DEXes", "Lending", "Bridge", "CDP", "Yield")

Output Schema

- showing (required): Number of protocols returned (max 50)
- protocols (required)
- total_matched (required): Total number of protocols matching filters
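The search filter is documented as a case-insensitive substring match on the name, with an exact category filter. Replicated client-side it would look like the sketch below (sample rows are made up):

```python
# Made-up protocol rows for illustration.
protocols = [
    {"name": "Uniswap", "category": "DEXes", "tvl": 4.0e9},
    {"name": "Aave", "category": "Lending", "tvl": 11.0e9},
    {"name": "SushiSwap", "category": "DEXes", "tvl": 0.3e9},
]

def matches(row, search=None, category=None):
    """Mirror the documented filters: substring on name, exact category."""
    if search is not None and search.lower() not in row["name"].lower():
        return False
    if category is not None and row["category"] != category:
        return False
    return True

hits = [p["name"] for p in protocols if matches(p, search="swap")]
```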
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden. It indicates the tool is a read operation (retrieves TVL) and returns specified fields. However, it does not disclose pagination, rate limits, or data freshness (e.g., whether TVL is real-time or historical). This is acceptable for a simple data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. It first states the purpose and then lists output fields. The information is front-loaded and sufficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters), the description is complete enough for an agent to use it correctly. It specifies inputs, filtering behavior, and output fields. No additional details (e.g., return format) are needed since the tool returns a standard list.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters (search and category). The description adds context by stating that search is case-insensitive substring match and gives example categories, but this is already implied by the schema. Thus, it adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves TVL for all DeFi protocols, with optional filters by name or category. It lists the output fields (protocol name, TVL, chain, category) and distinguishes from siblings like defi_chain_tvl (chain-level TVL) and defi_protocol_detail (single protocol details).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains optional filtering parameters but does not explicitly state when to use this tool vs alternatives. However, the sibling context and tool name make it clear this is for broad protocol TVL queries, while siblings like defi_protocol_fees or defi_yields are for other metrics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

defi_yields — Grade: B

Find yield opportunities across DeFi pools filtered by project name, minimum TVL, or minimum APY. Returns APY, TVL, and project details.

Parameters (JSON Schema)
- search (optional): Filter by project name (case-insensitive substring match)
- min_apy (optional): Minimum APY percentage (e.g., 5 for 5%)
- min_tvl (optional): Minimum TVL in USD (e.g., 1000000 for $1M)

Output Schema

- pools (required)
- showing (required): Number of pools returned (max 50)
- total_matched (required): Total pools matching filters
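A sketch of the min_apy/min_tvl floors applied client-side, using the documented units (5 means 5% APY; TVL in USD). The pool rows are made up for illustration:

```python
# Made-up pool rows for illustration.
pools = [
    {"project": "aave", "apy": 3.2, "tvl": 500_000_000},
    {"project": "curve", "apy": 8.5, "tvl": 2_000_000},
    {"project": "pendle", "apy": 12.0, "tvl": 50_000_000},
]

def passes(pool, min_apy=None, min_tvl=None):
    """Mirror the documented numeric floors on APY and TVL."""
    if min_apy is not None and pool["apy"] < min_apy:
        return False
    if min_tvl is not None and pool["tvl"] < min_tvl:
        return False
    return True

selected = [p["project"] for p in pools if passes(p, min_apy=5, min_tvl=10_000_000)]
```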
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It mentions that results include APY, TVL, and project info, which gives some idea of the output. However, it does not mention whether the tool is read-only, rate limits, pagination, or any side effects. The description is neutral but lacks depth for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and efficiently conveys purpose and optional filters. It is front-loaded with the core function. One point off because the second sentence could be more concise or integrated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description only vaguely mentions 'APY, TVL, and project info' as return values. With a moderate complexity of three optional filters and no required parameters, the description is incomplete for an agent to fully understand what data to expect or how to use the results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description adds a brief mention of filtering by project name, minimum TVL, or minimum APY, which aligns with the schema but adds no new meaning. The baseline is 3, and no extra value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves DeFi yield pool opportunities with APY, TVL, and project info. It is specific about the resource (DeFi yield pools) and the verb ('Find'). However, it does not distinguish itself from sibling tools like defi_chain_tvl or defi_protocol_detail, which also deal with DeFi data, so it loses one point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus the other DeFi sibling tools. It only mentions optional filters but no when-to-use or when-not-to-use context. With five DeFi-related siblings, this lack of guidance is a significant gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools — Grade: A

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)
- limit (optional, default 20): Maximum number of tools to return (max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description implies search functionality but does not detail behavior such as pagination, sorting, or partial-match handling. However, the input schema clarifies the query and limit parameters. With no annotations provided, the description carries the full burden; it notes that the tool returns the most relevant matches but gives no specifics on ranking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Compact and front-loaded with the key action and resource, followed by an immediate call to action. No wasted words; highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention return format or limitations. However, the description and schema together cover the basics: natural language query, limit. Slight gap in not describing the structure of returned results (e.g., relevance order).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters. The description does not add extra meaning beyond what's in the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'search' and resource 'tool catalog', clearly stating it returns tool names and descriptions. Distinguishes from siblings like ask_pipeworx, defi tools, and memory tools by being a discovery tool for the catalog itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This is a clear directive on when to use the tool; although no explicit when-not-to-use guidance or named alternatives are given, the context strongly implies its primary role.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
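Since names are rejected, a caller could pre-validate the value argument before invoking the tool. A sketch under the assumption that tickers are 1-5 uppercase letters and CIKs are 10 zero-padded digits (the regexes are my inference from the examples above, not a published rule):

```python
import re

# Sketch: pre-validating entity_profile's "value" argument.
# Accepted formats (ticker like "AAPL", zero-padded CIK like "0000320193")
# come from the schema; the exact regexes are assumptions.

TICKER_RE = re.compile(r"^[A-Z]{1,5}$")
CIK_RE = re.compile(r"^\d{10}$")

def is_valid_entity_value(value: str) -> bool:
    return bool(TICKER_RE.match(value) or CIK_RE.match(value))

# Free-form names should be routed through resolve_entity first.
assert is_valid_entity_value("AAPL")
assert is_valid_entity_value("0000320193")
assert not is_valid_entity_value("Apple")
```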
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist. The description mentions that the tool returns citation URIs and replaces many calls, but it does not explicitly state that the tool is read-only, safe, or idempotent. It omits any disclosure of potential side effects or authorization needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with key information: purpose and contents first, then URI details and alternative use cases. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description fully explains return values (citation URIs) and covers critical edge cases (federal contracts and name resolution). Sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds context beyond parameter descriptions by detailing what data the profile includes (SEC, XBRL, patents, etc.), enhancing understanding of how parameters map to results.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a 'full profile of an entity across every relevant Pipeworx pack in one call,' specifics for 'company' type (SEC filings, XBRL, patents, news, LEI), and mentions it replaces 10-15 sequential calls. This distinguishes it from sibling tools like resolve_entity and compare_entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (full profile) and when not ('For federal contracts call usa_recipient_profile directly'). Also advises using resolve_entity if only a name is available, providing clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (B)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete
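The review below notes that the description leaves idempotency and missing-key behavior unstated. One plausible shape for those semantics, sketched against a stand-in dict (the real store and its error behavior are unknown):

```python
# Sketch: a possible semantics for forget, assuming a missing key is a
# safe no-op rather than an error. The dict is a stand-in for the store.

memory_store = {"target_ticker": "AAPL"}

def forget(key: str) -> bool:
    """Delete a stored memory; return True if the key existed."""
    return memory_store.pop(key, None) is not None

assert forget("target_ticker") is True
assert forget("target_ticker") is False  # second call is a safe no-op
```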
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. It states the delete operation but does not mention whether the deletion is permanent, reversible, or requires confirmation. There is no info on error conditions (e.g., key not found) or return value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and efficiently conveys the core purpose. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema, no nested objects), the description is adequate but lacks behavioral details such as idempotency, error handling, or confirmation. The lack of annotations further reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides 100% coverage with a description for the 'key' parameter. The tool description adds little beyond the schema, but the schema itself is sufficient. With only one parameter and full coverage, a score of 4 is justified.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete), the resource (stored memory), and the identifier (key). It distinguishes itself from sibling tools like 'recall' and 'remember', which likely perform read and write operations respectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying 'by key', but does not provide explicit guidance on when to use this tool versus alternatives like 'recall' or 'remember'. No context on prerequisites or side effects is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses rate limiting and that it is 'Free', but omits behavioral details such as whether the operation is synchronous or any side effects beyond sending. For a simple feedback tool this is acceptable but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Each sentence of the description serves a purpose: stating the action and use cases, giving a content guideline, and noting the rate limit and cost. No redundant or vague text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 parameters, no output schema) and the richness of the schema descriptions, the description covers all essential aspects: purpose, usage, rate limit, content guidelines, and cost. No gaps remain for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have schema descriptions, and the tool description adds extra guidance on message content ('Describe what you tried...') beyond the schema. This helps the agent craft appropriate input, though the schema already provides good enum descriptions for 'type'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Send') and clearly identifies the resource ('feedback'). It also lists concrete use cases (bug reports, feature requests, etc.), making it easy to distinguish from sibling tools like 'ask_pipeworx'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool and includes a specific instruction (do not include the end-user's prompt verbatim) and a rate limit. However, it does not explicitly mention when not to use it or suggest alternative tools for related actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It clearly states that the tool retrieves stored memories and that omitting the key lists all of them, and it discloses the scoping (memories saved earlier in the session or in previous sessions). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the action and free of wasted words: the opening covers the core purpose and variant behavior, and the rest adds usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple tool (1 optional param, no output schema), description is complete enough. Explains key behavior and scope. Could optionally mention return format or persistence duration, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline 3. Description adds meaning by explaining 'key' is the memory key to retrieve, and omitting lists all keys. Also provides context that memories are saved from earlier sessions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies verb 'retrieve' and resource 'memory' by key, also explains listing all memories if key omitted. Clearly distinguishes from sibling 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: to retrieve previously stored context. Also explains variant behavior (omit key to list all). However, does not mention when not to use or compare with alternatives beyond siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
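The dual format of since (ISO date or relative shorthand) can be sketched as a small parser. This is an illustration only; the month and year approximations (30 and 365 days) are assumptions, since the page does not define them:

```python
from datetime import date, timedelta

# Sketch: interpreting recent_changes' "since" parameter, which accepts
# an ISO date or relative shorthand ("7d", "30d", "3m", "1y").
# The 30-day month and 365-day year are assumed approximations.

def parse_since(since: str, today: date) -> date:
    unit_days = {"d": 1, "m": 30, "y": 365}
    if since[-1] in unit_days and since[:-1].isdigit():
        return today - timedelta(days=int(since[:-1]) * unit_days[since[-1]])
    return date.fromisoformat(since)  # e.g. "2026-04-01"

today = date(2026, 5, 1)
assert parse_since("7d", today) == date(2026, 4, 24)
assert parse_since("2026-04-01", today) == date(2026, 4, 1)
```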
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the parallel fan-out to multiple data sources, accepted input formats, and return structure (structured changes, total_changes, URIs). This adequately characterizes the tool's behavior for selection and invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (5 sentences), well-structured, and front-loaded with the core purpose. Every sentence adds value—no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters, no output schema, and no annotations, the description covers purpose, inputs, outputs, and use cases. It lacks error handling or pagination details, but is sufficiently complete for a query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters, but the description adds practical guidance: 'since' accepts ISO/relative and suggests '30d'/'1m'; 'value' confirms ticker/CIK; 'type' restricts to company. This extra context helps the agent choose correct values beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('what's new') and the resource ('entity'), and differentiates itself by specifying the time-windowed change-monitoring use case, which distinguishes it from sibling tools like entity_profile (static) and compare_entities (comparison).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly recommends using this tool for 'brief me on what happened with X' or change-monitoring workflows, providing clear use context. However, it does not mention when not to use it or alternatives for other entity types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
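The retention rule described above (persistent for authenticated users, 24 hours for anonymous sessions) can be sketched with a TTL field. The store layout and helper names here are hypothetical:

```python
# Sketch: remember's retention rule — persistent when authenticated,
# 24-hour TTL for anonymous sessions. Store layout is an assumption.

ANON_TTL_SECONDS = 24 * 60 * 60

def remember(store: dict, key: str, value: str, authenticated: bool, now: float) -> None:
    expires = None if authenticated else now + ANON_TTL_SECONDS
    store[key] = (value, expires)

def is_live(entry: tuple, now: float) -> bool:
    _, expires = entry
    return expires is None or now < expires

store = {}
remember(store, "target_ticker", "AAPL", authenticated=False, now=0.0)
assert is_live(store["target_ticker"], 60.0)
assert not is_live(store["target_ticker"], ANON_TTL_SECONDS + 1)
```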
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses behavioral traits: it stores key-value pairs, distinguishes between authenticated (persistent) and anonymous (24-hour) sessions, and implies memory duration. This is sufficient for the tool's simplicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, and each sentence adds value: it states the core function, gives use cases, and explains persistence behavior. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 simple params, no output schema, no annotations), the description is complete. It covers purpose, usage, and behavioral nuances. Could be improved by mentioning what happens on overwrite, but not strictly necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds a little context about what values can be stored (findings, addresses, etc.), but no meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Store a key-value pair in your session memory', which is a specific verb+resource combination. It also distinguishes itself from sibling tools like 'recall' and 'forget' by focusing on storage, not retrieval or deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool ('save intermediate findings, user preferences, or context across tool calls') and provides context about persistence for authenticated vs anonymous users, though it does not explicitly mention when not to use it or suggest alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
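Because the company value accepts three different formats, a caller may want to classify its input before deciding whether resolution is even needed. A sketch; the classification rules are my inference from the examples above, not documented behavior:

```python
import re

# Sketch: classifying resolve_entity's "value" for type="company",
# per the formats in the schema (ticker, zero-padded CIK, or free name).
# The exact rules below are assumptions.

def classify_company_value(value: str) -> str:
    if re.fullmatch(r"\d{10}", value):
        return "cik"
    if re.fullmatch(r"[A-Z]{1,5}", value):
        return "ticker"
    return "name"  # free-form names would need the fuzzy lookup

assert classify_company_value("0000320193") == "cik"
assert classify_company_value("AAPL") == "ticker"
assert classify_company_value("Apple") == "name"
```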
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden for behavioral disclosure. It explains input formats, return values, and versioning (v1 only supports company). However, it does not state whether the operation is read-only or if there are side effects, authentication needs, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with its core purpose and scope, and every sentence serves a purpose, from defining the tool to summarizing version details and output. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 params, no output schema), the description is sufficiently complete. It covers input format, version constraints, and output fields. Missing aspects like error handling or authentication are minor in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, baseline 3. The description adds value beyond the schema by providing examples (AAPL, 0000320193, Apple) and explicitly listing accepted formats for the value parameter, enhancing understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves entities to canonical IDs across Pipeworx data sources, specifying the verb (resolve), resource (entity), and scope. It provides a concrete example for type=company, distinguishes from siblings by focusing on entity resolution, and notes it replaces multiple lookup calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool by stating that it replaces 2-3 lookup calls, consolidating lookups. However, it does not explicitly mention alternatives or when not to use it, nor does it compare with sibling tools. The context is clear but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
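The documented verdicts (confirmed / approximately_correct / refuted) plausibly hinge on the percent delta the tool returns. A sketch of one such mapping; the 1% and 10% thresholds are assumptions, since the page does not state the real cutoffs:

```python
# Sketch: mapping a percent delta between claimed and actual values onto
# validate_claim's documented verdicts. Thresholds (1%, 10%) are assumed.

def verdict_from_delta(claimed: float, actual: float) -> str:
    delta_pct = abs(claimed - actual) / abs(actual) * 100
    if delta_pct <= 1:
        return "confirmed"
    if delta_pct <= 10:
        return "approximately_correct"
    return "refuted"

# A claim of 100B against an actual of 95B is off by ~5.3%:
assert verdict_from_delta(100e9, 95e9) == "approximately_correct"
```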
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full behavioral disclosure burden. It explains the tool returns a verdict, extracted structured form, actual value with citation, and percent delta. It also mentions the underlying data sources (SEC EDGAR + XBRL). Missing details on auth, rate limits, or error handling, but overall transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key information: it states the core purpose first, then adds details on scope, output, and benefits. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter, no output schema, and no annotations, the description covers the essential aspects: purpose, supported domain, output format, and efficiency. It could mention unsupported claim types explicitly, but current detail is sufficient for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'claim' parameter, and the description adds examples and clarifies the natural-language format (e.g. 'Apple's FY2024 revenue was $400 billion'). This provides context beyond the schema property description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fact-check a natural-language claim against authoritative sources, with a specific scope of company-financial claims for US public companies. It differentiates from siblings like 'compare_entities' and 'entity_profile' by focusing on verification with a structured verdict.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use the tool (for company-financial claims) and notes that it replaces 4-6 sequential agent calls, implying efficiency benefits. However, it does not explicitly state when not to use it or provide alternatives for other domains.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
