Glama
Ownership verified

Server Details

AI-to-AI petrol station. 56 pay-per-call endpoints covering market signals, crypto/DeFi, geopolitics, earnings, insider trades, SEC filings, sanctions screening, ArXiv research, whale tracking, and more. Micropayments in USDC on Base Mainnet via x402 protocol.

Status: Healthy
Last Tested: —
Transport: Streamable HTTP
URL: —

Tool Descriptions: B

Average 3.4/5 across 56 of 56 tools scored. Lowest: 2.5/5.

Server Coherence: B
Disambiguation: 2/5

Multiple tools have unclear boundaries, particularly among company intelligence (getCompanyProfile, getB2bIntel, getCompetitorIntel) and crypto data tools (getOnchainData, getDefiYields, getCryptoDerivatives). The runBundle* tools significantly overlap with their constituent atomic tools, causing confusion about whether to use granular or aggregated endpoints.

Naming Consistency: 4/5

Tools follow a consistent camelCase verbNoun pattern throughout (getX, checkX, runBundleX, screenX). While the convention is predictable, the excessive use of the 'get' prefix for 40+ tools without hierarchical namespacing makes navigation monotonous.

Tool Count: 1/5

With 56 tools, this represents an extreme mismatch per the rubric (50+ tools). Many endpoints should be consolidated with parameters (e.g., getAiTokens, getBittensor, and getVirtualsProtocol as one tool with a sector parameter) rather than separate endpoints, indicating poor API design.
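The consolidation suggested above can be sketched as a single hypothetical tool whose JSON Schema takes a sector parameter. The tool name `getAiSectorData`, the enum values, and the validator below are illustrative assumptions, not part of the server's actual API:

```python
# Hypothetical consolidated input schema for one getAiSectorData tool,
# standing in for getAiTokens / getBittensor / getVirtualsProtocol.
CONSOLIDATED_SCHEMA = {
    "type": "object",
    "properties": {
        "sector": {
            "type": "string",
            "enum": ["ai-tokens", "bittensor", "virtuals-protocol"],
            "description": "Which AI-crypto sector to query",
        },
        "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 100,
            "default": 20,
            "description": "Number of assets to return, ranked by market cap",
        },
    },
    "required": ["sector"],
}


def validate_args(args: dict) -> list[str]:
    """Minimal structural check: required keys present, enum respected."""
    errors = []
    for key in CONSOLIDATED_SCHEMA["required"]:
        if key not in args:
            errors.append(f"missing required parameter: {key}")
    sector = args.get("sector")
    allowed = CONSOLIDATED_SCHEMA["properties"]["sector"]["enum"]
    if sector is not None and sector not in allowed:
        errors.append(f"invalid sector: {sector}")
    return errors
```

One tool with an enum keeps the catalog small while still making each sector discoverable from the schema itself.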

Completeness: 5/5

Extremely comprehensive coverage spanning AI (compliance, patents, research), traditional finance (SEC filings, insider trades, analyst ratings, macro data), crypto (on-chain, DeFi, derivatives, NFTs), and geopolitical intelligence. No obvious domain gaps exist despite the excessive granularity.

Available Tools

56 tools
checkAiCompliance: A
Read-only · Idempotent

EU AI Act 2024/1689 risk classification (prohibited/high-risk/limited/minimal) with legal obligations, deadlines, and CISA alerts. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
system | No | AI system name or type being assessed | —
company | No | Company name to classify under EU AI Act | —
description | No | Brief description of the AI system's purpose and capabilities | —
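Since all three parameters are optional, a minimal agent invocation can be sketched as an MCP `tools/call` request. The argument values below are illustrative; the server's actual response and the x402 payment handshake are not shown:

```python
import json

# Sketch of an MCP tools/call request for checkAiCompliance; every
# argument is optional per the schema, so any subset may be supplied.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "checkAiCompliance",
        "arguments": {
            "system": "customer-support chatbot",       # hypothetical values
            "company": "Example AG",
            "description": "LLM assistant that answers billing questions",
        },
    },
}
payload = json.dumps(request)
```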
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnly/idempotent hints, the description adds valuable behavioral context: the $0.005 cost disclosure, the inclusion of CISA alerts alongside EU regulatory data (indicating multi-jurisdictional coverage), and the specific output components (legal obligations, deadlines). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense single-sentence construction front-loads the regulatory scope (EU AI Act 2024/1689) immediately. Every element earns its place: legal citation, classification levels, output components (obligations/deadlines/alerts), and pricing. Zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description enumerates return value categories (risk classification, legal obligations, deadlines, CISA alerts). As a read-only lookup tool with clear annotations, it adequately covers behavioral expectations, though it could mention error handling for non-existent companies.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (system, company, description all documented), the schema carries the parameter semantics. The description focuses on behavior/output rather than inputs, which is appropriate when the schema is self-explanatory. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'EU AI Act 2024/1689 risk classification' with specific categories (prohibited/high-risk/limited/minimal), distinguishing it clearly from sibling tools like getAiNews or getAiPatents which handle different AI-related data domains. The specific legal citation and classification levels provide precise scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description establishes clear domain context (EU AI Act legal compliance) that implicitly distinguishes it from market intelligence siblings, but lacks explicit when/when-not guidance or named alternatives. An agent can infer this is for regulatory compliance checking versus news or patent searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getAiNews: A
Read-only · Idempotent

Real-time AI/tech/market news from HackerNews, Reddit (r/ML, r/LocalLLaMA), NewsAPI. Scored articles and trending keywords. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
hours | No | How many hours back to fetch news (1-168) | —
limit | No | Maximum number of articles to return (1-100) | —
category | No | News category filter: ai, crypto, macro, or all | ai
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure beyond annotations: specifies exact subreddits and APIs used as sources, notes that articles are 'scored' and 'trending keywords' are included (hinting at return structure), and critically discloses the $0.005 cost. This adds significant operational context not present in the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three information-dense sentences with zero waste: sources front-loaded, value-add features (scoring, keywords) in middle, cost at end. Every element earns its place; no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the read-only nature (covered by annotations) and simple parameter set, the description adequately covers data provenance and return content (scored articles, keywords). Minor gap: lacks description of return format structure, though no output schema exists to validate against.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured schema fully documents parameters (hours, limit, category). The description text adds no parameter-specific semantics, but baseline 3 is appropriate given the schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (AI/tech/market news) and specific sources (HackerNews, Reddit r/ML, r/LocalLLaMA, NewsAPI), distinguishing it from sibling research tools like getArxivResearch. However, it lacks an explicit action verb (starts with 'Real-time' rather than 'Fetch' or 'Retrieve'), slightly diminishing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives (e.g., getArxivResearch for academic papers, getAiPatents for IP data). While the data sources imply social/community news coverage, the description fails to state exclusions or prerequisites for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getAiPatents: A
Read-only · Idempotent

USPTO AI patent filings — who is building what in neural networks, autonomous agents, LLMs. $5.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | Number of days back to search for new patent applications and grants (1-365) | —
query | No | Patent search keywords covering the technical domain (e.g. 'autonomous agents LLM') | artificial intelligence agentic
companies | No | Comma-separated assignee company names to filter patents for | —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical cost information ('$5') not present in annotations or schema, which directly impacts invocation decisions. Also clarifies specific technical scope (neural networks, LLMs) beyond the generic 'AI' in the tool name. Does not contradict readOnlyHint=true or other annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient single-sentence structure. Front-loaded with resource identification ('USPTO AI patent filings'), followed by value proposition ('who is building what'), specific domains, and cost. Zero redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 3-parameter search tool with full schema coverage and safety annotations. The cost disclosure compensates partially for missing output schema documentation, though return value format remains undisclosed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description hints at query content ('neural networks, autonomous agents, LLMs') matching the default parameter value but does not explicitly explain parameter syntax, format constraints, or interaction between days/query/companies fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb+resource combination ('USPTO AI patent filings') clearly identifies the data source and content. Scope is precisely delineated with technology domains ('neural networks, autonomous agents, LLMs') that distinguish it from sibling tools like getAiNews or getAiTokens.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to select this tool versus alternatives like getAiNews or checkAiCompliance. While the domain is implied by mentioning patents, there are no when-to-use conditions, prerequisites, or exclusion criteria stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getAiTokens: B
Read-only · Idempotent

AI/ML crypto tokens sector performance — NEAR, FET, AGIX, RNDR, WLD, TAO. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of AI-sector tokens to return, ranked by market cap (1-100) | —
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safe read-only behavior. Description adds specific token examples indicating coverage scope, but fails to disclose what performance metrics are returned (price, volume, market cap) despite having no output schema. Does not mention rate limits or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at one sentence plus fragment. Front-loaded with resource and action. However, the '$0.005' fragment appears to be noise or an editing artifact that doesn't earn its place, slightly degrading clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Critical gap: no output schema exists, yet description fails to explain return format, data structure, or what 'performance' entails (e.g., 24h change, market cap rankings, volume). Given simple single-parameter input, description should compensate for missing output schema but doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'limit' parameter fully documented (1-100, ranked by market cap). Description mentions no parameters, but baseline 3 is appropriate when schema carries full documentation load. No additional semantic context provided beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear resource (AI/ML crypto tokens) and action (sector performance), and distinguishes from siblings like getAiNews or checkAiCompliance by specifying 'sector performance.' Lists concrete token examples (NEAR, FET, AGIX, etc.). Docked one point for the cryptic '$0.005' which lacks context and confuses the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to use this versus sibling AI tools like getBittensor (specific protocol data), getAiNews (news), or getModelPrices (inference pricing). No prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getAnalystRatings: A
Read-only · Idempotent

Upgrades/downgrades on AI/tech stocks — firm name, rating change, price target. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | Number of days back to include analyst rating changes (1-90) | —
symbol | No | Stock ticker to filter analyst ratings for (e.g. NVDA); empty returns market-wide | —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure of cost ('$0.005') which is critical for LLM agent decision-making and not captured in annotations. It also describes the return data structure (firm name, rating change, price target) compensating for the missing output schema. Annotations already cover safety profile (readOnly/idempotent), so the cost and return structure add significant value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense information architecture in a single fragment: domain (AI/tech), data type (upgrades/downgrades), return fields (firm name, rating change, price target), and cost ($0.005). Every element earns its place with zero waste, efficiently front-loading the most critical selection criteria.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately documents the return structure (firm name, rating change, price target) and discloses cost. For a 2-parameter read-only tool, this is sufficiently complete, though it could optionally mention data freshness or source reliability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both 'days' and 'symbol' parameters. The description implies the temporal nature of the data ('upgrades/downgrades') and domain ('AI/tech stocks') which loosely maps to parameters, but does not add specific syntax guidance, format examples, or constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (analyst upgrades/downgrades) and domain scope (AI/tech stocks), with specific return fields (firm name, rating change, price target). It distinguishes from siblings like getCompanyProfile or getEarnings by focusing specifically on analyst rating changes, though it could explicitly state this distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'AI/tech stocks' qualifier provides implied domain-specific usage guidance, suggesting when to select this tool over general market data tools. However, it lacks explicit when-to-use guidance, prerequisites (like requiring valid tickers), or contrasts with related tools like getCompanyProfile that might overlap in coverage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getArxivResearch: B
Read-only · Idempotent

Latest AI/ML papers from ArXiv. Breakthrough detection, trending topics, top authors. Covers cs.AI, cs.LG, cs.CL, cs.CV. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | Number of days back to search for new papers (1-30) | —
limit | No | Maximum number of papers to return (1-100) | —
query | No | Free-text keyword search within paper titles and abstracts | —
category | No | ArXiv subject category to filter by | all
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only safety (readOnlyHint=true, destructiveHint=false). Description adds valuable functional context: 'Breakthrough detection, trending topics, top authors' explains the analysis capabilities beyond simple retrieval, and '$0.005' discloses cost. Does not mention rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with information front-loaded. Each fragment conveys distinct value (source, capabilities, categories, cost). The pricing '$0.005' is marginally extraneous for agent selection but doesn't significantly detract.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema, and while description hints at return content (papers, breakthrough detection, trending topics), it doesn't specify the data structure or format. Adequate for a simple read-only tool but could clarify return values given the missing output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage (days, limit, query, category all documented). Description mentions no parameters, earning the baseline score of 3 since the schema carries the full semantic burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific resource (ArXiv AI/ML papers) and distinguishes from siblings like getAiNews and getAiPatents by specifying academic source and categories (cs.AI, cs.LG, cs.CL, cs.CV). Loses one point for implying rather than explicitly stating the retrieval action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to select this tool over alternatives like getAiNews (industry news) or getGithubTrending (code repositories). While the domain is implied by 'papers,' there are no when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getB2bIntel: A
Read-only · Idempotent

Golden Lead packets for B2B sales agents. SEC + GitHub + jobs → scored leads (HOT/WARM/COLD) with AI pivot signals. $5.

Parameters (JSON Schema)
Name | Required | Description | Default
companies | No | List of company names or domains to generate B2B intelligence for (1-10 companies) | —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent). Description adds critical behavioral context missing from annotations: explicit cost ('$5'), data provenance (SEC + GitHub + jobs), and output structure (scored leads with AI pivot signals). No contradictions with readOnlyHint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: three fragments convey audience, data pipeline, output format, and pricing. Arrow notation '→' effectively visualizes transformation. No redundant words; critical $5 warning placed at end for visibility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by describing return values (scored categories, AI signals). Addresses cost transparency. Could mention parameter optionality or default values, though these are discoverable in schema. Adequate for a single-parameter aggregation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('List of company names or domains...'), establishing baseline 3. Description adds no explicit parameter guidance (e.g., does not mention defaults or optional nature), but schema is self-sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear resource (B2B lead packets) and data sources (SEC, GitHub, jobs), with specific output format (HOT/WARM/COLD scoring). Distinguishes from single-source siblings like getSecFilings or getJobPivots by emphasizing aggregation. Lacks an explicit action verb (implies 'provides/returns' via sentence structure).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Identifies target audience ('B2B sales agents') but provides no guidance on when to use this aggregated tool versus specific alternatives like getCompanyProfile, getJobPivots, or getSecFilings. No prerequisites or exclusions stated despite the $5 cost implication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getBittensor: B
Read-only · Idempotent

Bittensor TAO subnet activity — validator rewards, top subnets, network health. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of top subnets to return ranked by validator rewards (1-64) | —
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds the '$0.005' cost information (valuable behavioral context not in structured fields) and specifies the data scope (validator rewards, network health). It doesn't contradict annotations, though it omits rate limits or response format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at one sentence plus pricing notation. Information is front-loaded with the domain identifier. However, the '$0.005' fragment is telegraphic (unclear if cost, token price, or metric) and could benefit from 'Cost: $0.005' clarity. No wasted words otherwise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (1 optional parameter) and absence of output schema, the description adequately covers what data is returned (validator rewards, top subnets, network health). It meets the minimum viable threshold for a data retrieval tool, though mentioning the default limit behavior (20) or return format would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single 'limit' parameter ('Number of top subnets to return ranked by validator rewards'). The description text doesn't mention parameters, but with complete schema coverage, no additional compensation is required. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the domain (Bittensor TAO) and specific data returned (validator rewards, top subnets, network health). It uses specific nouns and implies the 'get' action from the tool name. However, it doesn't explicitly differentiate from sibling crypto data tools like getCryptoDerivatives or getOnchainData, relying on the user to recognize the Bittensor-specific terminology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as getOnchainData or getDaoGovernance. The '$0.005' notation hints at cost but doesn't explain pricing implications or prerequisites. No mention of when not to use or required context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getCommodities: B
Read-only · Idempotent

Gold, silver, oil, wheat, corn, copper, natural gas — spot prices, trends, supply/demand signals. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
commodities | No | Comma-separated commodity names to fetch (gold, silver, oil, wheat, corn, copper, natgas) | gold,silver,oil,wheat,copper
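Server-side handling of this comma-separated parameter, with the schema default applied when the argument is omitted, might look like the following sketch (function and constant names are hypothetical):

```python
DEFAULT_COMMODITIES = "gold,silver,oil,wheat,copper"
VALID_COMMODITIES = {"gold", "silver", "oil", "wheat", "corn", "copper", "natgas"}


def parse_commodities(raw=None):
    """Split the comma-separated parameter, falling back to the schema default."""
    value = raw if raw else DEFAULT_COMMODITIES
    names = [token.strip().lower() for token in value.split(",") if token.strip()]
    # Silently drop unrecognized names; a real server might instead return an error.
    return [name for name in names if name in VALID_COMMODITIES]
```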
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds valuable context not in annotations: the specific data types returned (spot prices, trends, supply/demand) and pricing ($0.005). However, it omits output format, rate limits, or data freshness details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact with zero filler. The enumeration of commodities immediately establishes scope, and the pricing disclosure ($0.005) is high-value. Deducting one point for the telegraphic style (lacking a subject/verb construction) which slightly impedes scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool, the description adequately covers the domain. It compensates somewhat for the missing output schema by listing return data categories (spot prices, trends). However, it should describe the output structure (object format, array, etc.) to be complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting the 'commodities' parameter as comma-separated names with valid examples. The description does not mention the parameter, but with complete schema coverage, the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (commodities: gold, silver, oil, etc.) and the data returned (spot prices, trends, supply/demand signals). However, it lacks an explicit action verb ('Retrieves', 'Returns') and does not differentiate from the sibling tool getEnergyPrices, which potentially overlaps on oil/natural gas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives like getEnergyPrices or getMacroData. No prerequisites, filtering guidance, or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getCompanyProfile (A)
Read-only, Idempotent

Full company dossier: SEC filings + GitHub velocity + hiring + patents + HN sentiment → HOT/WARM/COLD lead score. $5.

Parameters (JSON Schema)
- days (optional): Lookback window in days for all data sources in the dossier (1-180)
- github (optional): GitHub organisation slug for the company (e.g. openai); leave empty to auto-detect
- company (required): Company name or domain to generate the full intelligence dossier for
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context not in annotations: the $5 cost, the specific external data sources queried (SEC, GitHub, HN), and the scoring methodology (HOT/WARM/COLD classification). Annotations already cover read-only safety (readOnlyHint=true), so the description appropriately focuses on data scope and pricing rather than restating safety properties.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely information-dense and front-loaded. Uses efficient notation (colons, plus signs, arrows) to convey data sources and transformation logic in a single sentence. The cost ($5) is appended without clutter. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description indicates the lead-score output format, it lacks detail on the dossier structure, which matters given that no output schema exists. For a tool aggregating 5+ complex data sources, additional context on return-value structure or pagination would improve completeness, though the scoring mention provides minimally adequate coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (company, days, github). The description implies the 'company' parameter through context but does not add syntax details or usage examples beyond what the schema already provides, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates a 'Full company dossier' aggregating specific data sources (SEC filings, GitHub velocity, hiring, patents, HN sentiment) and produces a specific output format (HOT/WARM/COLD lead score). This distinguishes it from siblings like getSecFilings or getGithubVelocity which provide individual metrics rather than comprehensive lead scoring.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like getB2bIntel or getCompetitorIntel. It does not indicate prerequisites, cost-benefit tradeoffs (beyond the $5 price), or scenarios where specific sibling tools might be preferred over this comprehensive aggregation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getCompetitorIntel (A)
Read-only, Idempotent

Competitive intelligence dossier — product launches, pricing changes, hiring signals vs target company. $5.

Parameters (JSON Schema)
- company (required): Primary company to build competitive intelligence around
- competitors (optional): List of competitor company names to compare against (leave empty to auto-identify)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover safety (readOnlyHint, destructiveHint) and idempotency. The description adds critical behavioral context: the cost ($5) and the specific intelligence types returned (product launches, pricing, hiring signals). It does not contradict annotations and enriches the business logic understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact with zero waste. The single sentence front-loads the deliverable type ('dossier'), lists specific content categories separated by em-dashes, indicates the comparative scope ('vs'), and ends with cost—every element earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema and rich annotations, the description adequately covers the tool's value proposition and cost. Minor gap: it doesn't clarify the output format (structured JSON vs. text report) despite no output schema existing, though 'dossier' implies a document format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score applies. The description implies the 'target company' and competitor comparison conceptually but does not add parameter-specific details (syntax, format examples, or auto-identify behavior) beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool generates a 'competitive intelligence dossier' covering specific signals (product launches, pricing changes, hiring). It implicitly distinguishes from siblings like getCompanyProfile or getB2bIntel by emphasizing comparative analysis ('vs target company') and specific competitive signals, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides cost transparency ('$5') which is valuable usage context, but lacks explicit guidance on when to use this versus similar intelligence tools like getB2bIntel or runBundleCompanyDeep. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getCryptoDerivatives (B)
Read-only, Idempotent

Crypto futures + options — funding rates, open interest, liquidations, basis. $0.005.

Parameters (JSON Schema)
- symbol (optional; default: BTC): Crypto asset ticker to query derivatives data for (e.g. BTC, ETH, SOL)
- exchange (optional; default: all): Exchange to filter by (all, binance, bybit, okx, deribit)
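Since the schema enumerates valid exchanges, a client can apply the defaults and reject out-of-enum values before paying for the call. A sketch, with the validation being a client-side suggestion rather than documented server behavior:

```python
# Enum and defaults taken from the tool's schema documentation.
EXCHANGES = {"all", "binance", "bybit", "okx", "deribit"}

def derivatives_args(symbol: str = "BTC", exchange: str = "all") -> dict:
    """Apply the schema defaults and reject out-of-enum exchanges
    before spending the $0.005 call."""
    exchange = exchange.lower()
    if exchange not in EXCHANGES:
        raise ValueError(f"exchange must be one of {sorted(EXCHANGES)}")
    return {"symbol": symbol.upper(), "exchange": exchange}

print(derivatives_args())                  # {'symbol': 'BTC', 'exchange': 'all'}
print(derivatives_args("eth", "Deribit"))  # {'symbol': 'ETH', 'exchange': 'deribit'}
```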
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds the '$0.005' notation, likely indicating cost per call, which is valuable behavioral context not present in annotations. However, it lacks details on rate limits, caching behavior, or data freshness. Annotations already cover the read-only, idempotent safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at one sentence. Information is front-loaded with resource type followed by specific metrics. The '$0.005' suffix is cryptic without context (unclear if cost or data value), slightly diminishing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lists the categories of returned data (funding rates, basis, etc.), partially compensating for the missing output schema. Combined with comprehensive annotations and a simple 2-parameter schema, this is adequate but minimal.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are fully documented in the schema itself. The description makes no mention of parameters, so it neither adds nor detracts from the schema documentation, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the resource (crypto futures/options) and specific data points returned (funding rates, open interest, liquidations, basis). However, it does not explicitly distinguish from sibling tools like getOptionsFlow or getOnchainData.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives (e.g., getOptionsFlow for options flow data, getOnchainData for general blockchain metrics). No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getDaoGovernance (C)
Read-only, Idempotent

DAO governance activity — active proposals, voting power distribution, treasury size, sentiment. $5.

Parameters (JSON Schema)
- status (optional; default: active): Proposal status filter: active, passed, failed, or all
- protocol (optional): Protocol name to filter DAO governance data for (e.g. uniswap, aave, compound); empty returns all
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly/idempotent), so the description's burden is lighter. It adds value by listing the four specific data categories returned (compensating for missing output schema), but fails to explain the '$5' (likely cost), rate limits, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief and front-loaded with the topic, the trailing '$5' wastes space without explanation (unclear if cost, version, or limit). Every sentence should earn its place; this cryptic token forces the agent to guess its meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description minimally compensates by listing return data categories (proposals, voting power, treasury, sentiment), but lacks structural details, format specifications, or clarification of the '$5' indicator.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with clear examples (uniswap, aave, compound) and enum values. The description mentions 'active proposals', which loosely maps to the status parameter but adds no semantic detail beyond the schema itself, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the resource (DAO governance) and specific data returned (proposals, voting power, treasury, sentiment), but the '$5' suffix is cryptic and potentially confusing. It also fails to distinguish from siblings like getOnchainData that might overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives (e.g., getOnchainData for general protocol data), nor any prerequisites or filtering recommendations beyond the raw parameter definitions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getDefiYields (B)
Read-only, Idempotent

DeFi yield opportunities across Aave, Compound, Curve, Yearn and 100+ protocols. $0.005.

Parameters (JSON Schema)
- chain (optional; default: all): Blockchain to filter yield opportunities by (all, ethereum, arbitrum, polygon, base)
- minApy (optional): Minimum annual percentage yield (APY) threshold to include opportunities
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing safety. The description adds valuable behavioral context not in annotations: the specific protocols covered and the cost per invocation ($0.005). However, it lacks details on data freshness, rate limits, or aggregation methodology.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally concise with two high-value fragments: the first establishes scope and differentiation via specific protocol names, the second states cost. Zero wasted words, perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema (both optional), rich annotations, and no output schema, the description adequately covers the essentials (purpose, scope, cost). However, gaps remain regarding return value structure, data sources, or filtering behavior when no matches exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters (chain and minApy) including default values. The description adds no specific parameter guidance beyond what's in the schema, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (DeFi yield opportunities) and scope (Aave, Compound, Curve, Yearn and 100+ protocols), distinguishing it from siblings like getCryptoDerivatives or getOnchainData. However, it uses a noun phrase rather than an explicit verb ('Retrieve'), relying on the tool name 'getDefiYields' to imply the action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description includes the cost ('$0.005'), it provides no guidance on when to use this tool versus sibling alternatives like getStablecoins, getOnchainData, or getCryptoDerivatives. There are no prerequisites, filtering recommendations, or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEarnings (A)
Read-only, Idempotent

Upcoming and recent earnings: EPS, revenue, beat/miss signals for US equities. $0.005.

Parameters (JSON Schema)
- days (optional): Window in days around today to search for earnings events (1-90)
- symbols (optional): Comma-separated stock tickers to filter (e.g. AAPL,MSFT); leave empty for all
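A hedged sketch of normalizing the two parameters client-side. The `days=7` fallback is an assumption chosen for illustration, since the schema documents no default:

```python
def earnings_args(symbols: str = "", days: int = 7) -> dict:
    """Normalize the comma-separated ticker filter; per the schema,
    an empty string means 'all US equities'. days=7 is an assumed
    fallback, not a documented default."""
    if not 1 <= days <= 90:
        raise ValueError("days must be between 1 and 90")
    tickers = ",".join(t.strip().upper() for t in symbols.split(",") if t.strip())
    args: dict = {"days": days}
    if tickers:
        args["symbols"] = tickers
    return args

print(earnings_args("aapl, msft", days=14))  # {'days': 14, 'symbols': 'AAPL,MSFT'}
print(earnings_args())                       # {'days': 7}
```

Uppercasing and stripping whitespace guards against the common "AAPL, msft" input that a strict server-side match might silently ignore.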
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds value by disclosing the specific data fields returned (EPS, revenue, beat/miss) and the cost ('$0.005'), which is critical behavioral context for an openWorldHint tool accessing external paid data. It does not address rate limits or error states.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two information-dense fragments. It is front-loaded with the core functionality. The placement of pricing ('$0.005') at the end is slightly abrupt without a label like 'Cost:', but every element earns its place by conveying scope, content, or cost.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by listing the specific data fields returned (EPS, revenue, beat/miss). It also notes the asset class (US equities) and cost. It could improve by mentioning data latency or look-ahead/look-back limits for the 'days' parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (both 'days' and 'symbols' are well-documented in the schema), the baseline score is 3. The description provides contextual hints ('upcoming and recent' implies the days parameter; 'US equities' implies the symbols parameter) but does not explicitly elaborate on parameter semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (earnings for US equities) and specific data points provided (EPS, revenue, beat/miss signals), distinguishing it from siblings like getCompanyProfile or getSecFilings. However, it lacks an explicit action verb (e.g., 'Retrieves' or 'Lists'), using instead a descriptive fragment ('Upcoming and recent earnings...').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through domain scoping ('US equities' excludes crypto/FX tools; 'upcoming and recent' excludes historical deep-dives), helping agents select it for near-term earnings events. However, it does not explicitly contrast with sibling alternatives like getCompanyProfile or state prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEarthquakeMonitor (A)
Read-only, Idempotent

USGS significant earthquakes (M4+) — magnitude, region, depth, tsunami risk. $0.005.

Parameters (JSON Schema)
- days (optional): Number of days back to search for seismic events (1-30)
- minMagnitude (optional): Minimum Richter magnitude to include in results (2.0-9.0)
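Given the documented ranges (days 1-30, minMagnitude 2.0-9.0), a client might clamp inputs before the paid call. The defaults of 7 days and magnitude 4.0 below are assumptions; the 'M4+' in the description suggests 4.0 but the schema documents neither:

```python
def quake_args(days: int = 7, min_magnitude: float = 4.0) -> dict:
    """Clamp both values into the schema's documented ranges rather than
    letting the server reject the paid call. Defaults are assumed, not
    taken from the schema."""
    days = max(1, min(30, days))
    min_magnitude = max(2.0, min(9.0, min_magnitude))
    return {"days": days, "minMagnitude": min_magnitude}

print(quake_args(days=45, min_magnitude=1.0))  # {'days': 30, 'minMagnitude': 2.0}
```

Clamping (rather than raising) is a design choice: for a monitoring tool, a slightly narrowed query is usually preferable to a failed one.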
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations establish read-only/idempotent safety, the description adds critical cost disclosure ($0.005) and data source attribution (USGS). It also previews return fields, helping agents anticipate result structure despite missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: source (USGS), scope (M4+), return fields (4 items), and cost ($0.005) delivered in a single phrase. Every token provides unique information with zero filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description compensates by enumerating return fields (magnitude, region, depth, tsunami risk). Combined with annotations covering safety profile and clear input schema, this provides sufficient context for a data retrieval tool. Minor gap: no mention of data freshness or rate limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (days and minMagnitude fully documented), the description appropriately doesn't repeat parameter details. The '(M4+)' notation implicitly references the minMagnitude default value, providing light context without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the resource (USGS significant earthquakes), default scope (M4+), and data returned (magnitude, region, depth, tsunami risk). Among siblings dominated by financial/AI/crypto tools, this uniquely identifies the tool's geological/natural disaster monitoring purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance provided. However, the specific data source (USGS) and unique domain (seismic activity) among financial/AI siblings make the implied usage clear. Lacks guidance on differentiation from getGeopoliticalCrisis or getSpaceWeather.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEconomicCalendar (A)
Read-only, Idempotent

High-impact economic events: CPI, NFP, FOMC, GDP releases with expected vs prior values. $0.005.

Parameters (JSON Schema)
- days (optional): Number of days ahead (and behind) to include in the calendar (1-30)
- countries (optional; default: US,EU,GB,JP): Comma-separated ISO country codes to filter events for
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations by disclosing the cost ('$0.005') and hinting at the data structure returned ('expected vs prior values'). The annotations already confirm read-only, idempotent, safe operations, so the description appropriately focuses on content scope and pricing rather than repeating safety profiles.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with zero wasted words. Every element serves a purpose: the colon-delimited list specifies event types, the clause describes data fields, and the final token indicates cost. Information is front-loaded and immediately actionable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (two simple parameters with complete schema coverage) and the presence of comprehensive annotations, the description is sufficiently complete. It compensates for the missing output schema by describing the return data structure ('expected vs prior values') and specific event types covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for both parameters ('days' and 'countries'), the description appropriately relies on the schema for parameter semantics rather than repeating them. No additional parameter guidance (e.g., formatting examples for country codes) is provided in the description, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (high-impact economic events) and provides concrete examples (CPI, NFP, FOMC, GDP) that distinguish it from sibling tools like getEarnings or getIpoCalendar. However, it uses a noun phrase rather than an explicit action verb (e.g., 'Retrieve'), and doesn't explicitly differentiate from getMacroData.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like getMacroData or getEarnings. There are no stated prerequisites, exclusions, or conditions that would help an agent determine if this is the appropriate tool for a given user query.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEnergyPrices (B)
Read-only, Idempotent

Global energy prices — crude oil (WTI/Brent), natural gas, LNG, coal, electricity spot. $0.005.

Parameters (JSON Schema)
- commodities (optional; default: wti,brent,natgas,lng,coal): Comma-separated energy commodities to fetch (wti, brent, natgas, lng, coal, electricity)
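The overlap with getCommodities on oil and natural gas, noted in the Purpose assessment below, could be resolved client-side with a routing rule like the following. The rule itself is a suggestion, not server-documented behavior: it prefers getEnergyPrices whenever any requested item is energy-specific, since that tool covers benchmarks (WTI/Brent, LNG, electricity) the broader tool lacks:

```python
# Items only getEnergyPrices can resolve, per the two tools' schemas.
# Routing natgas here is an assumption; both tools accept it.
ENERGY_ONLY = {"wti", "brent", "lng", "coal", "electricity", "natgas"}

def pick_tool(items: list[str]) -> str:
    """Route a basket of commodity names to one of the two overlapping tools."""
    if any(i.lower() in ENERGY_ONLY for i in items):
        return "getEnergyPrices"
    return "getCommodities"

print(pick_tool(["gold", "wheat"]))  # getCommodities
print(pick_tool(["brent", "gold"]))  # getEnergyPrices
```

A mixed basket like ["brent", "gold"] would realistically need both calls; this sketch only picks the primary tool, illustrating why explicit disambiguation in the descriptions themselves would be preferable.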
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable domain context (global scope, specific energy types) beyond the annotations, which already indicate read-only, idempotent behavior. The '$0.005' appears to indicate cost but is unexplained, and the description omits return format details or latency expectations for an open-world tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded with the core value proposition (global energy prices). However, the inclusion of '$0.005' without context creates minor ambiguity about whether this is a cost, a data value, or a version number.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description adequately covers the domain scope for a simple single-parameter tool, it lacks explanation of return values (understandable given no output schema) and misses the opportunity to clarify the tool's relationship to commodity data siblings or explain the implications of the open-world hint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the 'commodities' parameter is fully documented in the schema itself. The description does not add semantic information about the parameter syntax or valid combinations, warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (global energy prices) and enumerates specific commodities covered (WTI, Brent, natural gas, LNG, coal, electricity), providing concrete scope. However, it does not explicitly differentiate from the sibling `getCommodities` tool, which may overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to select this tool versus alternatives like `getCommodities`, nor does it mention prerequisites despite the `openWorldHint` annotation suggesting external API dependencies that may affect reliability or latency.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getFearIndex (Grade: B)
Read-only · Idempotent

VIX + Fear & Greed index — market risk temperature and historical context. $0.005.

Parameters (JSON Schema):
- format (optional; default: summary): Output format: summary for key metrics only, full for historical context and trend analysis
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds cost information ('$0.005') and clarifies the composite nature of the data (VIX + Fear & Greed) beyond the annotations. However, does not mention rate limits, data freshness, or what happens if market data is unavailable despite openWorldHint indicating external dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single sentence front-loaded with the core value proposition. The pricing suffix is somewhat unusual but factual. No redundant or filler text, though extreme brevity sacrifices some explanatory detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only retrieval tool with good annotations. Mentions 'historical context' implying output scope, but lacks description of return structure or fields since no output schema exists. Acceptable but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the 'format' parameter, establishing baseline. Description adds no explicit parameter guidance, but the schema adequately documents the summary/full distinction without needing redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the specific data sources (VIX + Fear & Greed Index) and their purpose ('market risk temperature'). Distinguishes from sibling tools like getMarketSentiment by naming specific indices rather than generic sentiment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like getMarketSentiment or getMacroData. No mention of prerequisites, optimal use cases, or when the 'full' vs 'summary' format is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getFundingRounds (Grade: A)
Read-only · Idempotent

VC and PE funding rounds — amount, investors, valuation, sector. AI startup deals highlighted. $5.

Parameters (JSON Schema):
- days (optional): Number of days back to include funding announcements (1-180)
- sector (optional; default: ai): Industry sector to filter funding rounds for (e.g. ai, fintech, biotech, deeptech)
- minAmountM (optional): Minimum deal size in millions USD to include (e.g. 10 for $10M+ rounds)
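The silent sector default is the sort of parameter interaction an agent should surface before calling. A sketch of client-side defaulting, where apply_defaults is a hypothetical helper and the default values are copied from the schema above:

```python
# Hypothetical client-side defaulting for getFundingRounds, mirroring
# the schema so the effective server-side call is explicit.
SCHEMA_DEFAULTS = {"sector": "ai"}  # 'days' and 'minAmountM' declare no default

def apply_defaults(arguments: dict) -> dict:
    """Return the arguments the server will effectively see."""
    merged = dict(SCHEMA_DEFAULTS)
    merged.update(arguments)
    return merged

# An agent asking only for $10M+ rounds silently gets sector="ai":
effective = apply_defaults({"minAmountM": 10})
print(effective)  # {'sector': 'ai', 'minAmountM': 10}
```

Making the merged arguments visible in logs helps explain why, say, fintech rounds are missing from a result set.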
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly, idempotent, non-destructive), so the description focuses on adding value: it discloses return fields (amount, investors, valuation, sector), hints at default filtering behavior ('AI startup deals highlighted'), and likely indicates cost ('$5'). This provides concrete behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief and front-loaded with the core resource and data fields. However, the final '$5' fragment is structurally ambiguous (cost? confidence score?) and wastes the limited space without clarification. Otherwise efficient use of a single sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description compensates effectively by enumerating the specific data fields returned (amount, investors, valuation, sector). Combined with rich annotations covering safety and the implied cost disclosure, the description provides sufficient context for a read-only data retrieval tool with three optional parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds semantic value by stating 'AI startup deals highlighted,' which explains the rationale for the default sector='ai' parameter value and guides the user toward expected usage. This contextualizes the parameter beyond the schema's technical description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (VC and PE funding rounds) and specific data points returned (amount, investors, valuation, sector). It distinguishes from getAiNews by focusing on funding rather than general news. However, the trailing '$5' is cryptic (likely cost, but ambiguous), and it fails to differentiate from the sibling getPrivateEquity tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to select this tool versus alternatives like getPrivateEquity (which overlaps conceptually) or getCompanyProfile (which may contain funding history). No prerequisites, exclusions, or workflow context is mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getFxRates (Grade: A)
Read-only · Idempotent

Live FX rates — major pairs, minors, crypto vs USD, DXY. $0.005.

Parameters (JSON Schema):
- pairs (optional; default: EURUSD,GBPUSD,USDJPY,AUDUSD,DXY): Comma-separated FX pairs or indices to fetch (e.g. EURUSD,GBPUSD,DXY)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnly/idempotent/destructive hints, the description adds crucial cost information ('$0.005') and data freshness characteristics ('Live') that are essential for agent decision-making but absent from structured metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with zero waste—every token earns its place. It front-loads the core action ('Live FX rates'), immediately follows with scope, and ends with cost, creating an information-dense single sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter data retrieval tool with good safety annotations, the description is sufficiently complete. It covers cost, data scope, and freshness. A perfect score would require hinting at return structure or default behavior, though the absence of an output schema limits what must be described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds semantic value by exemplifying valid pair categories (major pairs, minors, crypto vs USD, DXY), helping agents understand the expected input format and available instruments beyond the schema's generic 'Comma-separated FX pairs' definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'Live FX rates' and specifies scope coverage (major pairs, minors, crypto vs USD, DXY), which implicitly distinguishes it from sibling tools like getCommodities or getOnchainData. However, it does not explicitly contrast with specific alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives (e.g., getCryptoDerivatives for futures vs spot FX, or getCommodities for non-currency assets). Users must infer applicability from the scope list alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getGeopoliticalCrisis (Grade: B)
Read-only · Idempotent

GDELT real-time crisis monitoring — Iran/Israel/Trump/Russia. Crisis score, escalation risk, oil/gold/USD market impact, OFAC alerts, Reddit/HN viral signals. $25.

Parameters (JSON Schema):
- regions (optional; default: middle-east,ukraine,taiwan): Comma-separated geopolitical regions to monitor (e.g. middle-east, ukraine, taiwan, korea)
- includeMarketImpact (optional): Whether to include projected market impact on oil, gold, and USD for each crisis event
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context about data sources (GDELT, OFAC, Reddit/Hacker News), real-time nature, and specific signal types returned (viral signals, escalation risk). It does not contradict annotations and adds meaningful behavioral detail beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded with key capabilities. However, the trailing '$25' is unexplained noise that provides no value to an AI agent selecting or invoking the tool. The em-dash structure creates slight ambiguity about whether Trump/Russia are monitored regions or example outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by listing return data types (crisis scores, market impact, OFAC alerts). However, it lacks structural details about response format, pagination behavior, or null-result handling. The sibling runBundleGeopolitical also leaves an undocumented overlap the description should resolve.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'oil/gold/USD market impact' which semantically maps to includeMarketImpact, and lists regions (Iran/Israel/Trump/Russia) that hint at the regions parameter values, but does not explicitly document parameter syntax or format beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as a GDELT-based crisis monitor and lists specific output metrics (crisis score, escalation risk, OFAC alerts, Reddit/HN signals). It distinguishes from generic market data siblings by specifying geopolitical focus and real-time monitoring. However, it fails to differentiate from the sibling runBundleGeopolitical tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this specific tool versus alternatives like runBundleGeopolitical or getMacroData. There are no prerequisites, exclusion criteria, or workflow recommendations mentioned. The cryptic '$25' at the end provides no actionable usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getGithubTrending (Grade: B)
Read-only · Idempotent

GitHub trending repositories — by language, topic, stars today/week. AI/ML repos highlighted. $0.005.

Parameters (JSON Schema):
- topic (optional; default: ai): GitHub topic tag to filter repositories by (e.g. ai, llm, agents, mcp)
- period (optional; default: daily): Trending time window: daily, weekly, or monthly
- language (optional): Programming language to filter by (e.g. python, typescript, rust); leave empty for all
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already confirm read-only, idempotent, non-destructive behavior. The description adds critical cost information ('$0.005') and special data handling ('AI/ML repos highlighted') that agents need for decision-making but are not present in structured metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient at three information-dense fragments. Front-loads the core purpose immediately. Every element earns its place: scope (trending repos), filters, special behavior (AI highlighting), and cost. Minor deduction for choppy sentence fragments.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only tool with complete schema annotations. However, lacks description of return values (what repository data is returned?) which would be helpful given no output schema exists. Cost and AI-highlighting behavior are covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters including enums and examples. The description mentions 'by language, topic' which maps to parameters but adds no additional semantic context, syntax details, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (get trending repositories) and key filters (language, topic, time period). The 'stars today/week' phrase and 'trending' terminology distinguish it from sibling getGithubVelocity (which tracks development velocity), though it doesn't explicitly name that alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists available filters but provides no explicit guidance on when to use this tool versus alternatives like getGithubVelocity, nor any prerequisites or conditions where this tool would be inappropriate. The user must infer usage from the parameter descriptions alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getGithubVelocity (Grade: B)
Read-only · Idempotent

Company GitHub AI pivot score — new AI repos, topic changes, star velocity, commit frequency. $5.

Parameters (JSON Schema):
- org (required): GitHub organisation slug to analyse for AI activity (e.g. openai, anthropic, microsoft)
- days (optional): Number of days back to measure repository activity and velocity (1-365)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive properties. The description adds valuable context by disclosing the specific metrics calculated (star velocity, commit frequency, etc.) and includes '$5' which appears to indicate pricing/cost information—critical behavioral context for an agent deciding whether to invoke the tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently packs the core purpose and metrics. However, the trailing '$5' is structurally awkward and unexplained (appears to be pricing), which slightly reduces clarity without aiding agent decision-making.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description adequately explains what metrics are analyzed but fails to describe the return format, structure, or whether the 'score' is a single number, object, or report. For a tool with pricing implications and complex aggregated data, this gap leaves agent uncertainty about result handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for both 'org' and 'days' parameters, the schema carries the semantic burden. The description mentions 'Company' which loosely maps to the 'org' parameter, but adds no syntax details, validation rules, or usage examples beyond what the schema already provides. Baseline 3 is appropriate per scoring rubric.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the resource (GitHub), the specific focus (AI pivot score), and the constituent metrics (new AI repos, topic changes, star velocity, commit frequency). It implicitly distinguishes from sibling 'getGithubTrending' by emphasizing AI-specific pivot analysis rather than general trending, though it could explicitly contrast the two.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to select this tool versus alternatives like 'getGithubTrending' or other AI intelligence tools (e.g., 'getAiNews', 'getAiPatents'). There is no mention of prerequisites, optimal use cases, or when to avoid this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getHedgeFunds (Grade: B)
Read-only · Idempotent

Hedge fund 13F filings — top holdings, new positions, exits, sector rotation signals. $5.

Parameters (JSON Schema):
- fund (optional): Hedge fund name to filter 13F filings for (e.g. 'Bridgewater'); empty returns top funds
- sector (optional; default: technology): Sector to focus position analysis on (e.g. technology, healthcare, energy)
- quarter (optional; default: latest): Filing quarter to retrieve (e.g. 2024Q4, 2025Q1, or 'latest')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable operational context: the '$5' suggests a cost/credit charge, and listing specific data categories (exits, rotation signals) clarifies what '13F filings' entails. However, it omits rate limits, data freshness, or behavior when no filings exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single-sentence structure with high information density. The front-loaded data categories are useful. Deduction for the unexplained '$5' suffix which creates ambiguity rather than clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a read-only data retrieval tool with complete input schema annotations. Missing output format description (JSON structure, pagination) which would help since no output schema is provided. The unexplained cost indicator leaves an information gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (fund, sector, quarter) including examples and defaults. The description adds no parameter-specific guidance, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (hedge fund 13F filings) and specific data points returned (top holdings, new positions, exits, sector rotation signals). It distinguishes from sibling getSecFilings by specifying 'Hedge fund' scope. However, the dangling '$5' is cryptic (presumably cost) and the description lacks an explicit action verb.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like getSecFilings or getInsiderTrades. No mention of prerequisites (e.g., whether specific fund names are required) or when to prefer 'latest' vs specific quarters. Usage must be inferred from parameter names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getInsiderTrades (Grade: A)
Read-only · Idempotent

SEC Form 4 insider buys/sells — bullish/bearish signal for any US stock. $0.005.

Parameters (JSON Schema):
- days (optional): Number of days back to search for insider transactions (1-180)
- symbol (optional): Stock ticker to filter insider trades for (e.g. NVDA); leave empty for market-wide view
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing the safety profile. The description adds domain-specific context by specifying 'Form 4' (a specific SEC filing type) and framing the data as trading signals. However, it does not disclose rate limits, data freshness, or the significance of '$0.005' (appears to be pricing but is cryptic).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with the critical domain identifier (SEC Form 4). Every word serves a purpose, though the trailing '$0.005' is opaque and unexplained, slightly detracting from clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema and strong annotation coverage, the description adequately covers the tool's purpose. However, with no output schema provided, the description should ideally characterize the return data structure or content, which it omits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both 'days' and 'symbol' parameters. The description adds no explicit parameter guidance, meeting the baseline score of 3 for cases where the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifically identifies 'SEC Form 4 insider buys/sells' as the resource, distinguishing it from the more general 'getSecFilings' sibling tool. It clearly states the domain (US stocks) and implies the analytical value (bullish/bearish signal).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by characterizing the data as a 'bullish/bearish signal,' suggesting trading analysis use cases. However, it lacks explicit guidance on when to use this versus siblings like getSecFilings or getAnalystRatings, and omits prerequisites or filtering recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getIpoCalendar (Grade: A)
Read-only · Idempotent

Upcoming and recent IPOs — size, pricing, market cap, sector. $0.005.

Parameters (JSON Schema):
- days (optional): Window in days (past and future) to include IPO events (1-90)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover safety profile (readOnly, idempotent, non-destructive), the description adds valuable context about the data returned (specific fields: size, pricing, market cap, sector) and includes pricing information ('$0.005') not present in structured fields. It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise and front-loaded. Every element serves a purpose: the dash introduces the return fields, and the price indicates cost. No filler words or redundant explanations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only retrieval tool with one optional parameter and no output schema, the description adequately covers the return value structure by listing the key data fields. It appropriately delegates time-window specifics to the schema and safety properties to annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'days' parameter, the schema fully documents the input. The description adds no parameter-specific context, but the baseline score of 3 is appropriate since the schema effectively carries the full burden.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (IPOs) and the specific data returned (size, pricing, market cap, sector), distinguishing it from siblings like getEarnings or getCompanyProfile. However, it lacks an explicit action verb (e.g., 'Retrieve'), relying on the tool name to imply the operation.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., getSecFilings for pre-IPO disclosures or getCompanyProfile for post-IPO data). There are no prerequisites, conditions, or exclusion criteria mentioned.


getJobPivots (Grade A)
Read-only · Idempotent

Companies hiring agentic AI roles — Greenhouse, Lever, HN Who's Hiring, Remotive. Buyer intent signal. $5.

Parameters (JSON Schema)
Name | Required | Description | Default
roles | No | List of job titles or role keywords to search for as AI hiring signals | (none)
companies | No | Optional list of specific company names to filter job pivot results for | (none)
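Since both parameters are optional arrays, a caller can omit whichever it does not need. A sketch of that pattern; the helper and its omit-if-empty policy are assumptions, not documented behavior:

```python
def job_pivot_args(roles=None, companies=None):
    """Build arguments for a getJobPivots call, omitting unused keys.

    Hypothetical helper; the schema above only says both lists are optional.
    """
    args = {}
    if roles:
        args["roles"] = list(roles)
    if companies:
        args["companies"] = list(companies)
    return args
```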
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant context beyond annotations: discloses specific third-party data sources (Greenhouse, Lever, etc.), clarifies the business interpretation ('Buyer intent signal'), and notes the cost ('$5'). Annotations already cover safety (readOnly/idempotent), so these additions provide valuable operational transparency.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at three short sentences/fragments. Front-loaded with the core subject matter, followed by data sources, business value, and cost. Zero redundancy or filler—every element (sources, intent signal, pricing) serves agent decision-making.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple read-only tool with good annotations and no output schema. Covers data provenance, business meaning, and cost. Minor gap: does not explicitly describe the return structure (e.g., list of companies vs. job postings), though 'Companies hiring' implies the entity type returned.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'agentic AI roles' which conceptually maps to the 'roles' parameter, but adds no syntax guidance, format examples, or constraints beyond what the schema already provides for the 'roles' and 'companies' arrays.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Identifies the resource (companies hiring for agentic AI roles) and specific data sources (Greenhouse, Lever, HN Who's Hiring, Remotive). The 'Buyer intent signal' adds business context. Lacks an explicit action verb (relies on 'get' from the tool name), and 'Job Pivots' terminology remains undefined.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context via 'Buyer intent signal,' suggesting when this intelligence is valuable (detecting purchasing intent through hiring patterns). However, it fails to explicitly differentiate from siblings like getB2bIntel or getCompanyProfile, or state when NOT to use this versus alternatives.


getMacroData (Grade B)
Read-only · Idempotent

US Fed rate, CPI, M2, unemployment, GDP, yield curve, G10 FX, rate environment signal. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
countries | No | Comma-separated ISO country codes to include macro data for | US,CN,EU,JP,GB
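Because the schema wants a single comma-separated string rather than an array, a client typically normalizes its own list form first. A sketch; the default string is copied from the schema, while the uppercasing and whitespace-stripping are assumed client hygiene:

```python
# Sketch of normalizing the 'countries' parameter for getMacroData.
DEFAULT_COUNTRIES = "US,CN,EU,JP,GB"  # default taken from the schema above

def macro_countries(codes=None):
    """Return the comma-separated country string the schema expects."""
    if not codes:
        return DEFAULT_COUNTRIES
    return ",".join(c.strip().upper() for c in codes)
```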
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, idempotent, and non-destructive traits. The description adds cost transparency ('$0.005') and specific data scope (rate environment signal), though it doesn't clarify data freshness, rate limits, or the exact meaning of 'rate environment signal'.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single sentence. Data types are front-loaded. The pricing suffix ('$0.005') is somewhat incongruous for a tool description but doesn't significantly bloat the text.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only data fetcher with good safety annotations. Lists key data categories covered but omits return format details (no output schema exists to compensate) and doesn't clarify relationship to the macro bundle sibling tool.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the 'countries' parameter. The description implicitly references the parameter through examples ('US', 'G10 FX') but adds no explicit syntax guidance or semantic explanation beyond what the schema already provides.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Lists specific macroeconomic indicators (Fed rate, CPI, M2, unemployment, GDP, yield curve, G10 FX) clearly indicating economic data retrieval. However, it fails to distinguish from similar siblings like runBundleMacroGlobal or getEconomicCalendar which may overlap in scope.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like runBundleMacroGlobal (which likely aggregates macro data) or getEconomicCalendar. The '$0.005' appears to be pricing but offers no context on cost tradeoffs versus other tools.
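That said, the quoted price does let an agent quantify cost tradeoffs on its own. A sketch using exact decimal arithmetic; only the $0.005 figure comes from the description, and the helper itself is illustrative:

```python
from decimal import Decimal

PRICE_PER_CALL = Decimal("0.005")  # USDC per call, as quoted in the description

def session_cost(n_calls):
    """Total cost of n pay-per-call requests at the quoted price."""
    return PRICE_PER_CALL * n_calls
```

Exact decimals matter here: binary floats would accumulate rounding error over many micropayments, while `Decimal` keeps the tally to the cent.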


getMarketMovers (Grade C)
Read-only · Idempotent

Top gainers, losers, most active stocks with volume surge signals. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
type | No | Which mover category to return: gainers, losers, active (by volume), or all | all
limit | No | Number of stocks to return per category (1-50) | (none)
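The enum and range above are easy to enforce before paying for a call. A sketch; the ValueError policy and the clamping of limit are assumptions layered on top of the documented values:

```python
# Client-side validation sketch for getMarketMovers arguments.
# Enum values and the 1-50 limit come from the schema above.
VALID_TYPES = {"gainers", "losers", "active", "all"}

def movers_args(mover_type="all", limit=None):
    if mover_type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    args = {"type": mover_type}
    if limit is not None:
        args["limit"] = max(1, min(50, int(limit)))
    return args
```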
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this is safe, idempotent, and read-only. The description adds value by specifying 'volume surge signals,' indicating processed/derived data rather than raw market data. However, it fails to explain the cryptic '$0.005' (price threshold? cost?), data freshness, or source reliability.


Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the key categories, but the trailing '$0.005' appears to be a fragment or unexplained threshold that creates confusion. It is concise but suffers from an ambiguous structure that leaves the reader uncertain about the tool's cost or filtering behavior.


Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description should explain the return format (e.g., list of stocks with prices, percentages, volumes). It fails to describe what data structure is returned or what 'volume surge signals' specifically contain, leaving significant gaps for an agent trying to interpret results.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions the categories (gainers, losers, active) which aligns with the 'type' enum values, reinforcing the schema, but does not add syntactic details, validation rules, or usage examples beyond what the schema provides.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies that the tool retrieves 'Top gainers, losers, most active stocks' and mentions 'volume surge signals,' providing specific resources and behavioral context. However, it loses a point for the unexplained '$0.005' fragment and for failing to differentiate itself from similar siblings like getTradingSignal or getMarketSentiment.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., getTradingSignal for signals vs. this for raw movers). There are no prerequisites, exclusion criteria, or workflow context to help an agent decide if this is the right tool for the user's intent.


getMarketSentiment (Grade B)
Read-only · Idempotent

Fear & Greed index, CoinGecko global market data, asset prices, trending tokens, RISK_ON/RISK_OFF signal. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
assets | No | Comma-separated list of asset tickers to include (e.g. BTC,ETH,GOLD) | BTC,ETH,VIRTUAL,GOLD,SOL
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, non-destructive properties. The description adds the cost hint '$0.005' and confirms external data sources (CoinGecko), but does not disclose data freshness, caching behavior, or how the RISK_ON/RISK_OFF signal is calculated.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence. Information is front-loaded with the data types returned. The '$0.005' cost indicator is slightly cryptic without a 'Cost:' prefix, but every element conveys necessary information without waste.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description must carry the burden of explaining return values. It enumerates the data components returned (fear index, prices, trending, etc.) but does not specify the structure, format, or nesting of the response object, leaving gaps in understanding the exact payload shape.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the 'assets' parameter is fully described in the schema with default values and format). The description does not mention parameters at all, so it neither adds nor subtracts from the schema documentation, warranting the baseline score of 3.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Lists specific data resources retrieved (Fear & Greed index, CoinGecko data, asset prices, trending tokens, RISK_ON/RISK_OFF signal), making the tool's scope clear. However, it lacks an explicit verb ('Retrieves...') and does not differentiate from sibling 'getFearIndex' which appears to overlap on Fear & Greed functionality.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus specialized siblings like 'getFearIndex' or 'getTradingSignal'. No mention of prerequisites, rate limits, or caching considerations for the market data.


getMergerActivity (Grade B)
Read-only · Idempotent

M&A activity — announced deals, rumored targets, sector consolidation signals. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | Number of days back to scan for merger and acquisition announcements (1-180) | (none)
sector | No | Industry sector to focus M&A intelligence on (e.g. tech, finance, healthcare, energy) | tech
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds the '$0.005' cost information, which is valuable behavioral context not present in the annotations. However, given the readOnlyHint and openWorldHint annotations already establish safety and external data sourcing, the description fails to add crucial details about data freshness, source attribution, or return structure that would help an agent interpret results.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core resource ('M&A activity') and efficiently lists subtypes via em-dash without filler words. While the pricing suffix '$0.005' is somewhat disjointed from the descriptive content, the overall structure respects the agent's attention by delivering the essential domain scope immediately.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description enumerates specific signal types (deals, rumors, consolidation) which partially compensates for the missing output schema, but it provides no indication of data format, volume, or temporal granularity. For a financial intelligence tool returning complex market data, this leaves a significant gap in predicting the response structure.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage—documenting 'days' (1-180 range) and 'sector' (industry examples)—the schema fully carries the parameter semantics burden. The description focuses on output content rather than input parameters, meeting the baseline expectation for high-coverage schemas without adding supplementary usage context.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies specific data domains ('announced deals, rumored targets, sector consolidation signals') that distinguish this from sibling tools like getFundingRounds or getPrivateEquity. It avoids tautology by expanding 'M&A' into concrete signal types. However, it lacks an explicit action verb (e.g., 'Retrieve'), which prevents a top score.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to select this tool over alternatives like getB2bIntel, getPrivateEquity, or getCompanyProfile for acquisition data. There are no prerequisites mentioned, no filtering recommendations, and no comparison to related intelligence tools in the sibling list.


getModelPrices (Grade B)
Read-only · Idempotent

AI model pricing comparison — cost per 1M tokens across OpenAI, Anthropic, Google, Mistral, Groq. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
providers | No | Comma-separated AI providers to include in price comparison (e.g. openai,anthropic,google) | openai,anthropic,google,mistral,groq
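A client can pre-validate the providers list against the documented set before spending a call. A sketch; the supported names and default order are copied from the schema, while rejecting unknown names is an assumption:

```python
# Sketch of validating the 'providers' parameter for getModelPrices.
SUPPORTED = ("openai", "anthropic", "google", "mistral", "groq")

def providers_param(providers=None):
    """Return the comma-separated provider string, or the full default set."""
    if providers is None:
        return ",".join(SUPPORTED)
    unknown = [p for p in providers if p not in SUPPORTED]
    if unknown:
        raise ValueError(f"unsupported providers: {unknown}")
    return ",".join(providers)
```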
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context about the data scope (specific providers covered) and the pricing unit (per 1M tokens). The trailing '$0.005' suggests the output format (dollar amounts) but is ambiguous whether it's an example or minimum value.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded with the core function ('AI model pricing comparison'). The em-dash effectively separates the action from the scope details. The only inefficiency is the isolated '$0.005' fragment at the end, which lacks clear syntactic connection to the rest of the sentence.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter lookup tool with strong annotations, the description is adequate. It conveys the subject matter (pricing), the granularity (per 1M tokens), and the provider coverage. While no output schema exists, the description implies tabular/comparative return data sufficient for agent selection.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'providers' parameter, the baseline is 3. The description reinforces the parameter by listing valid provider names in the text (OpenAI, Anthropic, etc.), aligning with the schema's comma-separated string format, but does not add significant semantic meaning beyond the schema documentation.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs a 'pricing comparison' for 'AI models' with specific cost units ('per 1M tokens') and lists the covered providers (OpenAI, Anthropic, Google, Mistral, Groq). However, it does not explicitly differentiate from the sibling tool 'getAiTokens', which could be confused as having overlapping functionality.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like 'getAiTokens' or other data retrieval tools. There are no stated prerequisites, exclusion criteria, or conditional logic for invocation.


getNftMarket (Grade A)
Read-only · Idempotent

NFT market conditions — floor prices, volume, blue chip sentiment, wash trade detection. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
collections | No | Comma-separated OpenSea collection slugs to analyse (e.g. cryptopunks,azuki) | cryptopunks,bored-ape-yacht-club,azuki
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds valuable context by disclosing the cost ('$0.005') and the specific types of analysis performed (wash trade detection, sentiment analysis), though it omits rate limits or error behavior.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—one efficient fragment listing the resource, specific metrics, and pricing. Every element serves a purpose; there is no redundant or filler text.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter), strong annotations, and the description's enumeration of return values (compensating for the lack of output schema), the description is sufficiently complete. The inclusion of pricing information adds necessary operational context.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter 'collections' is fully documented in the schema itself. The description makes no mention of parameters, relying entirely on the schema. This meets the baseline score of 3 for high schema coverage scenarios.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (NFT market) and specific data points returned (floor prices, volume, blue chip sentiment, wash trade detection), distinguishing it from siblings like getCryptoDerivatives or general market tools. However, it uses a noun phrase ('NFT market conditions') rather than an explicit action verb like 'Retrieve'.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description lacks explicit 'when to use/when not to use' statements or named alternatives, the specificity of the metrics listed (wash trade detection, floor prices) implies usage for NFT-specific analysis versus general crypto or market data tools. It does not clarify boundaries with overlapping siblings like getOnchainData.


getOnchainData (Grade B)
Read-only · Idempotent

Bitcoin fees/mempool/hashrate, Ethereum gas oracle, DeFi TVL from 500+ protocols, top yield opportunities. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain to query: all, btc (Bitcoin), eth (Ethereum), or defi (DeFi protocols) | all
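The chain enum effectively selects which of the description's data categories come back. A sketch of that mapping; the category names are paraphrased from the tool description, and with no output schema the actual response shape remains a guess:

```python
# Sketch: which data categories each 'chain' value should cover,
# per the getOnchainData description (not a documented response schema).
CHAIN_SCOPE = {
    "btc": ["fees", "mempool", "hashrate"],
    "eth": ["gas oracle"],
    "defi": ["TVL (500+ protocols)", "top yields"],
}
CHAIN_SCOPE["all"] = [c for v in ("btc", "eth", "defi") for c in CHAIN_SCOPE[v]]

def expected_sections(chain="all"):
    if chain not in CHAIN_SCOPE:
        raise ValueError("chain must be one of: all, btc, eth, defi")
    return CHAIN_SCOPE[chain]
```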
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: the '$0.005' indicates a cost-per-use model not captured in structured fields, and '500+ protocols' communicates data scope. Annotations already establish read-only/idempotent safety, so the description appropriately focuses on economic and coverage traits.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact and front-loaded; every element specifies a data category or cost. However, the trailing '$0.005' fragment lacks syntactic context (cost vs. fee), slightly diminishing clarity despite the brevity.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool aggregates 500+ protocols across multiple chains, the description provides only a high-level category list without explaining data granularity, update frequency, or response structure. Adequate but minimal given the complexity and lack of output schema.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'chain' parameter, the schema carries the full burden of parameter documentation. The description mentions 'Bitcoin,' 'Ethereum,' and 'DeFi' which implicitly map to enum values, but adds no explicit parameter guidance, syntax, or usage patterns beyond the schema.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Lists specific data resources (Bitcoin fees/mempool/hashrate, Ethereum gas oracle, DeFi TVL, yield opportunities) giving a clear picture of what on-chain data is retrieved. However, it does not explicitly differentiate from sibling 'getDefiYields' which appears to overlap with the yield/DeFi functionality mentioned.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'getDefiYields' or 'getCryptoDerivatives', nor does it explain when to select specific chain values (btc vs eth vs defi) or prerequisites for usage.


getOptionsFlow (Grade B)
Read-only · Idempotent

Unusual options activity — volume/OI spikes on SPY, QQQ, NVDA, TSLA. Dark pool + sweep detection. $0.005.

Parameters (JSON Schema)
Name | Required | Description | Default
symbol | No | Underlying stock ticker to scan for unusual options activity | SPY
minPremium | No | Minimum total premium (USD) for a contract to qualify as unusual activity | (none)
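A sketch of argument assembly for this tool; the SPY default is from the schema, while the uppercase normalization and the non-negative check on minPremium are assumptions (the schema states no lower bound):

```python
# Hypothetical helper for building getOptionsFlow arguments.
def options_flow_args(symbol=None, min_premium=None):
    args = {"symbol": (symbol or "SPY").upper()}
    if min_premium is not None:
        if min_premium < 0:
            raise ValueError("minPremium must be a non-negative USD amount")
        args["minPremium"] = float(min_premium)
    return args
```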
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare the operation is read-only and idempotent, the description adds valuable behavioral context about what constitutes 'unusual' activity (volume/OI spikes, dark pool detection, sweep detection). This domain-specific logic helps the agent understand the tool's filtering criteria beyond the generic safety hints.


Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is admirably compact and front-loaded with key terms. However, the unexplained '$0.005' fragment wastes valuable descriptive space without clarifying whether this refers to pricing, minimum increments, or data granularity, detracting from an otherwise tight structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a two-parameter data retrieval tool with no output schema and comprehensive safety annotations, the description provides sufficient domain context. It adequately explains what type of unusual activity is detected without needing to elaborate on return values, which would be the responsibility of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (symbol and minPremium). The description references specific tickers (SPY, QQQ, etc.), which loosely illustrates the symbol parameter but adds no semantic detail beyond what the schema provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's domain (unusual options activity) and specific signals detected (volume/OI spikes, dark pool, sweep detection). However, it lacks an explicit action verb (e.g., 'Retrieve' or 'Scan') and the '$0.005' reference is cryptic without context, slightly muddying the purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus siblings like getTradingSignal, getInsiderTrades, or getMarketMovers. The specific ticker list (SPY, QQQ, NVDA, TSLA) suggests example inputs but doesn't explicitly state these are supported symbols or when this tool is preferred over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getPrivateEquity (B)
Read-only · Idempotent

Private equity and VC deal flow — funding rounds, exits, dry powder, sector focus. $0.005.

Parameters (JSON Schema)
- days (optional): Number of days back to include private equity and venture capital deals (1-180)
- sector (optional, default: ai): Industry sector to filter PE/VC deals for (e.g. ai, fintech, biotech, crypto)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds the cost information ('$0.005') not present in annotations, and lists specific data categories returned (funding rounds, exits, dry powder). Annotations already cover safety profile (readOnly, non-destructive, idempotent), so the description's burden is lower, but it does not address rate limits, caching, or null-result behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely compact (one sentence plus pricing) and front-loaded with the core subject. The em-dash efficiently lists data categories. However, the pricing token ('$0.005') is somewhat incongruous in a functional description, slightly disrupting the flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter tool with no output schema, the description adequately lists the semantic content of returns (funding rounds, exits, dry powder). However, it lacks structural information about the response format (array vs object) and does not mention default behaviors or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'sector focus' which aligns with the 'sector' parameter, but does not add syntax details, examples, or clarify the 'days' parameter beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (private equity and VC deal flow) and specific data domains covered (funding rounds, exits, dry powder, sector focus). However, it does not explicitly differentiate from the sibling tool 'getFundingRounds', which creates potential ambiguity about when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., getFundingRounds), prerequisites, or exclusions. The description only lists data content without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getRealEstateMarket (A)
Read-only · Idempotent

US real estate market data — home prices, mortgage rates, inventory, regional trends. $0.005.

Parameters (JSON Schema)
- region (optional, default: national): US region or metro area to get real estate data for (e.g. national, new-york, los-angeles, miami)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover safety profile (readOnly, idempotent), the description adds critical behavioral context: the cost per use ($0.005) and explicit enumeration of data dimensions returned, clarifying scope beyond the generic 'market data' implied by the tool name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. Front-loads the domain (US real estate), enumerates specific data points after an em-dash, and closes with pricing. Zero redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a low-complexity, single-parameter read-only tool. Lists the types of data returned (compensating for missing output schema) and includes cost transparency. Could be improved by noting data frequency or format, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'region' parameter, the baseline is met. The description mentions 'regional trends' which loosely maps to the parameter, but adds no syntax guidance or examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the domain (US real estate) and specific data categories returned (home prices, mortgage rates, inventory, regional trends). The specificity distinguishes it from broader siblings like getMacroData or getCommodities, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides cost information ('$0.005') which informs usage economics, but offers no guidance on when to select this tool versus overlapping alternatives like getMacroData or getEconomicCalendar for housing-related queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSecFilings (B)
Read-only · Idempotent

Real-time SEC 8-K/10-K/10-Q filings mentioning AI/autonomous operations. AI-relevance scored. $5.

Parameters (JSON Schema)
- days (optional): Number of days back to search for relevant SEC filings (1-90)
- forms (optional, default: 8-K): Comma-separated SEC form types to include (e.g. 8-K, 10-K, 10-Q, S-1)
- query (optional, default: agentic AI autonomous): Keywords to search within SEC filing text (e.g. 'agentic AI autonomous')
- minScore (optional): Minimum AI-relevance score threshold (0-100) for filtering results
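Because `forms` is a single comma-separated string rather than a JSON array, a small client-side helper can guard against the easy mistake of passing a list. A hedged sketch follows; the form whitelist is taken from the schema's examples and is an assumption about what the server accepts:

```python
KNOWN_FORMS = {"8-K", "10-K", "10-Q", "S-1"}

def build_sec_filings_args(days=30, forms=("8-K",),
                           query="agentic AI autonomous", min_score=None):
    """Assemble a getSecFilings argument dict, joining form types into
    the comma-separated string the schema expects and enforcing the
    documented numeric ranges before the paid call is made."""
    if not 1 <= days <= 90:
        raise ValueError("days must be within 1-90")
    unknown = [f for f in forms if f not in KNOWN_FORMS]
    if unknown:
        raise ValueError(f"unrecognised form types: {unknown}")
    args = {"days": days, "forms": ",".join(forms), "query": query}
    if min_score is not None:
        if not 0 <= min_score <= 100:
            raise ValueError("minScore must be within 0-100")
        args["minScore"] = min_score
    return args
```

Validating locally is worthwhile here because each rejected call would otherwise still incur the per-call charge.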
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations by noting 'Real-time' (freshness guarantee) and 'AI-relevance scored' (indicating results include a relevance algorithm score). However, fails to explain the '$5' reference or describe the return format, which is notable given the lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise (three short phrases), but the '$5' fragment is unexplained noise rather than earned content. Front-loading is good (resource first), though the dangling price token closes the description on a confusing note.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 4-parameter tool with full schema coverage and good annotations. However, lacking an output schema, the description should ideally describe what the returned filings look like (fields, format) or explain the scoring mechanism mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description implicitly confirms the query parameter's purpose by mentioning 'AI/autonomous operations' and the forms parameter by listing '8-K/10-K/10-Q', but adds no syntax details or format constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the resource (SEC 8-K/10-K/10-Q filings) and scope (AI/autonomous operations), distinguishing it from siblings like getAiNews or getCompanyProfile. However, the cryptic '$5' suffix creates minor confusion about whether this indicates pricing, a score threshold, or stock price context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives (e.g., getCompanyProfile for general company info) or prerequisites. No mention of when not to use or rate limits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSemiconductorSupply (B)
Read-only · Idempotent

Semiconductor supply chain intel — TSMC utilization, chip lead times, shortage signals by node. $0.005.

Parameters (JSON Schema)
- nodes (optional, default: 3nm,5nm,7nm,28nm): Comma-separated process nodes to analyse supply chain for (e.g. 3nm, 5nm, 7nm, 28nm)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive safety properties. The description adds valuable context not in annotations: the cost ($0.005) and specific data returned (TSMC utilization metrics). However, it omits return format details, pagination behavior, or rate limits, which would be helpful given the absence of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise and front-loaded. The single sentence efficiently packs the domain, specific metrics, and cost information with zero waste. The em-dash effectively separates the general domain from specific data points.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple, single-parameter data retrieval tool. Rich annotations cover behavioral traits, and 100% schema coverage handles parameter documentation. The description compensates for missing output schema by listing the specific data categories returned, though it could specify the response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'by node' which aligns with the 'nodes' parameter, but adds no additional semantic detail about the comma-separated format, default behavior, or valid node values beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (semiconductor supply chain) and specific data points provided (TSMC utilization, chip lead times, shortage signals), which distinguishes it from generic market data siblings like getCommodities. However, it lacks an explicit action verb (e.g., 'Retrieves'), using 'intel' as a noun phrase instead, which slightly impedes immediate comprehension of the operation type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While the specific domain (semiconductor nodes) implies usage context, there is no explicit 'use this when...' statement, comparison to getMacroData or getCommodities, or mention of prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getShippingRates (A)
Read-only · Idempotent

Global shipping rates — Baltic Dry Index, container rates, port congestion signals. $0.005.

Parameters (JSON Schema)
- routes (optional, default: asia-europe,transpacific,transatlantic): Comma-separated shipping routes to include (e.g. asia-europe, transpacific, transatlantic)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context about what data is actually returned (Baltic Dry Index, container rates, port congestion signals) and includes cryptic but potentially useful cost information ('$0.005'). It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely efficient—two short phrases deliver the domain (global shipping rates), specific data points (Baltic Dry Index, container rates, port congestion), and cost hint. Every element earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description compensates by listing the specific metrics returned (Baltic Dry Index, etc.). Given the single optional parameter and read-only nature, this is sufficient, though mentioning return format (JSON structure) would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the 'routes' parameter including its default values. The description does not mention the parameter at all, but given the high schema coverage, it meets the baseline expectation without adding supplementary parameter guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool retrieves 'Global shipping rates' and enumerates specific metrics covered (Baltic Dry Index, container rates, port congestion signals). This distinguishes it from sibling commodity and forex tools like getCommodities or getFxRates by focusing specifically on maritime logistics data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the specific metrics listed (shipping-specific indices), but provides no explicit guidance on when to use this versus getCommodities or other market data tools, nor does it mention prerequisites or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSpaceWeather (A)
Read-only · Idempotent

NOAA KP index, solar flux, X-ray flares, geomagnetic storm alerts. Impacts HF radio, GPS, satellites. $0.005.

Parameters (JSON Schema)
- alerts (optional): Whether to include active NOAA geomagnetic storm and solar radiation alerts in the response
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical cost information ($0.005) not in annotations, and identifies data source (NOAA). Annotations already cover read-only/idempotent/destructive traits, so description appropriately focuses on domain-specific behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with three information-dense fragments. Front-loaded with data types. Minor deduction for '$0.005' lacking unit context (per call?).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete for a simple data retrieval tool: covers returned data, cost, and use cases. No output schema exists, but description enumerates return values sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'alerts' parameter. Description mentions 'geomagnetic storm alerts' as returned data but doesn't explicitly link it to the parameter or explain the boolean toggle behavior. Baseline 3 is appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Lists specific data returned (NOAA KP index, solar flux, X-ray flares, geomagnetic storm alerts) and distinguishes clearly from 50+ financial/crypto siblings by being the only space weather tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context by stating real-world impacts (HF radio, GPS, satellites), but lacks explicit 'use when' guidance or comparisons to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getStablecoins (B)
Read-only · Idempotent

Stablecoin health monitor — peg deviation, supply changes, depeg risk scores for USDT/USDC/DAI/FRAX. $0.005.

Parameters (JSON Schema)
- tokens (optional, default: USDT,USDC,DAI,FRAX): Comma-separated stablecoin symbols to monitor (e.g. USDT,USDC,DAI,FRAX)
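The `tokens` parameter follows the same comma-separated convention seen across this server. A hedged sketch of client-side normalisation, assuming the four symbols named in the description are the supported set:

```python
SUPPORTED_STABLECOINS = {"USDT", "USDC", "DAI", "FRAX"}

def build_tokens_param(tokens):
    """Normalise a list of stablecoin symbols into the comma-separated
    string the getStablecoins schema expects, rejecting symbols outside
    the set named in the tool description."""
    upper = [t.upper() for t in tokens]
    unsupported = [t for t in upper if t not in SUPPORTED_STABLECOINS]
    if unsupported:
        raise ValueError(f"unsupported stablecoins: {unsupported}")
    return ",".join(upper)
```

If the server in fact accepts a wider universe of stablecoins, the whitelist check should be relaxed; the join-and-uppercase step is the portable part.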
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, safe operations, so the description appropriately focuses on what data is returned (the three specific metrics). The '$0.005', consistent with the per-call pricing quoted by sibling tools, adds cost context, though it is unlabeled and overly terse. It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded with the core function, but the standalone '$0.005' fragment fails to earn its place—it is ambiguous whether this refers to a price deviation threshold, query cost, or something else, creating confusion rather than clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately compensates by detailing the three specific data points returned (peg deviation, supply changes, depeg risk scores). Combined with comprehensive annotations and a simple single-parameter input, this provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description reinforces the valid token values by listing USDT/USDC/DAI/FRAX in the prose, which adds semantic context beyond the schema's 'comma-separated symbols' description, but does not fully expand on format constraints or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (stablecoins) and specific metrics provided (peg deviation, supply changes, depeg risk scores) along with the specific assets covered (USDT/USDC/DAI/FRAX), effectively distinguishing it from sibling tools like getCryptoDerivatives or getDefiYields. However, the cryptic '$0.005' fragment prevents a perfect score as its meaning (threshold? cost?) is unexplained.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus alternatives (e.g., getOnchainData for broader chain metrics), nor are there prerequisites or conditions mentioned. The metrics listed imply a use case for risk monitoring, but no direct guidelines are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getTokenUnlocks (B)
Read-only · Idempotent

Upcoming token vesting unlocks — supply pressure signals for crypto assets. $0.005.

Parameters (JSON Schema)
- days (optional): Number of days ahead to scan for token unlock events (1-180)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, non-destructive, and open-world characteristics. The description adds domain-specific behavioral context by clarifying these are 'vesting' unlocks that create 'supply pressure'—explaining what the data represents economically. It does not contradict annotations, though the '$0.005' remains unexplained (possibly pricing).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence plus a fragment) and front-loaded with the core concept ('Upcoming token vesting unlocks'). The structure is efficient with no filler text. Minor deduction for the dangling '$0.005' which lacks context and could confuse (appears to be pricing metadata rather than description content).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with good annotations, the description adequately covers the retrieval target. However, without an output schema, it could benefit from briefly indicating the return structure (e.g., 'returns schedule of unlock amounts and dates'). The '$0.005' fragment suggests missing context. It meets minimum viability but leaves gaps for an agent expecting to understand the data shape.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'days' parameter (including the valid range 1-180), the schema carries the full burden of parameter documentation. The description mentions no parameters, which is acceptable given the complete schema coverage, meeting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (token vesting unlocks) and their significance (supply pressure signals). It expands the tool name with domain-specific context ('vesting', 'supply pressure'). The '$0.005' fragment is cryptic but doesn't obscure the core purpose. It implicitly distinguishes from sibling tools like getAiTokens or getOnchainData by focusing on supply pressure implications rather than general token data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'supply pressure signals' implies the analytical use case (assessing potential sell pressure from vesting cliffs), providing implied usage context. However, it lacks explicit guidance on when to prefer this over related siblings like getOnchainData or getCryptoDerivatives, and doesn't mention the default 30-day lookahead behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getTradingSignal (A)
Read-only · Idempotent

BUY/SELL/HOLD signal for Gold, BTC, FX. Returns entry, stop loss, take profit, R:R, RSI, EMA, ATR, confidence. $0.005.

Parameters (JSON Schema)
- symbol (optional, default: XAUUSD): Trading instrument symbol (e.g. XAUUSD, BTCUSD, EURUSD)
- timeframe (optional, default: 1h): Chart timeframe: 1m, 5m, 15m, 1h, 4h, 1d, 1w
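Because `timeframe` is a closed enumeration, an agent wrapper can validate arguments before paying for a call. A minimal sketch, with the timeframe list copied from the schema:

```python
VALID_TIMEFRAMES = ("1m", "5m", "15m", "1h", "4h", "1d", "1w")

def make_signal_args(symbol="XAUUSD", timeframe="1h"):
    """Build the getTradingSignal argument dict, enforcing the
    timeframe enumeration before the paid call is made."""
    if timeframe not in VALID_TIMEFRAMES:
        raise ValueError(f"timeframe must be one of {VALID_TIMEFRAMES}")
    return {"symbol": symbol, "timeframe": timeframe}
```

The defaults mirror the schema (XAUUSD, 1h), so calling the helper with no arguments reproduces the server's default behaviour.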
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent/destructive hints, but the description adds crucial behavioral context: the '$0.005' pricing/cost per call and the detailed breakdown of returned technical indicators (R:R, RSI, EMA, ATR, confidence) which compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient three-sentence structure: asset scope and action, return value specification, and cost disclosure. Zero redundancy; every element earns its place with critical information front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description comprehensively documents return values (entry, stop loss, take profit, technical indicators). Combined with complete input schema coverage and annotations, the description provides sufficient context for invocation despite moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While input schema has 100% description coverage (baseline 3), the description adds valuable semantic mapping by listing 'Gold, BTC, FX' which helps agents understand acceptable symbol values (XAUUSD, BTCUSD, EURUSD) beyond the generic schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific action 'BUY/SELL/HOLD signal' and clearly delineates target resources (Gold, BTC, FX). It effectively distinguishes this from sibling tools like getFxRates (raw rates) or getCommodities (commodity data) by emphasizing the actionable trading signal nature.

Usage Guidelines: 3/5

The description implies usage context through the listed return values (entry, stop loss, take profit) which indicate this is for trade execution planning. However, it lacks explicit 'when to use vs alternatives' guidance, such as contrasting with getFxRates for simple price lookups.

getVirtualsProtocol (grade B)
Read-only · Idempotent

Virtuals Protocol AI agents — prices, market cap, volume, top agents by activity. $0.005.

Parameters (JSON Schema)
  limit  (optional): Number of top Virtuals Protocol agents to return (1-100)
Behavior: 3/5

The description adds valuable context about return values (prices, market cap, volume, rankings) which is necessary since no output schema exists. However, the unexplained '$0.005' is cryptic—it could indicate API cost, a price floor, or token value—creating confusion. Annotations already cover read-only/idempotent safety properties.

Conciseness: 3/5

The description is brief and front-loaded with the core purpose. However, the trailing '$0.005' is structurally disconnected and wastes space without context, forcing the agent to guess its meaning (cost? price threshold? token value?).

Completeness: 3/5

For a simple single-parameter listing tool, the description adequately covers the return data shape. However, it lacks context about what Virtuals Protocol is (a platform for tokenized AI agents), the data source, or the significance of the $0.005 figure. With no output schema, it should explain the return format more thoroughly.

Parameters: 3/5

With 100% schema description coverage for the 'limit' parameter, the schema fully documents acceptable values (1-100) and purpose. The description adds no parameter details, meeting the baseline expectation when the schema is self-documenting.

Purpose: 4/5

The description clearly identifies the resource (Virtuals Protocol AI agents) and the specific data returned (prices, market cap, volume, top agents by activity). It distinguishes itself from siblings like getAiTokens or getBittensor by naming the specific protocol. However, it assumes the agent knows what 'Virtuals Protocol' refers to without explanation.

Usage Guidelines: 2/5

There is no guidance on when to use this tool versus alternatives like getAiTokens or getCryptoDerivatives. No prerequisites, filtering guidance, or scenario-based recommendations are provided. The agent must infer usage solely from the resource name.

getWhaleTracker (grade B)
Read-only · Idempotent

On-chain whale wallet movements — large BTC/ETH transfers, exchange inflows/outflows, smart money signals. $5.

Parameters (JSON Schema)
  chain        (optional, default all): Blockchain to monitor for large wallet movements: btc, eth, or all
  minValueUSD  (optional): Minimum transaction value in USD to classify as a whale movement (e.g. 1000000 for $1M)
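The `minValueUSD` threshold above implies a simple classification rule. A sketch of how an agent might post-filter returned transfers; the record shape with a `value_usd` field is assumed, since the tool publishes no output schema:

```python
# Hypothetical transfer records; the real response format is undocumented.
transfers = [
    {"chain": "btc", "value_usd": 2_500_000},
    {"chain": "eth", "value_usd": 40_000},
    {"chain": "eth", "value_usd": 9_800_000},
]

def whale_movements(records: list[dict], min_value_usd: float = 1_000_000) -> list[dict]:
    """Keep only transfers at or above the whale threshold."""
    return [t for t in records if t["value_usd"] >= min_value_usd]

big = whale_movements(transfers)
```

With the $1M example threshold from the schema, only the first and third records would qualify.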
Behavior: 3/5

Annotations already cover read-only, idempotent, and non-destructive traits. The description adds context about specific signal types (exchange flows) but introduces ambiguity with '$5' (possibly indicating cost, but unclear). It does not disclose rate limits, latency, or whether data is real-time vs historical.

Conciseness: 3/5

Single sentence structure with em-dash is efficient and front-loaded, but the trailing '$5' lacks context and wastes interpretive effort. Every word should earn its place; the final notation creates confusion without clear payoff.

Completeness: 3/5

Adequate for a 2-parameter read-only data tool with full schema coverage and safety annotations. However, the absence of output schema description (acceptable per rules) and the unexplained '$5' notation leave gaps in operational context.

Parameters: 3/5

With 100% schema description coverage, the schema adequately documents both parameters (chain and minValueUSD). The description mentions 'BTC/ETH' which aligns with the chain enum, but adds no additional semantic details (e.g., value ranges, parameter interactions) beyond what the schema provides.

Purpose: 4/5

Clearly identifies the resource (whale wallet movements) and specific scope (BTC/ETH transfers, exchange inflows/outflows, smart money signals). However, it does not explicitly differentiate from the sibling tool `getOnchainData`, and the trailing '$5' is cryptic and potentially confusing.

Usage Guidelines: 2/5

Provides no guidance on when to use this tool versus alternatives like `getOnchainData` or `getCryptoDerivatives`. No mention of prerequisites, data freshness requirements, or conditions where this tool would be inappropriate.

runBundleAiEconomy (grade B)
Read-only · Idempotent

AI Economy Intelligence — arxiv + github trending + jobs + model prices + AI tokens + regulatory. $100.

Parameters (JSON Schema)
  focus  (optional, default "agentic ai autonomous"): Keywords describing the AI economy area to focus the bundle on (e.g. 'agentic ai autonomous')
Behavior: 4/5

Annotations cover safety (readOnly, non-destructive) and idempotency. The description adds critical behavioral context: the cost ('$100') and the specific external sources queried. However, it omits details about failure modes (what if arxiv is down?) and response format.

Conciseness: 4/5

Extremely compact with zero filler words. The telegraphic style ('arxiv + github trending...') efficiently conveys scope. However, the terminal '$100' is ambiguous (cost? tier? confidence score?) and lacks context, slightly undermining the clarity.

Completeness: 3/5

Adequate for a simple bundle tool but missing explanation of the '$100' value and the value proposition of using the bundle versus individual source tools. No output schema exists, so description does not need to explain return values.

Parameters: 3/5

Schema coverage is 100% with the 'focus' parameter fully documented ('Keywords describing the AI economy area...'). The description does not mention the parameter, but since the schema is self-explanatory, no additional description is needed. Baseline score applies.

Purpose: 4/5

Lists specific data sources (arxiv, github trending, jobs, model prices, AI tokens, regulatory) that clearly scope the 'AI Economy' domain, distinguishing it from sibling bundles like runBundleCryptoAlpha. However, it uses a noun phrase ('AI Economy Intelligence') rather than an explicit action verb (e.g., 'Aggregates' or 'Retrieves').

Usage Guidelines: 2/5

Provides no guidance on when to use this bundle versus the granular sibling tools (getArxivResearch, getGithubTrending, getJobPivots, etc.) that provide the constituent data. Does not explain the trade-off between bundle convenience and individual tool specificity.

runBundleCompanyDeep (grade A)
Read-only · Idempotent

Company Deep Dive — profile + competitor intel + hedge funds + analyst ratings + filings. $50.

Parameters (JSON Schema)
  company  (required): Company name to run the full deep-dive intelligence bundle on
  github   (optional): GitHub organisation slug for the company; leave empty to auto-detect
Behavior: 4/5

Annotations cover safety (readOnly, idempotent, non-destructive), but the description adds critical behavioral context: the $50 cost (essential for budget-aware decisions) and the composite nature of the operation (multiple data sources aggregated). It could further improve by mentioning latency implications or cache behavior of bundled requests.

Conciseness: 5/5

The description is optimally concise—every element earns its place. The em-dash format efficiently lists components, and the price ($50) is appropriately positioned as a critical constraint. Zero redundancy or wasted words.

Completeness: 3/5

Given the absence of an output schema and the tool's complexity as a multi-source bundle, the description adequately lists included data sources but lacks details on return structure, data freshness, or how results are organized (e.g., nested objects vs. flat arrays). It meets minimum viability but leaves gaps for an agent predicting output format.

Parameters: 3/5

With 100% schema description coverage, the structured schema already fully documents both parameters (company name and optional GitHub slug). The description adds no parameter-specific guidance, which is acceptable per scoring guidelines when schema coverage is high, warranting the baseline score of 3.

Purpose: 5/5

The description clearly states this is a comprehensive 'Company Deep Dive' bundle and explicitly enumerates the included data sources (profile, competitor intel, hedge funds, analyst ratings, filings). These components directly map to sibling tools (getCompanyProfile, getCompetitorIntel, etc.), effectively distinguishing this as an aggregated alternative to individual lookups.

Usage Guidelines: 3/5

While the description implies comprehensive research use cases by listing multiple data sources, it lacks explicit guidance on when to use this $50 bundle versus free individual tools (getCompanyProfile, getSecFilings, etc.). It does not specify trade-offs like cost-benefit analysis or data freshness compared to individual calls.

runBundleCryptoAlpha (grade C)
Read-only · Idempotent

Crypto Alpha Pack — onchain + whale tracker + DeFi yields + AI tokens + derivatives + stablecoins. $25.

Parameters (JSON Schema)
  chains  (optional, default btc,eth): Comma-separated blockchain networks to include in the crypto alpha bundle
Behavior: 2/5

Adds no behavioral context beyond the annotations (readOnly, idempotent, non-destructive). Does not explain what 'bundle' means (single aggregated call vs. multiple sequential calls), data freshness, or the aggregation logic applied to the disparate data sources listed.

Conciseness: 3/5

Extremely brief (one sentence), but includes irrelevant marketing content ('$25') that adds noise without aiding tool selection. Structure is front-loaded with the pack name, yet lacks functional specificity.

Completeness: 2/5

No output schema is provided, and the description fails to compensate by describing return structure, data format, or what the 'bundle' aggregation contains. For a composite tool spanning six distinct data domains, this is insufficient.

Parameters: 3/5

Schema description coverage is 100% for the single 'chains' parameter, establishing a baseline of 3. The description provides no additional parameter guidance (e.g., supported chain values, default behavior when omitted), but none is required given comprehensive schema documentation.

Purpose: 3/5

The description identifies the domain ('Crypto Alpha') and lists covered data categories (onchain, whale tracker, DeFi yields, etc.), but fails to specify the tool's action or output (e.g., 'retrieves aggregated data,' 'generates report'). The '$25' pricing detail is irrelevant for agent decision-making.

Usage Guidelines: 2/5

No guidance provided on when to use this bundled tool versus individual specialized tools (e.g., getWhaleTracker, getDefiYields, getOnchainData). Given the extensive sibling list including specific crypto data tools, explicit differentiation is needed but absent.

runBundleGeopolitical (grade B)
Read-only · Idempotent

Geopolitical War Room — GDELT Iran/Israel/Trump monitoring + crisis scores + oil/gold/USD/defense market impact + OFAC alerts + Reddit/HN signals. $200.

Parameters (JSON Schema)
  depth    (optional, default deep): Analysis depth: standard (fast overview) or deep (full GDELT + social signal processing)
  regions  (optional, default middle-east,ukraine,taiwan): Comma-separated geopolitical regions to cover in the war room bundle
Behavior: 3/5

Annotations already establish read-only, idempotent, non-destructive safety. The description adds valuable context about data sources (GDELT, OFAC, social signals) and implies computational intensity via 'deep' analysis. The '$200' suggests cost implications not captured elsewhere. However, it omits latency expectations, data freshness, or return structure.

Conciseness: 4/5

The description is a single, dense sentence front-loaded with the core concept. It efficiently lists capabilities using shorthand ('oil/gold/USD/defense'). The trailing '$200' is structurally odd and unexplained, slightly reducing clarity, but overall it avoids waste.

Completeness: 3/5

For a complex aggregation tool with no output schema, the description adequately lists covered entities (Iran/Israel/Trump) and signal types. However, it fails to explain the 'bundle' execution model, caching behavior, or how the crisis scores are calculated, leaving operational gaps given the tool's complexity.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description does not mention the depth or regions parameters explicitly, but the schema fully documents them ('standard' vs 'deep' analysis, comma-separated regions). No additional semantic context is provided beyond the schema.

Purpose: 4/5

The description clearly identifies this as a comprehensive monitoring bundle using specific data sources (GDELT, Reddit, Hacker News) and outputs (crisis scores, OFAC alerts, market impact). The 'bundle' framing distinguishes it from single-purpose siblings like getGeopoliticalCrisis, though it doesn't explicitly contrast with other runBundle* tools.

Usage Guidelines: 2/5

The description provides no guidance on when to use this comprehensive bundle versus simpler alternatives like getGeopoliticalCrisis or versus other bundles like runBundleSovereign. There are no prerequisites, cost explanations (the '$200' is cryptic), or exclusion criteria mentioned.

runBundleMacroGlobal (grade B)
Read-only · Idempotent

Global Macro Pack — macro + FX + interest rates + inflation + consumer + labor data. $50.

Parameters (JSON Schema)
  countries  (optional, default US,EU,JP,GB,CN): Comma-separated ISO country codes to include in the global macro bundle
Behavior: 3/5

Annotations already cover safety profile (readOnly, idempotent, non-destructive) and external dependencies (openWorldHint). The description adds valuable cost disclosure ('$50') and clarifies the scope of data returned, but omits other behavioral traits like response format, data freshness, or volume expectations given the lack of an output schema.

Conciseness: 5/5

The description is optimally concise at one sentence. Every element serves a purpose: the dash-delimited list efficiently communicates data coverage, and the price is disclosed plainly at the end. No redundant or filler text is present.

Completeness: 3/5

Given the absence of an output schema, the description adequately identifies return data categories but fails to describe the structure, format, or granularity of the returned bundle. For a paid tool (indicated by '$50'), the description is minimally viable but lacks richness about the actual deliverable.

Parameters: 3/5

With 100% schema description coverage for the 'countries' parameter, the schema fully documents acceptable inputs. The description adds no parameter-specific guidance, meeting the baseline expectation for well-documented schemas.

Purpose: 4/5

The description identifies the tool as a 'Global Macro Pack' and enumerates specific data categories included (FX, interest rates, inflation, consumer, labor), distinguishing it from sibling tools like runBundleCryptoAlpha or getMacroData. However, it lacks an explicit verb (e.g., 'Retrieve') to unambiguously signal the read operation, though the intent is clear from context.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus alternatives like getMacroData (single indicators) or other runBundle* tools. While the '$50' implies cost considerations, there is no discussion of selection criteria, prerequisites, or when to prefer this comprehensive bundle over specific data getters.

runBundleMarketIntel (grade B)
Read-only · Idempotent

Market Intelligence Pack — signals + onchain + macro + options flow + insider trades + earnings. $25.

Parameters (JSON Schema)
  symbols  (optional): List of trading symbols to include in the market intelligence bundle
Behavior: 3/5

Discloses the $25 cost, which is critical business logic not present in annotations. Annotations already confirm readOnly/idempotent/destructive=false status. However, description fails to explain output structure, data freshness, or how the six data sources are formatted/organized in the response.

Conciseness: 4/5

Extremely terse single-line format with zero redundancy. Data types are efficiently enumerated. However, the price disclosure at the end without context slightly disrupts informational flow.

Completeness: 2/5

For a complex aggregation tool combining six distinct data sources with no output schema provided, the description is inadequate. It fails to describe return format, data structure, or how the aggregated results are organized (e.g., grouped by symbol or by data type).

Parameters: 3/5

Description omits all parameter discussion, but with 100% schema description coverage for the 'symbols' parameter (including default values), the schema sufficiently documents inputs without description assistance. Baseline score applies.

Purpose: 4/5

Lists specific data components aggregated (signals, onchain, macro, options flow, insider trades, earnings) and identifies this as a bundled package. However, it does not differentiate from sibling bundle tools like runBundleCryptoAlpha or runBundleMacroGlobal which likely overlap in content.

Usage Guidelines: 2/5

Provides no guidance on when to use this paid bundle versus individual tools (getOptionsFlow, getInsiderTrades, etc.) or other bundles. The $25 cost hint suggests premium usage but lacks context on cost-benefit or selection criteria.

runBundleSovereign (grade A)
Read-only · Idempotent

Sovereign Intelligence — ALL endpoints combined: full macro + geopolitical + company + crypto + hedge funds. $500.

Parameters (JSON Schema)
  regions    (optional, default all): Geopolitical regions to include in the sovereign intelligence sweep (all, or comma-separated list)
  companies  (optional): Optional list of specific companies to include deep-dive analysis for in the sovereign bundle
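The `regions` parameter accepts either the literal `all` or a comma-separated list, so an agent budgeting a $500 sweep might expand it first. A sketch of that expansion; the concrete region universe is assumed (borrowed from the runBundleGeopolitical default, since this server documents no canonical list):

```python
# Assumed region universe; only the runBundleGeopolitical default
# (middle-east,ukraine,taiwan) is visible in the schemas, so treat
# this list as a placeholder rather than the server's real set.
ALL_REGIONS = ["middle-east", "ukraine", "taiwan"]

def expand_regions(raw: str = "all") -> list[str]:
    """Resolve 'all' to the full region list; otherwise split the CSV."""
    if raw.strip().lower() == "all":
        return list(ALL_REGIONS)
    return [r.strip() for r in raw.split(",") if r.strip()]

regions = expand_regions("all")
```

Normalizing the argument before the call makes the scope of a high-cost sweep explicit in the agent's own logs.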
Behavior: 4/5

Annotations declare readOnly/idempotent/destructive=false, but the description adds critical behavioral context: the '$500' cost and the specific data domains returned (macro, geopolitical, crypto, hedge funds), which helps the agent assess invocation cost and payload scope.

Conciseness: 5/5

Single sentence efficiently delivers scope ('ALL endpoints'), specific data categories, and cost. Every element earns its place with zero redundancy or fluff.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a high-cost, complex aggregation tool without output schema, the description adequately covers the cost implication ($500) and data categories returned. It lacks specifics on return format or pagination, but the enumeration of included intelligence types provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for both optional parameters (regions, companies), the schema fully documents inputs. The description adds no parameter-specific guidance, meeting the baseline expectation for well-schematized tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the tool aggregates 'full macro + geopolitical + company + crypto + hedge funds' and explicitly positions it as 'ALL endpoints combined,' distinguishing it from narrower sibling bundles like runBundleMacroGlobal or runBundleGeopolitical. However, 'Sovereign Intelligence' is jargon-heavy without immediate definition.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'ALL endpoints combined' implicitly signals this is for comprehensive intelligence needs versus specific-domain bundles, but lacks explicit when-to-use guidance or comparisons to specific alternatives like runBundleCompanyDeep.

runBundleStarter (A)
Read-only · Idempotent

AI Agent Starter Pack — compliance + sentiment + signals + macro + news in one call. $0.50.

Parameters (JSON Schema)

- assets (optional; default: BTC,ETH,GOLD): Comma-separated asset tickers to include in sentiment and market data
- symbol (optional; default: XAUUSD): Primary trading symbol to generate signals for in the starter bundle (e.g. XAUUSD, BTCUSD)
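Since both parameters are optional, a defensive client might apply the schema defaults explicitly rather than rely on server-side behavior. A minimal sketch, assuming only the default values stated in the table above:

```python
# Sketch: apply the documented schema defaults for runBundleStarter
# client-side. Default values are taken from the parameter table above.
def starter_args(assets=None, symbol=None):
    return {
        "assets": assets or "BTC,ETH,GOLD",
        "symbol": symbol or "XAUUSD",
    }

print(starter_args())
print(starter_args(assets="BTC,ETH", symbol="BTCUSD"))
```

Whether the symbol must also appear among the assets is undocumented, so the sketch deliberately makes no assumption about that relationship.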
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive, and open-world characteristics. The description adds valuable behavioral context not found in annotations: the specific data categories being aggregated and the explicit pricing ($0.50). This cost disclosure is critical for agent decision-making. No contradictions with annotations are present.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—one sentence plus pricing—with zero redundancy. However, for a complex composite tool aggregating five distinct data domains, it may be overly terse; the lack of structural detail about the returned bundle leaves agents with minimal context beyond the schema.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating multiple data sources) and the absence of an output schema, the description provides insufficient detail about return value structure or composition. While it lists included data categories, it does not describe the format, nesting, or key identifiers of the returned bundle, which agents need to parse results effectively.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters ('assets' and 'symbol'). The description text adds no further semantic meaning or examples and does not clarify the relationship between the symbol and assets parameters (e.g. whether symbol must be included in assets). With full schema coverage, this meets the baseline expectation.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as an aggregated 'Starter Pack' that combines compliance, sentiment, signals, macro, and news data 'in one call.' This effectively distinguishes it from individual data-fetching siblings (e.g., getAiNews, getMacroData). However, it does not differentiate from other bundle siblings like runBundleCryptoAlpha or runBundleMacroGlobal, leaving ambiguity about which bundle to select.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through the feature list (compliance + sentiment + signals + macro + news) and explicit cost information ('$0.50'), which helps agents evaluate trade-offs. However, it lacks explicit when-to-use guidance versus alternative bundles or individual tools, and does not state prerequisites or exclusions.

screenSanctions (A)
Read-only · Idempotent

Screen entities against OFAC, EU, UN, UK sanctions lists. Returns match probability and risk level. $0.005.

Parameters (JSON Schema)

- name (required): Entity name, company, vessel, or wallet address to screen
- country (optional): ISO 2-letter country code to narrow the search (e.g. IR, RU, KP)
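For orientation, a JSON-RPC `tools/call` request for this tool over the server's Streamable HTTP transport might be shaped as follows. This is a sketch only: the entity name and country value are made up, the request `id` is illustrative, and any x402 payment headers required by the micropayment flow are omitted; only the tool name and argument keys come from the schema above.

```python
import json

# Hypothetical MCP tools/call payload for screenSanctions. Only the
# tool name and argument keys come from the schema; values are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "screenSanctions",
        "arguments": {
            "name": "Example Shipping Co.",  # entity, company, vessel, or wallet
            "country": "IR",                 # optional ISO 2-letter filter
        },
    },
}

body = json.dumps(request)
print(body)
```

The response is expected to carry the match probability and risk level named in the description, but since no output schema is published, clients should parse it defensively.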
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds valuable behavioral context not in annotations: it discloses the output format ('match probability and risk level') and pricing ('$0.005'), which helps agents understand cost and result structure. No contradictions with readOnly/idempotent hints.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently convey: (1) core function, (2) return values, (3) cost. Front-loaded with the specific verb and scope, zero redundancy or filler content.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, explicitly stating return values ('match probability and risk level') is essential and present. Annotations cover operational safety. Only minor gaps remain (e.g., rate limits, specific match thresholds).

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters (including examples like 'IR, RU, KP'). The description mentions 'entities' but adds minimal semantic detail beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action ('Screen'), target resource ('entities'), and exact data sources ('OFAC, EU, UN, UK sanctions lists'), distinguishing it from sibling compliance and intelligence tools.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the domain is clear (sanctions screening), there is no explicit guidance on when to use this versus 'checkAiCompliance' or other risk-assessment siblings, nor prerequisites like required data formats.
