
Server Details

Crypto trading intelligence. 67 MCP tools, own BTC/ETH/Kaspa nodes, x402 micropayments.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average score 3.6/5 across all 67 tools. Lowest: 2.1/5.

Server Coherence: B
Disambiguation: 2/5

Many tools have overlapping purposes and unclear boundaries, causing significant ambiguity. For example, 'get_intel_bundle', 'get_intel_snapshot', 'get_intel_coin', and 'get_intel_history' all provide intelligence data but with vague distinctions, while 'get_ml_signal', 'get_signal_feed', 'get_signal_feed_coin', and 'get_signals' all relate to trading signals without clear differentiation. This overlap makes it difficult for an agent to reliably select the correct tool.

Naming Consistency: 4/5

The naming follows a consistent 'verb_noun' pattern with 'get_' or 'run_' prefixes for most tools, such as 'get_anomalies', 'get_bittensor', and 'run_backtest'. There are minor deviations like 'ask_assistant' and 'scan_arbitrage', but overall, the naming is predictable and readable across the set.

Tool Count: 2/5

With 67 tools, the count is excessive for the server's purpose of crypto intelligence, leading to a heavy and overwhelming surface. This many tools suggests poor scoping, as many functions could be consolidated (e.g., multiple intelligence and signal tools), making it cumbersome for agents to navigate and increasing the risk of misselection.

Completeness: 5/5

The tool set provides comprehensive coverage of the crypto intelligence domain, including market data, on-chain metrics, trading signals, ML analysis, and backtesting. It supports full CRUD-like operations for data retrieval and analysis across various aspects like anomalies, funding, regimes, and strategies, with no obvious gaps that would hinder agent workflows.

Available Tools

67 tools
ask_assistant (AI Trading Assistant): C

AI crypto trading assistant — ask any market question, grounded in live data. $0.10

Parameters (JSON Schema)
  message (required): Your question about crypto markets
  portfolio (optional): Comma-separated coin list (e.g. BTC,ETH,SOL)
  risk_tolerance (optional): Risk level: low, medium, high. Default: medium
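The parameter table maps directly onto an MCP tools/call payload. A minimal sketch, assuming illustrative argument values (the question text, portfolio, and risk level below are invented; only the tool and parameter names come from the listing):

```python
import json

# Hypothetical arguments for ask_assistant, built from the parameter
# table above. Only "message" is required; "risk_tolerance" falls back
# to "medium" when omitted.
request = {
    "name": "ask_assistant",
    "arguments": {
        "message": "What is driving BTC volatility this week?",
        "portfolio": "BTC,ETH,SOL",   # comma-separated coin list
        "risk_tolerance": "low",      # one of: low, medium, high
    },
}

payload = json.dumps(request, indent=2)
```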
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It succeeds by revealing the tool costs '$0.10' and uses 'live data' (indicating real-time vs cached). However, it omits critical behavioral details like output format, whether it maintains conversation state, rate limits, or latency characteristics that would help the agent plan multi-turn interactions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at two clauses plus a cost indicator. The primary purpose ('AI crypto trading assistant') is front-loaded, and every word conveys meaningful constraints (scope: crypto, cost: $0.10, data: live). However, the '$0.10' fragments the sentence structure slightly without explanatory context (per query? per month?).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple flat schema (3 primitives) but 0% schema coverage and the existence of a similarly-named sibling tool, the description is insufficiently complete. It lacks parameter documentation and sibling differentiation that would be necessary for an agent to confidently invoke this tool over 'ask_trading_assistant' or understand how to populate the optional parameters effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, requiring the description to compensate. While 'ask any market question' semantically maps to the 'message' parameter, the description completely ignores 'portfolio' (string) and 'risk_tolerance' (string with default 'medium'), leaving their purposes, expected formats, and interaction effects undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the tool as an 'AI crypto trading assistant' for asking 'market question[s]' with live data grounding, which provides a clear verb-resource pair. However, it fails to distinguish from the sibling tool 'ask_trading_assistant', which has nearly identical naming and likely overlapping functionality, leaving the agent without criteria to select between them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus the similar 'ask_trading_assistant' or other analysis tools like 'analyze_news_sentiment' or 'fetch_crypto_prices'. The '$0.10' implies a cost structure but doesn't state prerequisites, required context, or when to prefer this conversational interface over direct data fetch tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_anomalies (Market Anomalies): B
Read-only, Idempotent

Get ML-detected market anomalies — volume spikes, correlation breaks, structural changes. $0.005

Parameters (JSON Schema)
  hours (optional): Lookback hours (1-24)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable cost transparency ('$0.005') and specifies anomaly subtypes, but omits safety profile (read-only status), return format structure, time-range limitations, or pagination behavior that agents need to invoke the tool safely.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently front-loaded with the core operation, followed by specific examples and cost information. However, the '$0.005' suffix lacks units or context (cost per call? credit consumption?), slightly diminishing clarity despite the brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the zero-parameter simplicity, the description adequately covers the invocation context but lacks output structure description (critical since no output schema exists). The failure to address the relationship with 'detect_anomalies' leaves a significant gap in contextual completeness for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters, establishing a baseline score of 4. The description appropriately does not invent parameter semantics where none exist, and the em-dash enumeration of anomaly types serves as behavioral context rather than parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Get') and resource ('ML-detected market anomalies'), and enumerates specific anomaly types (volume spikes, correlation breaks, structural changes). However, it does not explicitly distinguish from the sibling tool 'detect_anomalies', which creates potential ambiguity about whether this retrieves cached results or triggers detection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, particularly the sibling 'detect_anomalies'. The critical distinction between retrieving pre-computed ML anomalies versus running real-time detection is not clarified, leaving agents vulnerable to selecting the wrong tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_anomaly_history (Anomaly History Log): A
Read-only, Idempotent

ML-detected market anomaly log — per anomaly: ts, symbol, score, severity, features dict, natural-language description. 340+ anomalies over 48 days. $0.03

Parameters (JSON Schema)
  days (optional): Look-back days (1-90)
  symbol (optional): Filter to single coin
  severity (optional): Min severity: low, medium, high, critical
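All three parameters are optional filters, so a call can narrow the anomaly log along any combination of them. A sketch with invented filter values, using only the names and bounds documented above:

```python
# Hypothetical arguments for get_anomaly_history; every field is an
# optional filter per the table above. Values are illustrative.
args = {
    "days": 30,          # look-back window in days (schema allows 1-90)
    "symbol": "BTC",     # restrict results to a single coin
    "severity": "high",  # minimum severity: low, medium, high, critical
}

# A client might validate the documented bounds before calling:
assert 1 <= args["days"] <= 90
assert args["severity"] in {"low", "medium", "high", "critical"}
```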
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide read-only, open-world, idempotent, and non-destructive hints. The description adds valuable context about the dataset size (340+ anomalies over 48 days) and cost ($0.03), which aren't covered by annotations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise and front-loaded: first sentence defines purpose and output structure, second provides dataset context and cost. Every sentence adds essential information with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the comprehensive annotations (read-only, open-world, etc.) and full parameter documentation in the schema, the description provides good contextual completeness. It adds dataset scope and cost information. The main gap is lack of output format details since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving ML-detected market anomaly logs with specific fields (timestamp, symbol, score, severity, features dict, description). It distinguishes from siblings by specifying it's a historical log rather than real-time detection or other data types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the mention of '340+ anomalies over 48 days' and pricing, but doesn't explicitly state when to use this versus alternatives like 'get_anomalies' or 'get_signal_feed'. No explicit when-not-to-use guidance or sibling tool comparisons are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bittensor (Bittensor Subnet): A
Read-only, Idempotent

Get Bittensor subnet metagraph — neuron incentives, emissions, stake distribution. $0.01

Parameters (JSON Schema)
  netuid (optional): Subnet ID (1-256)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only and idempotent characteristics; the description adds valuable behavioral context by disclosing the $0.01 cost, which is critical given openWorldHint=true. It also clarifies what specific data is retrieved (incentives, emissions, stake).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence with no waste. The em-dash efficiently elaborates on metagraph contents, and the cost suffix is high-value information. Main action and resource are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema (optional, well-documented), good annotations, and lack of output schema, the description adequately covers the tool's purpose, cost, and return data categories (incentives, emissions, stake).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Subnet ID (1-256)'), the baseline score is 3. The description implies the subnet context but does not add explicit parameter semantics or usage guidance for 'netuid' beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Get'), resource ('Bittensor subnet metagraph'), and distinguishes itself from siblings like `get_subtensor` by specifying it returns metagraph data (neuron incentives, emissions, stake distribution) rather than raw blockchain data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes cost information ('$0.01') but provides no guidance on when to use this tool versus siblings like `get_subtensor` or `get_network_stats`, nor does it mention prerequisites or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bittensor_epoch (Bittensor Epoch): A
Read-only, Idempotent

Current Bittensor epoch state from our own Subtensor lite node — block_number, finalized_block, epoch_number (block//360), blocks_into_epoch, blocks_until_next, progress_pct, peer_count, is_syncing. $0.003

Parameters (JSON Schema)
  No parameters
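The description's "epoch_number (block//360)" note implies straightforward arithmetic over a fixed 360-block epoch. A sketch of the derived fields, assuming that fixed length (field names mirror the documented output; the block number is invented):

```python
# Epoch arithmetic implied by the description above, assuming a fixed
# 360-block epoch. This reconstructs only the derived fields; live
# fields like peer_count and is_syncing come from the node itself.
EPOCH_LEN = 360

def epoch_state(block_number: int) -> dict:
    blocks_into_epoch = block_number % EPOCH_LEN
    return {
        "block_number": block_number,
        "epoch_number": block_number // EPOCH_LEN,
        "blocks_into_epoch": blocks_into_epoch,
        "blocks_until_next": EPOCH_LEN - blocks_into_epoch,
        "progress_pct": round(100 * blocks_into_epoch / EPOCH_LEN, 2),
    }

state = epoch_state(3_600_123)  # illustrative block height
```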

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds valuable context by specifying the data source (Subtensor lite node) and listing the exact fields returned, which helps the agent understand what to expect. It also includes cost information ($0.003), which is not covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that efficiently communicates the tool's purpose, data source, returned fields, and cost. It is front-loaded with the core functionality and includes no redundant or verbose elements, making every word earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is largely complete. It specifies the data source, lists returned fields, and includes cost, which are helpful for an agent. However, it does not explain the significance of fields like 'epoch_number (block//360)' or provide examples of typical values, leaving minor gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the returned data. Baseline for 0 parameters is 4, as the description correctly omits unnecessary parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the current Bittensor epoch state from a specific source (Subtensor lite node) and lists the exact data fields returned (block_number, finalized_block, etc.). It distinguishes from sibling tools like get_bittensor and get_bittensor_validators by focusing specifically on epoch metrics rather than general network data or validator details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the data source (Subtensor lite node) and listing the returned fields, which helps identify when this tool is appropriate. However, it does not explicitly state when to use this tool versus alternatives like get_bittensor or get_subtensor, nor does it provide exclusion criteria or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bittensor_validators (Cross-Subnet Validators): A
Read-only, Idempotent

Active validators across all tracked Bittensor subnets. Per subnet: n_validators_active, total_validator_stake, top validators by validator_trust. Identifies voting-power concentration. $0.01

Parameters (JSON Schema)
  No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide strong behavioral hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), covering safety and idempotency. The description adds valuable context beyond annotations by specifying the scope ('across all tracked Bittensor subnets'), the specific metrics returned, and the analytical purpose ('identifies voting-power concentration'). It also includes cost information ('$0.01'), which is not captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured in a single sentence that efficiently communicates scope, metrics, and analytical value. Every element serves a purpose: 'Active validators across all tracked Bittensor subnets' defines scope, 'Per subnet: n_validators_active, total_validator_stake, top validators by validator_trust' specifies output, 'Identifies voting-power concentration' adds analytical context, and '$0.01' provides cost information. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (analyzing validator distribution across multiple subnets), the description provides excellent context about what data is returned and its analytical purpose. Annotations cover behavioral safety, and with 0 parameters, input documentation is complete. The main gap is the lack of an output schema, so the description doesn't specify return format details like data structure or pagination, but it compensates well by listing key metrics and analytical insight.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on the tool's output semantics rather than inputs, listing the specific metrics returned (n_validators_active, total_validator_stake, top validators by validator_trust) and the analytical insight provided (voting-power concentration). This adds meaningful context beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving active validators across all tracked Bittensor subnets, with specific metrics listed (n_validators_active, total_validator_stake, top validators by validator_trust) and analysis of voting-power concentration. It distinguishes itself from sibling tools like get_bittensor and get_subnets by focusing specifically on validator data rather than general Bittensor information or subnet listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'active validators across all tracked Bittensor subnets' and 'identifies voting-power concentration,' suggesting it should be used when validator distribution analysis is needed. However, it provides no explicit guidance on when to use this tool versus alternatives like get_subnet_leaderboard or get_bittensor_epoch, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_blocks (Block Analytics): A
Read-only, Idempotent

Get per-block Bitcoin analytics — fee distribution, SegWit adoption, UTXO changes. Max 144 blocks. $0.01

Parameters (JSON Schema)
  blocks (optional): Number of recent blocks to analyze (1-144)
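Since the description and schema both pin the range to 1-144 blocks, a client can guard requests before spending the call fee. A minimal sketch of such a guard (the function name is hypothetical):

```python
# Hypothetical client-side guard for get_blocks' documented 1-144
# range, so a request never exceeds the "Max 144 blocks" limit.
def clamp_blocks(requested: int) -> int:
    """Clamp a requested block count into the 1-144 window."""
    return max(1, min(144, requested))
```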
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly/idempotent), so description adds value by specifying return data categories (fee distribution, SegWit, UTXO) and operational constraint (144 block limit), plus unusual but useful cost hint ($0.01).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: single sentence with em-dash separating action from return data, followed by constraint and cost. Every element earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description compensates by listing three specific analytics categories returned. Sufficient for a single-parameter read tool; would need structural detail if output were complex nested objects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete 'blocks' parameter documentation. Description reinforces the 144-block limit but does not add syntax, format details, or semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'per-block Bitcoin analytics' + specific metrics 'fee distribution, SegWit adoption, UTXO changes' clearly distinguishes this from siblings like get_btc_onchain (general) and get_utxo_set (subset).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit guidance via specific metrics and explicit constraint 'Max 144 blocks', but lacks explicit when-to-use comparisons versus alternatives like get_chain_activity or get_btc_onchain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_btc_onchain (BTC On-Chain): A
Read-only, Idempotent

Get BTC on-chain metrics — hashrate, difficulty, mempool, whale activity from own full node. $0.01

Parameters (JSON Schema)
  hours (optional): Lookback hours (1-168)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations establish the operation is read-only and idempotent, the description adds critical behavioral context: the $0.01 cost and the data provenance ('from own full node'). These details help the agent understand operational constraints and data source reliability not covered by structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence using an em-dash to separate the operation from its contents. Every element—including the cost and full-node provenance—earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter with default), the presence of safety annotations, and no output schema, the description adequately covers the essential characteristics: what data is returned, where it comes from, and what it costs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Lookback hours (1-168)'), the schema fully documents the single optional parameter. The description mentions no parameters, but the baseline score of 3 is appropriate as the schema carries the full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('BTC on-chain metrics') and enumerates exact metrics returned (hashrate, difficulty, mempool, whale activity). It distinguishes from siblings like get_mempool and get_whale_flow by implying a comprehensive data set 'from own full node,' though it could more explicitly contrast with those specific alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes pricing information ('$0.01') but provides no guidance on when to use this aggregated view versus specialized siblings like get_mempool or get_whale_flow, nor does it mention prerequisites or filtering strategies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_chain_activity (Multi-Chain Activity): B
Read-only, Idempotent

Get multi-chain activity overview — fees, TVL, active addresses across L1s and L2s. FREE.

Parameters (JSON Schema)
chain (optional): Filter by chain name

Output Schema (JSON Schema)
result (required)
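For readers unfamiliar with how such a tool is invoked, a minimal sketch of an MCP tools/call request follows. The build_tool_call helper is hypothetical, but the envelope shape follows JSON-RPC 2.0 as used by MCP clients:

```python
import json

def build_tool_call(name, arguments=None, request_id=1):
    # Assemble a JSON-RPC 2.0 "tools/call" request as sent by MCP clients.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# 'chain' is this tool's only argument and is optional; omit it for all chains.
payload = build_tool_call("get_chain_activity", {"chain": "Arbitrum"})
print(json.dumps(payload, indent=2))
```

Omitting the arguments dict entirely yields an empty `arguments` object, which is the unfiltered call.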
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and openWorldHint=true. The description adds the cost trait ('FREE') and specifies what data dimensions are returned (fees, TVL, active addresses), which provides useful context beyond the annotations. However, it omits rate limiting, caching behavior, or pagination details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action and scope. Every element earns its place: the em-dash concisely lists the specific metrics, and 'FREE' provides immediate cost context without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (per context signals) and simple single-parameter input with complete annotations, the description adequately covers what the tool returns. It appropriately omits return value details (covered by output schema) but could have mentioned the optional filtering behavior for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'chain' parameter is fully documented in the schema), the baseline score is 3. The description adds no additional parameter semantics, but the schema already clarifies that it is an optional filter for chain names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Get') and specific resource ('multi-chain activity overview'), and distinguishes scope with specific metrics ('fees, TVL, active addresses across L1s and L2s'). This differentiates it from siblings like get_gas_prices or get_network_stats by specifying exactly which cross-chain metrics are returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like get_network_stats or get_anomalies. There are no 'use this when' conditions, prerequisites, or warnings about overlapping functionality with the extensive sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_chart_vision (Chart Vision AI): Grade A
Read-only, Idempotent

Get AI chart vision analysis — support/resistance, pattern detection, trend strength. $0.08

Parameters (JSON Schema)
coin (optional, default BTC): Coin symbol (BTC, ETH, SOL, TAO, KAS, XLM)
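The enumerated symbols and the BTC default make client-side validation straightforward. A minimal sketch, assuming the schema's symbol list is exhaustive (resolve_coin is a hypothetical helper):

```python
# Valid symbols enumerated in the get_chart_vision schema description.
VALID_COINS = {"BTC", "ETH", "SOL", "TAO", "KAS", "XLM"}

def resolve_coin(coin=None, default="BTC"):
    # Normalize case and fall back to the schema default when unset.
    symbol = (coin or default).upper()
    if symbol not in VALID_COINS:
        raise ValueError(f"unsupported coin: {symbol}")
    return symbol
```

Validating before the call avoids paying $0.08 for a request the server will reject.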
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical cost information ($0.08) absent from annotations. Specifies analysis scope beyond generic 'vision' label. Annotations already cover safety profile (readOnly, idempotent), so description's value lies in operational cost disclosure and capability specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: action statement, capability list via em-dash, cost indicator. Every element earns its place. Front-loaded with verb 'Get' and no filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter read-only tool with 100% schema coverage and safety annotations. However, for an AI analysis tool with no output schema, it lacks a description of the return structure (JSON, text summary, or image coordinates) and confidence levels.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear coin symbols enumerated. Description adds no parameter context, which is acceptable baseline given the schema documents the default value and valid options. No mention of coin parameter behavior in description text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action (AI chart vision analysis) and concrete capabilities (support/resistance, pattern detection, trend strength). It distinguishes itself from numerical signal siblings by emphasizing visual pattern recognition, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to prefer this over get_signals, get_ml_signal, or get_ohlcv. No mention of prerequisites (e.g., if specific chart timeframes are required) or cost considerations beyond the price tag.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_coin_days (Coin Days Destroyed): Grade A
Read-only, Idempotent

Get BTC Coin Days Destroyed — smart money movement indicator from own full node. 24h history. $0.01

Parameters (JSON Schema)
hours (optional): Lookback hours (1-168)
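A caller can enforce the schema's 1-168 range and the description's implied 24h default before spending the $0.01. A minimal sketch (resolve_hours is a hypothetical helper):

```python
def resolve_hours(hours=None, default=24, lo=1, hi=168):
    # Fall back to the description's implied 24h window when unset,
    # and reject values outside the schema's documented 1-168 range.
    if hours is None:
        return default
    if not lo <= hours <= hi:
        raise ValueError(f"hours must be within {lo}-{hi}, got {hours}")
    return hours
```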
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations: data provenance ('from own full node'), implied cost ('$0.01'), and default temporal scope ('24h history'). Annotations already confirm read-only/idempotent safety, so the description's value lies in source transparency and operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently front-loaded with the core operation. Fragmentary trailing elements ('24h history. $0.01') are slightly cryptic (cost vs. data value ambiguity) but do not significantly impede parsing. Generally dense with information per word.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a focused, single-parameter read tool. The description adequately covers data source, metric interpretation, and cost despite lacking an output schema. Could improve by clarifying return format or the exact meaning of '$0.01'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (parameter 'hours' fully documented as 'Lookback hours (1-168)'), the description meets the baseline expectation. The mention of '24h history' reinforces the default value but adds no semantic depth about valid ranges or units beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource pairing ('Get BTC Coin Days Destroyed') with helpful context that it's a 'smart money movement indicator.' However, it assumes the agent knows what Coin Days Destroyed measures (coin age weighted volume) without brief conceptual explanation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through specific metric naming (distinct from generic siblings like get_btc_onchain), but lacks explicit guidance on when to choose this over other on-chain tools or what analysis questions it answers (e.g., 'use to detect long-term holder selling').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_consensus (Agent Consensus): Grade B
Read-only, Idempotent

Get accuracy-weighted consensus from 4 AI prediction agents for 34 coins. $0.08

Parameters (JSON Schema)
coin (optional): Filter by coin (BTC, ETH, SOL)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, description carries full behavioral burden. It discloses the aggregation method (accuracy-weighted) and source (4 agents), and implies cost ($0.08), but fails to mention rate limits, authentication requirements, caching behavior, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately terse and front-loaded with core action. Minor deduction because '$0.08' hangs without explicit 'Cost:' or 'Price:' labeling, creating slight ambiguity about whether this is cost, confidence score, or something else.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool whose only input is an optional coin filter, the description adequately explains the retrieval logic. However, lacking an output schema, it should at least hint at the return format (e.g., a list of predictions with scores) rather than leaving it entirely unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single optional `coin` filter is fully documented in the schema ('Filter by coin (BTC, ETH, SOL)'). The description does not repeat it, but its stated scope (34 coins) conveys the breadth of coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' and clearly identifies the resource (accuracy-weighted consensus) with specific implementation details (4 AI prediction agents, 34 coins). The '$0.08' appears to indicate cost but lacks context. However, it only partially distinguishes from prediction siblings like get_ml_signal or get_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus similar analytics siblings (get_signals, get_ml_signal, get_decision, get_conviction) or which coins the 34-coin coverage includes. No prerequisites or conditions mentioned despite the apparent cost implication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_conviction (Conviction Score): Grade C
Read-only, Idempotent

Get conviction score (0-100) fusing consensus, extremity, confluence, ML anomaly. $0.05

Parameters (JSON Schema)
coin (optional, default BTC): Coin symbol (BTC, ETH, SOL)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially succeeds by disclosing the score range (0-100), the fusion methodology, and implied cost ('$0.05'), but omits critical safety information (idempotency, cache behavior, failure modes) required for a data-fusion tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence structure is appropriately dense and front-loaded with the core action. However, appending '$0.05' at the end is ambiguous (cost vs. value) and slightly disrupts clarity without context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero schema descriptions, no annotations, and no output schema, the description inadequately covers the interface contract. While it explains the domain concept (conviction scoring), it fails to document the input parameter or expected return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It completely fails to mention the `coin` parameter, its default value ('BTC'), or that it is optional. The agent must infer inputs solely from the schema structure without descriptive assistance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the conviction score (0-100) and its composite factors (consensus, extremity, confluence, ML anomaly). However, it fails to distinguish this tool from the sibling `score_conviction`, which likely performs a similar function, creating ambiguity about which to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus siblings like `score_conviction`, `check_agent_consensus`, or `predict_ml_signal`. The '$0.05' token implies a cost constraint but does not constitute actionable usage guidance or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_correlations (Asset Correlations): Grade A
Read-only, Idempotent

Get cross-asset correlation matrix — BTC vs ETH, SOL, Gold, DXY over 7d and 30d windows. $0.01

Parameters (JSON Schema)
coins (optional, default BTC,ETH,SOL): Comma-separated coins (e.g. BTC,ETH,SOL)
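The comma-separated format invites malformed input (stray spaces, lowercase symbols). A hedged sketch of client-side normalization, with parse_coins as a hypothetical helper mirroring the schema default:

```python
def parse_coins(coins="BTC,ETH,SOL"):
    # Split on commas, trim whitespace, uppercase, and drop empty entries.
    symbols = [c.strip().upper() for c in coins.split(",") if c.strip()]
    if not symbols:
        raise ValueError("at least one coin symbol is required")
    return symbols
```

The default argument matches the schema's documented default of BTC,ETH,SOL.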
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical cost information ('$0.01') and specific coverage details (Gold/DXY assets, fixed 7d/30d windows) not present in annotations. Annotations already cover safety (readOnly/idempotent), so the cost disclosure is valuable additive context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single compact sentence with zero waste. Information is front-loaded (action + resource), with specific parameters (assets, windows) and cost efficiently packed using em-dash and comma separators. Every token earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple read-only data retrieval pattern and strong annotations, the description provides sufficient context. The $0.01 cost disclosure is essential for agent decision-making. Minor gap: doesn't describe matrix output format, though this is acceptable without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with adequate parameter description. The tool description adds semantic value by illustrating expected coin values through concrete examples (BTC, ETH, SOL, Gold, DXY), hinting at support for both crypto and traditional assets beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Get' + resource 'correlation matrix', with explicit asset examples (BTC, ETH, SOL, Gold, DXY) and time windows (7d, 30d). The inclusion of traditional assets (Gold, DXY) alongside crypto effectively distinguishes this from pure crypto tools like get_prices or get_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear usage context implied through concrete asset examples and time window specifications, showing this is for cross-asset correlation analysis. However, lacks explicit contrast with sibling get_macro_corr which could cause selection ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_crypto_news (Crypto News): Grade A
Read-only, Idempotent

Get crypto news with AI sentiment scoring and coin tagging. $0.01

Parameters (JSON Schema)
coin (optional): Filter by coin symbol
hours (optional): Lookback window in hours (1-168)
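Since both filters are optional, a client should omit unset keys rather than send nulls. A minimal sketch (news_arguments is a hypothetical helper; the 1-168 bound comes from the schema):

```python
def news_arguments(coin=None, hours=None):
    # Build the arguments dict, including only the filters the caller set.
    args = {}
    if coin is not None:
        args["coin"] = coin.upper()
    if hours is not None:
        if not 1 <= hours <= 168:
            raise ValueError("hours must be within 1-168")
        args["hours"] = hours
    return args
```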
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (read-only, idempotent), but description adds critical behavioral context: explicit cost ($0.01) and AI processing details (sentiment scoring, coin tagging) that explain what the tool does beyond raw data retrieval.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-part structure: functional description followed by cost. Every element earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple read-only tool with rich annotations. Covers purpose, cost, and AI features. Absence of output schema means return format is unspecified, but this is acceptable given the low complexity (2 optional params) and clear functional scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear semantics for 'coin' and 'hours'. Description mentions 'coin tagging' which loosely relates to the coin parameter but adds no specific syntax, format constraints, or usage notes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb (Get) and resource (crypto news) with specific differentiators (AI sentiment scoring, coin tagging). However, it does not explicitly distinguish from sibling tool `get_news_sentiment`, which may cause confusion given the overlapping sentiment functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides cost information ($0.01) which serves as a usage constraint, but lacks explicit when-to-use guidance, prerequisites, or comparison against `get_news_sentiment` alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_decision_pack (Decision Pack): Grade A
Read-only, Idempotent

Agent Decision Pack — price, conviction, regime, funding, derivatives, mempool, whales, ML signal, recommendation in one call. $0.15

Parameters (JSON Schema)
coin (optional, default BTC): Coin symbol (BTC, ETH, SOL, TAO, KAS, XLM)
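The 'in one call' framing can be made concrete with the per-call prices quoted elsewhere in this review; the choice of sibling tools below is illustrative, not a statement of what the pack actually aggregates:

```python
# Per-call prices quoted in the sibling tool descriptions in this review (USD).
individual_calls = {
    "get_conviction": 0.05,
    "get_chart_vision": 0.08,
    "get_consensus": 0.08,
    "get_coin_days": 0.01,
}
bundle_price = 0.15  # get_decision_pack

separate_total = round(sum(individual_calls.values()), 2)
savings = round(separate_total - bundle_price, 2)
print(f"separate: ${separate_total}, bundle: ${bundle_price}, saved: ${savings}")
```

Even this partial basket of siblings costs more than the bundle, which supports the review's cost-efficiency reading.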
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds pricing information ($0.15) not present in annotations, which is essential for an agent's cost-benefit analysis. Clarifies this is a composite endpoint aggregating multiple data streams (conviction, mempool, whale data, etc.) which explains why it might be slower/more expensive than single-metric siblings. No contradictions with readOnlyHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every element earns its place: product name, complete component list, efficiency note, and pricing. Front-loaded structure with zero waste. Single sentence delivers maximum information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description compensates well by enumerating the nine specific data components returned (price, ML signal, recommendation, etc.). Cost disclosure is critical for completeness. Minor gap: could specify the format/structure of the 'recommendation' field.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (coin parameter includes symbol list). Description does not add parameter semantics, but baseline 3 is appropriate since the schema fully documents the single optional parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Lists specific components included (price, conviction, regime, funding, derivatives, mempool, whales, ML signal, recommendation) which clearly distinguishes this aggregate tool from individual siblings like get_conviction or get_whale_flow. Lacks a specific verb ('Returns' or 'Fetches') but effectively communicates it is a bundled data product via the component enumeration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Includes critical cost constraint '$0.15' and implies efficiency benefit via 'in one call', hinting at use for comprehensive analysis versus individual calls. However, lacks explicit guidance on when to choose this over granular alternatives (e.g., 'use when you need aggregated decision signals').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_decisions (Trading Decisions): Grade B
Read-only, Idempotent

Get composite AI trading decisions — ML + regime + conviction + chart vision. $0.08

Parameters (JSON Schema)
coin (optional): Filter by coin
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden. It successfully discloses cost ($0.08) and internal data sources (ML, regime, chart vision), but fails to clarify safety profile (read-only vs destructive), rate limits, or what the returned decisions look like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise two-part structure with zero waste. Front-loaded with action ('Get'), immediately qualified by composition details after the em-dash, and ends with critical cost information. Every token earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-optional-parameter tool but incomplete given the implied complexity of AI trading decisions. Missing output description and safety context (especially important given the explicit cost disclosure suggests this is a premium/gated operation). The cost warning adds necessary value, but more behavioral detail is needed for a complete picture.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single optional `coin` parameter is documented in the schema ('Filter by coin'). The description adds no further parameter clarification, which is acceptable for an input this simple.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Get) and resource (composite AI trading decisions) and distinguishes from siblings by listing the four specific signal inputs (ML, regime, conviction, chart vision). However, 'decisions' assumes domain knowledge about trading signals without clarifying the output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when-to-use guidance or comparison to siblings like 'analyze_decisions' or 'generate_decision_pack'. While the component list implies use-cases requiring multi-factor signals, it doesn't clarify why one would 'get' versus 'analyze' or 'generate' decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_defi_yields (DeFi Yields): Grade A
Read-only, Idempotent

Get DeFi yields across 200 pools — Aave, Compound, Lido, Pendle. Filter by chain, protocol, min APY. $0.005

Parameters (JSON Schema)
chain (optional): Filter by chain: Ethereum, Base, Arbitrum
min_apy (optional): Minimum APY filter
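The chain and min_apy filters compose as simple conjunctive predicates. A sketch of the equivalent client-side filtering over hypothetical pool records (field names are assumptions, not the tool's documented output format):

```python
def filter_yields(pools, chain=None, min_apy=None):
    # Apply each documented filter only when the caller sets it.
    if chain is not None:
        pools = [p for p in pools if p["chain"] == chain]
    if min_apy is not None:
        pools = [p for p in pools if p["apy"] >= min_apy]
    return pools

# Illustrative records; real pool data would come from the tool response.
sample = [
    {"pool": "Aave USDC", "chain": "Ethereum", "apy": 4.2},
    {"pool": "Lido stETH", "chain": "Ethereum", "apy": 3.1},
    {"pool": "Aave USDC", "chain": "Base", "apy": 5.0},
]
eth_high_yield = filter_yields(sample, chain="Ethereum", min_apy=4.0)
```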
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent hints. Description adds valuable behavioral context beyond annotations: cost per call ('$0.005'), specific data sources/coverage (200 pools, named protocols), and filtering capabilities. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient structure: action verb → scope/examples → filters → cost. Every element earns its place. Front-loaded with the core operation. Specific protocols listed parenthetically with em-dash. No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a read-only data retrieval tool with 2 parameters. Covers data domain, cost, and filter capabilities. With annotations handling safety profile and schema covering inputs, only minor gaps remain (e.g., return format details, which are acceptable without output schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description mentions 'Filter by chain, protocol, min APY' but the schema only defines 'chain' and 'min_apy' parameters. Mentioning a 'protocol' filter that doesn't exist in the schema creates confusion and potential hallucination risk, reducing the score despite mentioning valid filters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') and resource ('DeFi yields') with scope ('200 pools'). Lists specific protocols (Aave, Compound, Lido, Pendle) that distinguish it from generic financial siblings like get_prices or get_signals. However, lacks explicit differentiation statement comparing to similar tools like get_strategies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context by listing available filters ('Filter by chain, protocol, min APY'), suggesting when to use it (when seeking yield data with specific constraints). However, lacks explicit 'when not to use' guidance or named alternatives for different yield-seeking scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_eth_beacon (ETH Beacon Chain): Grade A
Read-only, Idempotent

Get Ethereum beacon chain status from own Nimbus full node — slot, epoch, finality gap, peers. $0.002

Parameters (JSON Schema)
Name | Required | Description | Default
detail | No | Include validator stats
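
A card like this describes a tool a client invokes through MCP's tools/call method. As a minimal sketch, assuming the standard JSON-RPC 2.0 envelope that MCP uses (the argument value here is purely illustrative):

```python
import json

def build_tools_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical call requesting validator stats via the 'detail' flag.
request = build_tools_call("get_eth_beacon", {"detail": True})
print(request)
```

The same envelope applies to every tool on this page; only the name and arguments change.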
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only/idempotent safety. Description adds valuable behavioral context beyond annotations: data source specificity ('own Nimbus full node'), cost transparency ('$0.002'), and return payload enumeration (slot, epoch, finality gap, peers).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded verb, specific resource identification, source clarification, return fields, and cost—every element earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description enumerates return fields (including conditional validator stats). Combined with readOnly annotations and cost disclosure, this provides sufficient context for invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the 'detail' parameter is fully documented as 'Include validator stats'). Description does not mention the parameter, but baseline 3 applies when schema carries full descriptive load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action 'Get' with clear resource 'Ethereum beacon chain status' and distinguishes from siblings by specifying 'own Nimbus full node' (local infrastructure vs external APIs) and enumerating returned fields (slot, epoch, finality gap, peers).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like get_blocks, get_network_stats, or get_btc_onchain. Only describes what it does, not selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_exchange_flows (Exchange Flows): Grade B
Read-only, Idempotent

Get exchange inflow/outflow data — net flows, exchange reserves, accumulation signals. $0.03

Parameters (JSON Schema)
Name | Required | Description | Default
hours | No | Lookback hours (1-168)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses cost ('$0.03') and data types returned, but omits rate limits, data freshness, supported exchanges, or authentication requirements. No mention of whether this is real-time or historical data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single-line description with metric list after em-dash. '$0.03' pricing is valuable information preserved efficiently. No redundant filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters and no output schema, tool relies heavily on description. While it identifies the data domain, it lacks critical context: no time range specification, no exchange coverage list, no output format description, and no sample use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters in schema, which establishes baseline of 4. Description appropriately does not fabricate parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with resource 'exchange inflow/outflow data' and enumerates specific metrics (net flows, reserves, accumulation signals). However, it does not differentiate from sibling tool 'get_whale_flow' which likely tracks similar movement data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus alternatives like 'get_whale_flow' or 'get_onchain_btc'. No mention of prerequisites, time range limitations, or specific use cases for exchange flow analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_fear_greed (Fear & Greed Index): Grade B
Read-only, Idempotent

Get Crypto Fear & Greed Index (0-100) with 7-day history and z-score. FREE.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | History days (1-30)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes
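
The z-score mentioned in the description presumably standardizes the latest reading against the recent window; the exact window and estimator are assumptions here, not documented by the tool. A sketch:

```python
from statistics import mean, pstdev

def z_score(history: list[float]) -> float:
    """Z-score of the latest index value against the whole window."""
    mu = mean(history)
    sigma = pstdev(history)  # population std dev; choice of estimator is an assumption
    return 0.0 if sigma == 0 else (history[-1] - mu) / sigma

# Hypothetical 7-day Fear & Greed readings (0-100 scale).
week = [40, 42, 45, 50, 55, 60, 70]
print(round(z_score(week), 2))
```

A strongly positive value flags that today's reading sits well above its own recent range, which is the kind of signal an agent would extract from this tool's output.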
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds 'FREE' (cost disclosure) and specifies return data features (z-score, 0-100 range) not covered by annotations. However, with openWorldHint=true present, it fails to disclose external API dependencies, rate limits, or data freshness/real-time constraints that would be critical for a web-facing data tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the core action. While efficient, the trailing 'FREE.' adds limited semantic value for tool selection and could be replaced with more relevant behavioral constraints or usage guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter read operation with rich annotations and an output schema present, the description adequately covers what data is returned (index value, history, z-score). It appropriately relies on the output schema for structural details rather than describing them verbally.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'days' parameter, establishing a baseline of 3. The description mentions '7-day history' implying the default value, but adds no semantic elaboration on the parameter's purpose beyond the schema's 'History days (1-30)' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies a specific resource (Crypto Fear & Greed Index) and key output attributes (0-100 scale, 7-day history, z-score). However, it uses the generic verb 'Get' rather than 'Retrieve' or 'Fetch', and does not explicitly differentiate this sentiment metric from sibling market data tools like get_market_score or get_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling data tools (e.g., get_crypto_news, get_market_score, get_sentiment). It lacks prerequisites, use-case scenarios, or exclusions that would help an agent select this over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_history (Funding Rate History): Grade A
Read-only, Idempotent

Funding rate history for any perpetual-futures pair — up to 2 years. Per bucket: rate, predicted_rate, open_interest, annualized_pct, sample count. Plus stats (min/max/avg/current). For carry-trade research and derivatives backtesting. $0.05

Parameters (JSON Schema)
Name | Required | Description | Default
coin | Yes | Coin symbol (BTC, ETH, SOL)
days | No | Look-back days (1-730)
interval | No | Bucket size: 1h, 4h, 8h, 1d | 8h
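
The annualized_pct field can be reproduced from a per-bucket rate. As a sketch, assuming simple (non-compounded) annualization over 8-hour funding intervals — the tool's actual convention is not documented:

```python
def annualize_funding(rate_per_bucket: float, bucket_hours: float = 8.0) -> float:
    """Simple annualization of a periodic funding rate, returned in percent."""
    buckets_per_year = 365 * 24 / bucket_hours
    return rate_per_bucket * buckets_per_year * 100

# A 0.01% funding rate every 8 hours works out to roughly 10.95% annualized.
print(round(annualize_funding(0.0001), 2))
```

This is the arithmetic a carry-trade backtest would apply bucket by bucket, which is why the per-bucket rate and annualized_pct appearing together is useful.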
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (readOnlyHint, openWorldHint, idempotentHint, destructiveHint), but the description adds valuable context beyond that: it specifies the time range ('up to 2 years'), includes cost information ('$0.05'), and hints at data granularity ('Per bucket'). This enhances transparency without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, with three sentences that efficiently convey purpose, data details, usage context, and cost. Every sentence adds value without redundancy, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (historical data retrieval with multiple parameters), annotations provide good behavioral coverage, but there is no output schema. The description compensates by detailing the output structure ('Per bucket: rate, predicted_rate... Plus stats') and use cases, making it fairly complete. A slight gap remains in not explicitly describing the return format or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters (coin, days, interval). The description does not add any parameter-specific details beyond what the schema provides, such as explaining how parameters affect the output. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Funding rate history for any perpetual-futures pair') and scope ('up to 2 years'), with explicit differentiation from siblings by detailing bucket contents (rate, predicted_rate, etc.) and stats (min/max/avg/current). It goes beyond the tool name/title by specifying the data scope and structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('For carry-trade research and derivatives backtesting'), which helps guide usage. However, it does not explicitly mention when not to use it or name alternative tools among siblings (e.g., get_funding_rates might be related), so it lacks explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_rates (Funding Rates): Grade A
Read-only, Idempotent

Get live perpetual futures funding rates for 34 pairs on Binance. $0.005

Parameters (JSON Schema)
Name | Required | Description | Default
symbol | No | Filter by symbol (e.g. BTCUSDT)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations: specifies 'Binance' as the source (aligns with openWorldHint), '34 pairs' as scope limitation, and 'live' as freshness. These details help the agent understand data provenance and coverage limits not captured in structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely brief and front-loaded with key information. However, the '$0.005' fragment appears to be an example value or cost indicator without labeling, creating confusing noise rather than signal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only data retrieval tool. Specifies the domain (perpetual futures), source (Binance), and dataset size (34 pairs). No output schema exists, but the description sufficiently indicates the return type (funding rates).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (symbol parameter fully described with example), the description correctly relies on the schema. Adds no parameter-specific guidance but none is needed at this coverage level.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the resource (perpetual futures funding rates) and scope (34 pairs on Binance) with specific action verb. However, the trailing '$0.005' is ambiguous and unexplained, slightly muddying the purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through specificity (funding rates vs prices), but provides no explicit guidance on when to prefer this over sibling tools like get_prices or get_ohlcv. No prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_gas_prices (Gas Prices): Grade A
Read-only, Idempotent

Get real-time gas prices — Ethereum gwei (slow/standard/fast) and Bitcoin sat/vB fee estimates. $0.003

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Chain: ethereum, bitcoin, or both | both
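
Gwei readings become a fee estimate once multiplied by a gas limit. A minimal sketch, assuming the standard 21,000 gas of a simple ETH transfer; the tier gwei values below are hypothetical, not outputs of this tool:

```python
def eth_transfer_fee(gas_price_gwei: float, gas_limit: int = 21_000) -> float:
    """Estimated fee in ETH: gas price (gwei) * gas limit, where 1 gwei = 1e-9 ETH."""
    return gas_price_gwei * gas_limit * 1e-9

tiers = {"slow": 8.0, "standard": 12.0, "fast": 20.0}  # hypothetical gwei readings
for name, gwei in tiers.items():
    print(f"{name}: {eth_transfer_fee(gwei):.6f} ETH")
```

The same shape of arithmetic applies on the Bitcoin side with sat/vB times transaction size in vbytes.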
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare this is a safe, idempotent read operation. The description adds valuable behavioral context not in annotations: the '$0.003' suggests API cost/pricing, and it details the specific return value structure (tiered gwei estimates vs sat/vB), helping the agent predict output format without an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise and front-loaded. The single sentence efficiently packs the action, the two supported chains with their respective data formats, and cost information. No redundant words or repetitive restatements of the title.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter data retrieval tool with no output schema, the description is sufficiently complete. It explains what data comes back (compensating for missing output schema) and covers the essential scope. Minor gap: could clarify what '$0.003' represents (API cost vs example fee).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage (the 'chain' parameter is fully documented with allowed values). The description does not explicitly discuss the parameter, but given the complete schema coverage, the baseline score of 3 is appropriate—the schema carries the semantic load adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('gas prices'), clearly defining scope. It distinguishes from siblings like 'get_prices' by specifying this returns fee estimates (not asset prices) and details the exact data structures returned: Ethereum gwei with speed tiers (slow/standard/fast) and Bitcoin sat/vB.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context about what chains and formats are supported, implying usage for transaction fee estimation on Ethereum or Bitcoin. However, lacks explicit guidance on when to use this versus the similar sibling 'get_mempool_fees' or why one might choose specific chain options over the default 'both'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_intel_bundle (Intelligence Bundle): Grade A
Read-only, Idempotent

Get complete intelligence bundle — ML signal, regime, features, score, funding, orderbook, news. $0.18

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (BTC, ETH, SOL, TAO, KAS, XLM) | BTC
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral context beyond annotations: explicit cost disclosure ('$0.18') and detailed breakdown of bundled data sources (ML signal, regime, etc.) that reveals what the operation returns. Annotations cover safety profile (readOnly, idempotent, non-destructive); description complements this with commercial and content transparency. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: first clause establishes purpose ('Get complete intelligence bundle'), em-dash enumerates contents, and cost is stated. Every token serves a function—no redundancy with schema or annotations. Front-loaded with the core action and scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an aggregation tool of moderate complexity (bundling 7+ data streams), the description adequately covers what is returned. Cost disclosure is essential for a paid endpoint. Absent an output schema, listing the constituent data sources provides sufficient proxy for return value documentation, though mentioning data freshness or latency would strengthen completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (coin parameter fully documented with type, default, and enumerated examples), the schema carries the full semantic load. The description adds no parameter-specific clarifications, which is appropriate when the schema is self-explanatory, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'intelligence bundle' with explicit scope ('complete'). The enumerated components (ML signal, regime, features, score, funding, orderbook, news) clearly distinguish it from sibling tools that provide individual metrics (e.g., get_ml_signal, get_orderbook, get_crypto_news), positioning this as an aggregated data package.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through the word 'complete' and lists contents, implicitly suggesting use when comprehensive multi-factor data is needed. Critically includes cost '$0.18' which informs usage economics, but lacks explicit guidance on when to use this paid bundle versus free individual tool calls (e.g., 'use when you need all components vs. individual calls').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_intel_coin (Intelligence Feed Single Coin): Grade B
Read-only, Idempotent

Single coin deep intelligence — ML signal, all features, plus chain data (BTC mempool, ETH beacon, network stats). $0.003

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (e.g. BTC, ETH, SOL) | BTC
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description specifies the cost ('$0.003'), which is crucial usage information not captured in annotations. It also hints at the comprehensive nature of the data returned ('deep intelligence'), though it doesn't detail rate limits or specific authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that packs key information: purpose, scope, and cost. Every element serves a purpose with minimal waste. It could be slightly improved by front-loading the core function more explicitly, but it's generally well-constructed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deep intelligence with multiple data types) and the absence of an output schema, the description provides a reasonable overview but lacks details about the return format, data structure, or specific intelligence components. The annotations cover safety aspects well, but for a comprehensive intelligence tool, more information about what 'deep intelligence' entails would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single 'coin' parameter. The description doesn't add any parameter-specific information beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even without parameter details in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Single coin deep intelligence' with specific components like ML signal, all features, and chain data, which distinguishes it from simple price or basic data tools. However, it doesn't explicitly differentiate from similar intelligence tools like 'get_intel_bundle' or 'get_intel_snapshot' among its siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose it over other intelligence tools like 'get_intel_bundle' or when to use simpler data tools like 'get_prices'. The cost information ('$0.003') hints at usage considerations but doesn't provide explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_intel_history (Intelligence Feed History): Grade A
Read-only, Idempotent

Historical intelligence — OHLCV, funding rates, OI, signals with configurable lookback. 730d available. $0.01

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (e.g. BTC) | BTC
hours | No | Lookback hours (1-720)
include | No | Data types: ohlcv,funding,oi,signals | ohlcv,funding
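
The include parameter takes a comma-separated subset of the documented data types. A sketch of client-side validation before building the call arguments — the helper name is hypothetical, and the 1-720 bound comes from the schema above:

```python
ALLOWED = {"ohlcv", "funding", "oi", "signals"}

def build_intel_history_args(coin: str = "BTC", hours: int = 24,
                             include: str = "ohlcv,funding") -> dict:
    """Validate and assemble arguments for a get_intel_history call."""
    if not 1 <= hours <= 720:
        raise ValueError("hours must be in 1-720")
    parts = [p.strip() for p in include.split(",")]
    unknown = set(parts) - ALLOWED
    if unknown:
        raise ValueError(f"unknown data types: {sorted(unknown)}")
    return {"coin": coin, "hours": hours, "include": ",".join(parts)}

print(build_intel_history_args("ETH", 48, "ohlcv,oi"))
```

Validating locally avoids paying the per-call fee for a request the server would reject anyway.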
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate it's read-only, non-destructive, idempotent, and open-world, covering safety and reliability. The description adds valuable context beyond this: it specifies a cost ('$0.01'), a maximum lookback ('730d available'), and that data types are configurable ('OHLCV, funding rates, OI, signals with configurable lookback'), which helps the agent understand operational constraints and scope not captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, packing key information into a single sentence: purpose, data types, lookback, availability, and cost. Every element earns its place with no wasted words, making it efficient and easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, data types, lookback, availability, and cost. However, it lacks details on output format or structure, which could be helpful since there's no output schema, leaving some ambiguity about what the agent should expect in return.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for each parameter (coin, hours, include). The description adds some semantic context by mentioning 'configurable lookback' and listing data types, but it does not provide additional details beyond what the schema already documents, such as format examples or interactions between parameters. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Historical intelligence') and resources ('OHLCV, funding rates, OI, signals'), distinguishing it from siblings like get_ohlcv (only OHLCV) or get_funding_rates (only funding rates). It specifies the scope ('configurable lookback', '730d available') and cost ('$0.01'), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage ('Historical intelligence', 'configurable lookback', '730d available'), implying it's for retrieving historical data with flexible timeframes. However, it does not explicitly state when not to use it or name alternatives among siblings, such as get_intel_snapshot or get_intel_coin, which might offer different data scopes or real-time information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_intel_snapshot: Intelligence Feed Snapshot (Grade: A)
Read-only · Idempotent

Multi-coin intelligence snapshot — ML signals, funding, OI, regime, volatility for up to 34 coins. Sub-50ms. $0.008
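For concreteness, a minimal client-side sketch of how the tool's two parameters might be assembled into call arguments; the argument names and defaults come from the parameter table, but the helper function itself is hypothetical:

```python
def build_snapshot_args(coins=None, detail="medium"):
    """Assemble arguments for a get_intel_snapshot call (illustrative only)."""
    if detail not in ("medium", "full"):
        raise ValueError("detail must be 'medium' or 'full'")
    args = {"detail": detail}
    if coins:
        # Omitting 'coins' entirely requests all 34 supported coins.
        args["coins"] = ",".join(c.strip().upper() for c in coins)
    return args

print(build_snapshot_args(["btc", "eth", "sol"], detail="full"))
# → {'detail': 'full', 'coins': 'BTC,ETH,SOL'}
```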

Parameters (JSON Schema)
Name | Required | Description | Default
coins | No | Comma-separated coins (e.g. BTC,ETH,SOL). Omit for all 34 |
detail | No | Detail level: medium (15 fields) or full (60+ fields) | medium
Behavior 4/5

Annotations already cover safety (readOnly, non-destructive, idempotent) and scope (openWorld). The description adds valuable behavioral context: performance ('Sub-50ms') and cost ('$0.008'), which are not captured in annotations. It doesn't contradict annotations, and the added details enhance transparency.

Conciseness 5/5

The description is extremely concise and front-loaded: it packs purpose, scope, performance, and cost into a single, efficient sentence with zero wasted words. Every element (data types, coin limit, speed, cost) earns its place.

Completeness 4/5

Given the tool's moderate complexity (2 parameters, no output schema), the description is quite complete. It covers purpose, scope, performance, and cost. With annotations providing safety/scope info and schema covering parameters, the main gap is lack of output details (e.g., what the snapshot structure looks like), but this is partially mitigated by the 'detail' parameter description.

Parameters 3/5

Schema description coverage is 100%, so the schema fully documents both parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain 'coins' or 'detail' further). Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Purpose 5/5

The description clearly states the tool's purpose: it provides a 'multi-coin intelligence snapshot' with specific data types (ML signals, funding, OI, regime, volatility) for up to 34 coins. It distinguishes itself from siblings like 'get_intel_coin' (single-coin) and 'get_intel_bundle' (different scope) by specifying its multi-coin snapshot nature.

Usage Guidelines 4/5

The description provides clear context for when to use this tool: for a comprehensive snapshot of multiple coins with specific intelligence metrics. It doesn't explicitly state when not to use it or name alternatives, but the context of sibling tools (e.g., 'get_intel_coin' for single-coin detail) implies differentiation.

get_kaspa_network: Kaspa Network (Grade: B)
Read-only · Idempotent

Get Kaspa BlockDAG network stats from own Rusty Kaspa node — blocks, hashrate, mempool, peers. $0.002

Parameters (JSON Schema)
Name | Required | Description | Default
detail | No | Include DAG metrics |
Behavior 3/5

With no annotations provided, the description carries the full burden. It adds valuable context by specifying the data source ('Rusty Kaspa node') and hints at cost/rate limits ('$0.002'), but fails to disclose safety characteristics (read-only status), authentication requirements, or error behaviors expected for a node-querying tool.

Conciseness 4/5

The description is a single, front-loaded sentence with the verb leading. It is appropriately brief, though the trailing '$0.002' lacks context (presumably cost) which slightly hinders immediate comprehension without earning its place clearly.

Completeness 3/5

Given zero parameters and no output schema, the description adequately scopes the return value by listing metric categories. However, for a data retrieval tool, it should explicitly state it returns current network statistics and clarify the return format, especially given significant sibling overlap that could confuse agent selection.

Parameters 4/5

The input schema contains zero parameters, triggering the baseline score of 4. The description correctly omits parameter discussion since none exist, though it could have clarified that no filtering is possible (consistent with listing specific static metrics).

Purpose 4/5

The description uses a clear verb ('Get') and specifies the resource ('Kaspa BlockDAG network stats'), listing exact metrics returned (blocks, hashrate, mempool, peers). It distinguishes the data source ('own Rusty Kaspa node'), implicitly differentiating it from sibling query tools, though it doesn't explicitly name alternatives.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool versus siblings like 'query_kaspa_network' or 'inspect_mempool', nor does it mention prerequisites or conditions for use. Agents must infer applicability from the name alone.

get_liquidations: Liquidation Data (Grade: B)
Read-only · Idempotent

Get real-time crypto liquidation data — long/short volume, cascade detection, biggest events. $0.005
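The 'cascade detection' the description advertises presumably means flagging bursts of liquidations clustered in time. A minimal sliding-window sketch, where the input shape (sorted unix timestamps) and the thresholds are assumptions rather than the server's actual algorithm:

```python
def detect_cascades(timestamps, window_s=60, min_events=3):
    """Flag (start, end) spans where >= min_events liquidations land
    within window_s seconds. 'timestamps' must be sorted ascending."""
    cascades, start = [], 0
    for end in range(len(timestamps)):
        # Shrink the window from the left until it spans <= window_s seconds.
        while timestamps[end] - timestamps[start] > window_s:
            start += 1
        if end - start + 1 >= min_events:
            cascades.append((timestamps[start], timestamps[end]))
    return cascades

print(detect_cascades([0, 10, 20, 200]))
# → [(0, 20)]
```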

Parameters (JSON Schema)
Name | Required | Description | Default
hours | No | Lookback hours (1-24) |
Behavior 2/5

No annotations provided, so description carries full disclosure burden. 'Real-time' indicates data freshness, but missing: safety profile (read-only assumed but unstated), return format/schema, rate limits, or explanation of '$0.005' (possible cost/latency hint). Inadequate behavioral disclosure for a zero-annotation tool.

Conciseness 4/5

Front-loaded structure with primary action first, followed by em-dash elaboration of specific data types. Single-sentence efficiency is marred only by the unexplained '$0.005' trailing fragment which adds noise without context.

Completeness 3/5

Adequate for a parameter-less data retrieval tool: covers resource and data categories. However, lacks return value description (no output schema exists) and leaves '$0.005' unexplained. Sufficient but minimal coverage given no annotations or output schema.

Parameters 4/5

Zero parameters per schema (rule baseline 4). Description correctly implies no filtering is available (returns all liquidation data categories by default), which aligns with the empty input schema.

Purpose 4/5

Clear verb ('Get') and resource ('crypto liquidation data') with specific differentiators ('cascade detection', 'biggest events') that imply forced liquidation monitoring vs. sibling generic data tools. The cryptic '$0.005' suffix slightly obscures clarity but doesn't invalidate the specific purpose statement.

Usage Guidelines 2/5

No explicit when/when-not guidance or alternative suggestions despite numerous similar siblings (get_long_short, get_funding, get_open_interest). 'Real-time' provides temporal context but insufficient differentiation guidance for the 40+ sibling market data tools.

get_long_short: Long/Short Ratio (Grade: A)
Read-only · Idempotent

Get Binance long/short ratio for top perpetual pairs — trader sentiment indicator. $0.005
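For reference, the metric itself is simply long accounts divided by short accounts, with 1.0 as the neutral point. A toy interpreter; the crowding thresholds here are illustrative inventions, not Binance's:

```python
def classify_sentiment(long_accounts, short_accounts):
    """Return (ratio, label); ratio > 1 means more accounts are net long."""
    if short_accounts <= 0:
        raise ValueError("short_accounts must be positive")
    ratio = long_accounts / short_accounts
    if ratio > 1.2:
        label = "long-crowded"   # thresholds are illustrative only
    elif ratio < 0.8:
        label = "short-crowded"
    else:
        label = "balanced"
    return round(ratio, 3), label

print(classify_sentiment(60, 40))
# → (1.5, 'long-crowded')
```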

Parameters (JSON Schema)
Name | Required | Description | Default
symbol | No | Filter by symbol (e.g. BTCUSDT) |
Behavior 2/5

No annotations provided, so description carries full burden. It identifies the data source (Binance) but omits critical behavioral details: data frequency, what 'top' means (by volume/OI?), response format (ratios per pair?), or the cryptic '$0.005' suffix unexplained.

Conciseness 3/5

Front-loaded and efficient except for the trailing '$0.005' which appears to be metadata (cost?) or error that confuses rather than clarifies. Otherwise two clear sentences.

Completeness 3/5

Missing output schema means return structure is undocumented. While the description identifies the resource, it should ideally sketch the response format (e.g., ratios per trading pair, aggregated value) to be complete for a data retrieval tool.

Parameters 4/5

Zero parameters per schema (empty object), warranting baseline score per rubric. Schema coverage is trivially 100%. No parameter documentation burden exists.

Purpose 5/5

Specific verb 'Get' + resource 'Binance long/short ratio' + scope 'top perpetual pairs'. Clearly distinguishes from siblings like get_funding or get_open_interest by specifying the exact sentiment metric.

Usage Guidelines 3/5

Provides implied usage context by labeling it a 'trader sentiment indicator', but lacks explicit guidance on when to select this versus other market data tools (e.g., get_funding_arbitrage, get_liquidations) for analyzing market positioning.

get_macro_corr: Macro Correlations (Grade: A)
Read-only · Idempotent

Get BTC-Macro correlations — rolling Pearson vs DXY, VIX, Treasury, Oil, Gold with regime signal. $0.03
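'Rolling Pearson' means recomputing the Pearson coefficient over a sliding window of paired observations. A dependency-free sketch of the calculation; the server's exact window handling is unknown, and the 7- and 30-day options come from the parameter table:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0  # constant series -> 0.0

def rolling_pearson(btc, macro, window=7):
    """Correlation over each trailing window of daily observations."""
    return [pearson(btc[i - window:i], macro[i - window:i])
            for i in range(window, len(btc) + 1)]
```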

Parameters (JSON Schema)
Name | Required | Description | Default
window | No | Correlation window in days (7 or 30) |
Behavior 4/5

Excellent behavioral additions beyond annotations: discloses cost ('$0.03'), calculation method (Pearson), specific assets covered, and output components ('regime signal'). Complements annotations (readOnly/idempotent) without contradiction.

Conciseness 5/5

Single sentence packs methodology, asset list, feature signal, and pricing. Zero waste, front-loaded with core purpose, information-dense.

Completeness 4/5

Without output schema, description adequately covers what's returned (correlations vs listed assets + regime signal). Safety profile covered by annotations. Could clarify data format/structure but content description is sufficient for this complexity.

Parameters 3/5

Schema coverage is 100%, so baseline applies. Description mentions 'rolling' which conceptually relates to the 'window' parameter but doesn't explicitly describe parameter semantics or validation rules (7 or 30 days constraint).

Purpose 5/5

Description specifies exact operation ('Get'), asset pair ('BTC-Macro'), methodology ('rolling Pearson'), and specific macro assets (DXY, VIX, Treasury, Oil, Gold). Clearly distinguishes from generic 'get_correlations' sibling by specifying BTC focus and asset list.

Usage Guidelines 2/5

No guidance on when to use this tool versus siblings like 'get_correlations' or 'get_macro_data'. While it specifies BTC-macro scope, it doesn't explain selection criteria or prerequisites.

get_macro_data: Macro Indicators (Grade: A)
Read-only · Idempotent

Get real-time macro indicators — DXY, VIX, Gold, Oil, Treasury Yields. Crypto-macro correlation data. FREE.
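The filter codes are standard market symbols; a small lookup mapping them to the names the description uses. The helper itself is hypothetical, but the code-to-asset pairing is conventional:

```python
INDICATORS = {
    "DXY": "US Dollar Index",
    "VIX": "CBOE Volatility Index",
    "GC": "Gold futures",
    "CL": "Crude Oil futures",
    "TNX": "10-Year Treasury Yield",
}

def resolve_indicator(code):
    """Normalize a filter code and return its human-readable name."""
    code = code.strip().upper()
    if code not in INDICATORS:
        raise KeyError(f"unknown indicator {code!r}; choose from {sorted(INDICATORS)}")
    return INDICATORS[code]

print(resolve_indicator("gc"))
# → Gold futures
```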

Parameters (JSON Schema)
Name | Required | Description | Default
indicator | No | Filter: DXY, VIX, GC, CL, TNX |

Output Schema
Name | Required | Description
result | Yes |
Behavior 4/5

Annotations declare read-only, idempotent, non-destructive nature. Description adds valuable behavioral context: 'real-time' freshness indicator, 'FREE' cost/rate-limit signal, and clarifies the specific macro indicators available. No contradictions with annotations.

Conciseness 4/5

Two efficient sentences. Front-loaded with primary asset types. 'FREE' provides rate-limit context though slightly tangential. No redundant repetition of parameter details already in schema.

Completeness 4/5

Appropriate for a single-parameter filter tool with output schema. Description covers the indicator universe completely. No need to detail return values since output schema exists and parameters are minimal.

Parameters 4/5

Schema has 100% coverage with filter codes (GC, CL, TNX), but description adds semantic value by mapping these codes to human-readable names (Gold, Oil, Treasury Yields). This translation helps the agent understand what the indicator parameter values represent beyond the raw schema.

Purpose 5/5

Description uses specific verb 'Get' with resource 'macro indicators' and explicitly lists covered assets (DXY, VIX, Gold, Oil, Treasury Yields). Clearly distinguishes from crypto-specific siblings like get_btc_onchain or get_prices by emphasizing traditional macroeconomic indicators and crypto-macro correlation data.

Usage Guidelines 3/5

Scope is implied by listing specific indicators (DXY, VIX, etc.), but lacks explicit 'when to use' guidance regarding sibling tool get_macro_corr or whether to use this versus get_correlations when seeking macro-crypto relationships. No exclusion criteria or prerequisites mentioned.

get_market_regime: Market Regime (Grade: A)
Read-only · Idempotent

Get market regime via HMM — accumulation, markup, distribution, markdown, high_volatility. FREE.
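The server classifies regimes with a Hidden Markov Model, which is not reproduced here. Purely to illustrate the five labels it reports, a toy threshold-based stand-in; every cutoff below is invented:

```python
def label_regime(recent_return, volatility, vol_hi=0.04):
    """Map a recent return and a volatility reading to one of the five
    regime labels the tool reports. NOT the server's HMM; a toy proxy."""
    if volatility >= vol_hi:
        return "high_volatility"
    if recent_return > 0.02:
        return "markup"
    if recent_return < -0.02:
        return "markdown"
    # Sideways market: sign of drift picks accumulation vs distribution.
    return "accumulation" if recent_return >= 0 else "distribution"

print(label_regime(0.05, 0.01))
# → markup
```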

Parameters (JSON Schema)
Name | Required | Description | Default
include_features | No | Include regime feature values |

Output Schema
Name | Required | Description
result | Yes |
Behavior 4/5

Annotations cover safety (readOnly, idempotent, non-destructive), so the description adds valuable behavioral context by enumerating the specific five regime classifications (accumulation through high_volatility) and noting the analytical method (HMM). It also includes 'FREE' indicating cost structure. It does not disclose data source frequency or lookahead bias risks, but covers the essential behavioral outputs.

Conciseness 4/5

Extremely compact at two phrases plus a tag. 'Get market regime via HMM' front-loads purpose. The dash-separated list efficiently communicates the categorical outputs. 'FREE' earns its place as cost information not present in annotations. Minor deduction for undefined acronym 'HMM' which could confuse non-technical agents.

Completeness 4/5

Given this is a simple read-only tool with one optional parameter, full annotations, and an output schema (per context signals), the description is sufficiently complete. It explains what the tool does, how it does it (HMM), what values it returns, and cost. It appropriately delegates return value details to the output schema.

Parameters 3/5

Schema coverage is 100% with one well-described boolean parameter (include_features). The description neither repeats nor expands upon this parameter, which is appropriate since the schema fully documents it. This meets the baseline score of 3 for high-coverage schemas where the description doesn't need to compensate.

Purpose 4/5

The description uses a specific verb-resource pair ('Get market regime') and identifies the methodology ('HMM' - Hidden Markov Model) and specific regime outputs (accumulation, markup, distribution, markdown, high_volatility). This distinguishes it from generic siblings like get_signals or get_market_score. However, it does not explicitly differentiate why one would choose this over get_ml_signal or get_anomalies, and 'HMM' assumes technical familiarity without explanation.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool versus alternatives. While it lists the specific regimes detected, it does not state conditions like 'use this for trend-phase analysis' or 'for volatility regimes, use get_anomalies instead.' The 'FREE' mention hints at cost but not selection criteria.

get_market_score: Market Health Score (Grade: B)
Read-only · Idempotent

Get composite market health score (0-100) from 7 weighted components. $0.005
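A composite like the one described reduces to a weighted mean of bounded component scores. A sketch in which the component names and weights are invented, since the server's seven components are not documented here:

```python
def market_score(components, weights):
    """Weighted mean of component scores (each 0-100) -> composite 0-100."""
    if set(components) != set(weights):
        raise ValueError("components and weights must cover the same keys")
    total_w = sum(weights.values())
    return sum(components[k] * weights[k] for k in components) / total_w

# Hypothetical two-component example (the real tool uses seven).
print(market_score({"breadth": 80, "momentum": 40},
                   {"breadth": 1, "momentum": 3}))
# → 50.0
```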

Parameters (JSON Schema)
Name | Required | Description | Default
detail | No | Include component breakdown |
Behavior 4/5

Annotations cover readOnly/idempotent/destructive hints. Description adds valuable behavioral context: the score is composite (7 weighted components) and bounded (0-100). This explains the tool's aggregation logic beyond what annotations provide.

Conciseness 3/5

Front-loaded with essential information in the first sentence. However, the '$0.005' fragment is unexplained (possibly cost metadata) and reduces clarity without adding tool selection value.

Completeness 4/5

For a simple read-only getter with no output schema, the description adequately explains the return value's nature (composite score of 7 components, 0-100 scale). Combined with strong annotations, this provides sufficient context for invocation.

Parameters 3/5

Schema coverage is 100% with the 'detail' parameter fully documented. Description implies component breakdown exists (relating to the parameter) but doesn't explicitly describe parameter usage, meeting the baseline for high-coverage schemas.

Purpose 4/5

States specific action 'Get' and resource 'composite market health score' with clear scope (0-100 scale, 7 weighted components). However, it lacks explicit differentiation from siblings like get_fear_greed or get_signals, and the trailing '$0.005' is cryptic without context.

Usage Guidelines 2/5

Provides no guidance on when to select this tool versus alternatives like get_signals, get_ml_signal, or get_anomalies. No prerequisites, conditions, or explicit alternatives are mentioned.

get_mempool: Mempool Status (Grade: A)
Read-only · Idempotent

Get BTC mempool status from own node — TX count, size, fee estimates, pressure level. $0.03

Parameters (JSON Schema)
Name | Required | Description | Default
detail | No | Include fee rate histogram |
Behavior 4/5

Adds valuable operational context beyond annotations: '$0.03' indicates cost/rate-limiting, and 'from own node' clarifies data provenance (live node vs. cached/aggregated). Annotations already cover safety (readOnly/idempotent) so description complements them well.

Conciseness 5/5

Exemplary brevity: single sentence front-loaded with action and resource, em-dash cleanly separates secondary metrics, no waste. The trailing '$0.03', while telegraphic, is information-dense.

Completeness 4/5

Adequate for a simple read-only tool with one optional parameter. Covers return value categories, cost, and source. Minor gap: could briefly note relationship to get_mempool_fees to prevent selection errors.

Parameters 3/5

Schema has 100% coverage for the single 'detail' parameter. Description doesn't explicitly explain the parameter's effect (fee rate histogram), but lists the base output fields which implicitly defines the default behavior. Baseline 3 appropriate given schema completeness.

Purpose 5/5

Excellent specificity: verb 'Get' + resource 'BTC mempool status', distinguishes from sibling get_mempool_fees by listing comprehensive metrics (TX count, size, fee estimates, pressure level) and noting 'from own node' as the data source.

Usage Guidelines 3/5

Implicit usage guidance through the listed output fields (suggests use when needing full status vs. just fees), but lacks explicit when-to-use guidance or comparison with get_mempool_fees sibling.

get_mempool_fees: Mempool Distribution (Grade: B)
Read-only · Idempotent

Get mempool fee distribution — fee rate buckets, vsize breakdown, priority analysis. $0.01
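'Fee rate buckets' are a histogram over transaction fee rates (sat/vB). A sketch with invented bucket edges, since the server's actual bucketing is not documented:

```python
def bucket_fee_rates(fee_rates, edges=(1, 5, 10, 20, 50, 100)):
    """Count tx fee rates (sat/vB) into len(edges)+1 histogram buckets:
    bucket 0 holds rates below edges[0]; the last bucket holds rates
    at or above edges[-1]. Edge values are illustrative only."""
    buckets = [0] * (len(edges) + 1)
    for rate in fee_rates:
        i = 0
        while i < len(edges) and rate >= edges[i]:
            i += 1
        buckets[i] += 1
    return buckets

print(bucket_fee_rates([0.5, 3, 120]))
# → [1, 1, 0, 0, 0, 0, 1]
```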

Parameters (JSON Schema)
Name | Required | Description | Default
hours | No | History hours (1-24) |
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only, idempotent, and non-destructive behavior. The description adds value by disclosing what specific data structures are returned (fee rate buckets, vsize breakdown, priority analysis). However, the trailing '$0.01' is unexplained (possibly pricing) and there is no mention of rate limits or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficient, using a single sentence with an em-dash for detail. However, the unexplained '$0.01' at the end reads as stray metadata (possibly cost), which slightly undermines clarity; whether it earns its place is ambiguous.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one optional parameter and good annotations, the description is reasonably complete. It hints at the return structure (fee buckets, analysis) despite no formal output schema being present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description does not mention the 'hours' parameter, but the schema fully documents it with range constraints (1-24) and default value (4), so no additional description compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Get') and resource ('mempool fee distribution'), and distinguishes this from the sibling 'get_mempool' by specifying it returns fee rate buckets, vsize breakdown, and priority analysis. However, it does not explicitly contrast when to use this versus the generic mempool tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives (like 'get_mempool'), no mention of prerequisites, and no exclusion criteria. The description only states what the tool returns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
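The kind of explicit guidance this dimension rewards can be shown with a hedged rewrite of the description string. The wording below, and the contrast drawn with get_mempool, are illustrative assumptions based on the sibling tool's listed fields, not the server's actual text:

```python
# Hypothetical rewrite of the get_mempool_fees description string with
# explicit "use X instead of Y when Z" guidance. The contrast with
# get_mempool is an assumption based on that sibling's listed output fields.
IMPROVED_DESCRIPTION = (
    "Get mempool fee distribution -- fee rate buckets, vsize breakdown, "
    "priority analysis. Costs $0.01 per call. Use this tool when you need "
    "the fee histogram; use get_mempool instead for overall status "
    "(TX count, size, fee estimates, pressure level)."
)

# Guidance should not bloat the description: one or two sentences suffice.
print(len(IMPROVED_DESCRIPTION))
```

Note the selection rule costs only one extra sentence, which is the trade-off the Conciseness dimension cares about.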

get_ml_features · ML Features · A
Read-only · Idempotent

Get ML feature vector — 86 technical + on-chain + sentiment features used by the ML ensemble. $0.01

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (BTC, ETH, SOL) | BTC
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent addition of cost ('$0.01') not present in annotations. Describes the feature vector composition (86 features, three categories) which explains what the agent receives beyond the generic 'readOnly' annotation. Does not contradict the idempotent/readOnly hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient single sentence front-loaded with action verb. Every element earns its place: action (Get), resource (ML feature vector), content specifics (86 features, types), and cost ($0.01).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter read tool. Compensates for missing output schema by specifying the return contains 86 categorized features. Cost disclosure is critical contextual info. Could improve by mentioning return format (JSON object?).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, documenting the 'coin' parameter (string, default BTC, accepted values). The description mentions no parameter details, but with complete schema coverage, baseline 3 is appropriate as no additional semantic clarification is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb+resource ('Get ML feature vector') and distinguishes scope with '86 technical + on-chain + sentiment features used by the ML ensemble'. The '$0.01' cost disclosure also helps identify this as a premium data endpoint distinct from free analytics siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when/when-not guidance or comparison to siblings like 'get_ml_signal' (which likely returns predictions based on these features). The phrase 'used by the ML ensemble' offers minimal implied context but no actionable selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ml_signal · ML Prediction · B
Read-only · Idempotent

Get ML ensemble signal (5 models) — direction, probability, confidence. $0.05

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (BTC, ETH, SOL, TAO, KAS, XLM) | BTC
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full transparency burden. It successfully discloses the cost ($0.05) and architectural detail (5-model ensemble), which is valuable. However, it omits whether the operation is cached, rate-limited, or if the signal is real-time vs. batched.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense and efficient. Every token conveys critical information: the operation (Get), resource (ML ensemble signal), implementation detail (5 models), output structure, and cost. No filler or redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool without output schema, covering the core value proposition and cost model. However, it leaves the sole parameter undocumented and provides no guidance on the broader tool ecosystem, which would help an agent navigate the 40+ available trading analysis tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage (the 'coin' property lacks any description), and the description text adds no semantic explanation for this parameter. While the default value 'BTC' provides a contextual hint, the agent receives no guidance on valid formats, accepted values, or whether this expects a ticker symbol vs. full name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
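For contrast, full coverage for this parameter would take only a few schema keywords. A minimal sketch, assuming the accepted tickers shown in the parameter table; the description wording itself is hypothetical:

```python
# Sketch of a fully documented 'coin' property for get_ml_signal.
# Enum values are taken from the parameter table above; the description
# text is illustrative, not the server's actual schema.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "coin": {
            "type": "string",
            "description": "Ticker symbol (not full name) of the coin to score",
            "enum": ["BTC", "ETH", "SOL", "TAO", "KAS", "XLM"],
            "default": "BTC",
        }
    },
    "required": [],  # coin is optional; BTC is used when omitted
}

# The declared default must be one of the accepted values.
assert INPUT_SCHEMA["properties"]["coin"]["default"] in INPUT_SCHEMA["properties"]["coin"]["enum"]
```

With an enum and a description in place, the agent no longer has to guess ticker-vs-name format, which is exactly the gap scored here.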

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves an ML ensemble signal using 5 models and specifies the output components (direction, probability, confidence). However, it fails to differentiate from the sibling tool `predict_ml_signal` or clarify when to use this 'get' variant versus other ML-related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus alternatives like `predict_ml_signal` or `scan_trading_signals`. There are no explicit prerequisites, exclusions, or use-case recommendations despite having numerous sibling tools with overlapping functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_narratives · Crypto Narratives · B
Read-only · Idempotent

Get active crypto narratives — trending themes, sector rotation, momentum scores. $0.03

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max narratives to return |
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses what conceptual data is returned (themes, sector rotation, momentum scores), providing some behavioral context. However, it lacks operational details such as data freshness/latency, how 'active' is defined, rate limits, or authentication requirements for this apparently priced endpoint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence structure is appropriately front-loaded, but the appended '$0.03' fragments the description without adding functional value. This appears to be implementation-specific metadata that should reside in annotations or server configuration rather than the user-facing description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description lists return components which partially compensates. However, it omits scope limitations (number of narratives returned, time windows) and fails to clarify the significance of the '$0.03' element or any entitlements required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool accepts zero parameters; per rubric guidelines, 0 parameters merits a baseline score of 4. The description does not need to compensate for missing parameter documentation, though it correctly implies no user input is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves 'active crypto narratives' with specific components (trending themes, sector rotation, momentum scores), distinguishing it from raw data siblings like get_ohlcv or get_news. However, the trailing '$0.03' appears to be pricing metadata that leaked into the description, slightly muddying the purpose without adding functional clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like get_news, get_anomalies, or get_market_score. With numerous sibling data tools available, the agent lacks criteria to determine when thematic narrative analysis is preferred over raw market data or anomaly detection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_network_stats · Network Stats · A
Read-only · Idempotent

Get blockchain network stats — hashrate, blocks, peers, sync status. Coins: TAO, KAS, ALGO, XLM, QUBIC. $0.01

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Blockchain network: TAO, KAS, ALGO, XLM, QUBIC | TAO
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only/idempotent safety. Description adds valuable behavioral context not present in structured data: the specific metrics returned (hashrate, blocks, peers, sync status), cost disclosure ('$0.01'), and supported networks. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient three-sentence structure: purpose with em-dash specifics, scope enumeration, and cost. Every element earns its place—metrics define 'network stats', coin list defines scope, and price signals rate limits or cost constraints. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a single-parameter read operation. The description adequately covers functionality, cost, and return value semantics (via listed metrics) despite the absence of an output schema. No critical gaps given the tool's simplicity and comprehensive annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing baseline adequacy. Description reinforces the coin parameter by repeating the valid values (TAO, KAS, ALGO, XLM, QUBIC) but does not add syntax details, format constraints, or usage nuances beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' with clear resource 'blockchain network stats' and concrete metrics (hashrate, blocks, peers, sync status). The enumerated coin list (TAO, KAS, ALGO, XLM, QUBIC) effectively distinguishes this multi-coin tool from coin-specific siblings like get_kaspa_network, get_bittensor, and get_subtensor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context by enumerating supported coins, which implicitly defines when to use this tool (for those specific networks) versus coin-specific siblings. However, lacks explicit 'when not to use' guidance or direct comparison to overlapping tools like get_kaspa_network that might offer deeper single-network data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news_sentiment · AI News Analysis · A
Read-only · Idempotent

Get AI-analyzed crypto news — sentiment scores, impact levels, macro event classification. $0.01

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Filter by coin symbol |
hours | No | Lookback window in hours (1-168) |
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only/idempotent/destructive=false properties. The description adds critical cost information ('$0.01') not present in annotations and specifies the three analysis dimensions returned. However, it omits rate limits, pagination behavior, or return structure details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every element earns its place: action/resource front-loaded, em-dash separates feature list efficiently, cost appended as critical metadata. Zero redundancy or filler words in a single sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description adequately identifies what data comes back (sentiment, impact, macro classification) but fails to describe the structure, nesting, or cardinality of the response. Given only 2 simple parameters, this is minimally sufficient but leaves integration gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both 'coin' and 'hours' have complete descriptions in the schema). The tool description implies the temporal scope via 'AI-analyzed' but adds no additional parameter semantics, which is acceptable when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource pair ('Get AI-analyzed crypto news') with specific distinguishing features (sentiment scores, impact levels, macro event classification). The AI analysis focus helps differentiate from sibling get_crypto_news, though it doesn't explicitly state the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to select this tool versus alternatives like get_crypto_news (raw news). No prerequisites or filtering advice provided despite having a generic 'coin' parameter that defaults to empty.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ohlcv · OHLCV + Indicators · C
Read-only · Idempotent

Get OHLCV candle data with 86 technical indicators — RSI, MACD, Bollinger, volume profile. $0.005

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Coin symbol (BTC, ETH, SOL) | BTC
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It provides valuable cost context ('$0.005') and lists included indicators, but lacks critical behavioral details: supported timeframes/intervals, data source/exchanges, historical range limits, and return format structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded with the action. However, the '$0.005' pricing information at the end feels disconnected from the technical context and would benefit from explicit labeling as cost.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex financial data tool returning 86 indicators, the description is incomplete. It omits exchange coverage, available intervals/timeframes, pagination behavior, and data freshness—critical context for cryptocurrency OHLCV data given volatility and exchange fragmentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the undocumented 'coin' parameter. It completely fails to do so, providing no information about expected format (ticker vs name), supported values, or semantics despite the default 'BTC' suggesting ticker symbols.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the verb (Get), resource (OHLCV candle data), and scope (86 technical indicators including RSI, MACD, Bollinger). However, it fails to distinguish from sibling `fetch_ohlcv_indicators`, which appears to offer similar functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like `fetch_ohlcv_indicators` or other analysis tools. No prerequisites, time recommendations, or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_oi_history · Open Interest History · A
Read-only · Idempotent

Open Interest time series for any tracked coin across Binance + Bybit + OKX. Hourly buckets up to 60 days. Per-exchange breakdown + total + price. Stats: min/max/avg/current total USD and 30-day change_pct. $0.05

Parameters (JSON Schema)
Name | Required | Description | Default
coin | Yes | Coin symbol |
days | No | Look-back days (1-60) |
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as a safe, read-only, idempotent operation. The description adds valuable behavioral context beyond annotations: it specifies the data sources (Binance, Bybit, OKX), time granularity (hourly buckets), statistical outputs (min/max/avg/current total USD, 30-day change_pct), and cost ($0.05). This significantly enhances understanding of what the tool actually returns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded and efficient: the first sentence covers purpose, scope, and sources; the second adds statistical details and cost. Every sentence earns its place with zero wasted words, making it easy for an agent to quickly understand the tool's value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations (readOnlyHint, openWorldHint, idempotentHint) and full schema coverage, the description provides excellent context about data sources, granularity, statistical outputs, and cost. The only gap is the lack of output schema, but the description compensates well by detailing what statistics are returned. For a read-only query tool, this is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters (coin symbol and look-back days). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Open Interest time series'), resources ('any tracked coin across Binance + Bybit + OKX'), and scope ('Hourly buckets up to 60 days'), distinguishing it from sibling tools like 'get_open_interest' which likely provides current data rather than historical time series.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool: for historical open interest data with hourly granularity across three exchanges. However, it doesn't explicitly state when not to use it or name alternatives like 'get_open_interest' for current data, which would be helpful for sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_onchain_fg · On-Chain Fear & Greed · A
Read-only · Idempotent

Get proprietary On-Chain Fear & Greed Index (0-100) from own Bitcoin Core node — CDD, mempool pressure, hashrate trend, whale volume. FREE.

Parameters (JSON Schema)

No parameters

Output Schema
Name | Required | Description
result | Yes |
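The bare `result` field above could carry the same information the tool description already states. A sketch of what a described output schema might look like; every field name and the nesting are assumptions, not the server's actual contract:

```python
# Illustrative output schema for get_onchain_fg. Only the 0-100 scale and
# the four component metrics come from the tool description; each field
# name below is a hypothetical choice.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "result": {
            "type": "object",
            "description": "On-Chain Fear & Greed reading from the Bitcoin Core node",
            "properties": {
                "index": {"type": "integer", "minimum": 0, "maximum": 100},
                "components": {
                    "type": "object",
                    "description": "CDD, mempool pressure, hashrate trend, whale volume",
                },
            },
            "required": ["index"],
        }
    },
    "required": ["result"],
}
```

Even this much structure lets an agent know the index is an integer on a bounded scale rather than having to infer it from the prose description.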
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations: discloses cost ('FREE'), proprietary nature, infrastructure ('own Bitcoin Core node'), and constituent metrics. Annotations cover idempotency/safety, while description explains what actually gets aggregated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with verb and resource. Efficient single sentence with appositive em-dash listing components. 'FREE.' as final word fragment is slightly unconventional but acceptable terseness. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a zero-parameter tool with output schema. Description explains the index scale (0-100), methodology components, data source, and cost—sufficient context without needing to describe return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, establishing baseline 4 per rubric. No parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Get' + resource 'On-Chain Fear & Greed Index' + data source 'own Bitcoin Core node'. Distinguishes from sibling 'get_fear_greed' via 'On-Chain' modifier and explicit methodology list (CDD, mempool pressure, hashrate trend, whale volume).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual clues through listed components (CDD, mempool pressure, etc.), signaling this measures blockchain fundamentals rather than sentiment, implicitly guiding selection over 'get_fear_greed'. Lacks explicit 'when to use vs alternatives' statement, preventing a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_open_interest · Open Interest · B
Read-only · Idempotent

Get aggregated open interest across exchanges — total OI, 24h change, coin breakdown. $0.01

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | Filter by coin (BTC, ETH, SOL) |
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry full behavioral disclosure burden. It discloses aggregation scope ('across exchanges') and return data categories, but omits critical operational details: whether data is real-time or delayed, which exchanges are included, rate limits, or the meaning of '$0.01' (cost? precision?).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The first clause is appropriately front-loaded and concise, but the trailing '$0.01' fragment wastes space without explanation, creating confusion rather than value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description adequately compensates by listing the conceptual return values (total, 24h change, breakdown). For a simple data retrieval tool, this is sufficient, though explicit output structure would improve it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per calibration guidelines, zero-parameter tools receive a baseline score of 4. The description does not need to compensate for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('aggregated open interest across exchanges') and details the returned metrics ('total OI, 24h change, coin breakdown'). However, the trailing '$0.01' is cryptic noise that slightly undermines clarity, and stronger differentiation from siblings like get_liquidations or get_funding would help.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus the numerous sibling market data tools (get_funding, get_liquidations, get_exchange_flows, etc.). No 'when to use' or 'when not to use' conditions are specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_optionsOptions AnalyticsA
Read-onlyIdempotent
Inspect

Get crypto options data — put/call ratio, implied volatility, max pain, skew. $0.03

ParametersJSON Schema
NameRequiredDescriptionDefault
coinNoFilter by coin (BTC, ETH)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must carry the full behavioral burden. It adequately discloses what data is returned (the specific metrics listed), but provides no operational context such as rate limits, data freshness, required authentication, or the meaning of the cryptic '$0.03' suffix. It does not explicitly state whether this is a read-only operation, though 'Get' implies it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief and front-loaded with the core action and resource. However, the trailing '$0.03' is structurally confusing—it appears to be cost metadata or noise without context, slightly detracting from an otherwise clean delivery.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by listing the specific metrics returned (put/call ratio, IV, etc.). However, for a data retrieval tool with no input parameters, it lacks context about supported underlying assets, timeframes, or exchanges, leaving gaps in the agent's understanding of the tool's scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters present in the input schema, the baseline score is 4 per the evaluation rules. The description correctly omits parameter discussion since none exist, and the schema coverage is trivially 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('crypto options data') and explicitly lists options-specific metrics (put/call ratio, implied volatility, max pain, skew) that clearly distinguish it from sibling tools like get_prices or get_funding which handle spot markets or perp funding rather than options Greeks and sentiment data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given the extensive list of sibling data tools (get_prices, get_charts, get_funding), the absence of explicit selection criteria or prerequisites leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_orderbookOrderbook DepthC
Read-onlyIdempotent
Inspect

Get Binance L2 orderbook microstructure — bid/ask imbalance, spread, wall ratios. $0.01

ParametersJSON Schema
NameRequiredDescriptionDefault
coinNoCoin symbol (BTC, ETH, SOL, TAO, KAS, XLM)BTC
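The single 'coin' parameter and its BTC default can be made concrete with a minimal sketch. This builds a generic MCP tools/call JSON-RPC payload; the envelope shape follows the MCP specification, while the coin list, default, and the helper name are taken from or invented around the parameter table above.

```python
import json

# Hypothetical helper for calling get_orderbook via MCP tools/call.
# The allowed coin symbols and the BTC default mirror the schema row above.
def build_orderbook_call(coin="BTC", request_id=1):
    allowed = {"BTC", "ETH", "SOL", "TAO", "KAS", "XLM"}
    if coin not in allowed:
        raise ValueError(f"unsupported coin: {coin}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_orderbook", "arguments": {"coin": coin}},
    }

print(json.dumps(build_orderbook_call("ETH")))
```

Because the schema supplies a default, an agent can safely omit the argument entirely and still get the BTC book.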
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses the data source (Binance) and granularity (L2/microstructure), but omits critical behavioral details like rate limits, authentication requirements, data freshness, or error handling when invalid symbols are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the core purpose, but the dangling '$0.01' fragment creates confusion: it appears to be a pricing annotation, and without context it adds no explanatory value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description partially compensates by listing specific output metrics (imbalance, spread, wall ratios). However, it leaves the single input parameter undocumented and provides no behavioral constraints, making it minimally viable but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate but fails entirely—it does not mention the 'coin' parameter, acceptable formats (BTC vs BTCUSDT), or clarify that it defaults to BTC. The cryptic trailing '$0.01' provides no clarity on whether this represents cost, minimum value, or spread.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the action (Get), resource (Binance L2 orderbook), and key metrics returned (bid/ask imbalance, spread, wall ratios). However, it fails to distinguish this from the sibling tool 'analyze_orderbook', leaving ambiguity about whether this returns raw data or processed analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus 'analyze_orderbook' or other market data siblings. There are no prerequisites, exclusions, or explicit use cases mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_portfolio_riskPortfolio RiskB
Read-onlyIdempotent
Inspect

Get portfolio risk analytics — VaR, CVaR, Sharpe ratio, max drawdown, correlation matrix. $0.10

ParametersJSON Schema
NameRequiredDescriptionDefault
coinsNoComma-separated coins (e.g. BTC,ETH,SOL)BTC,ETH,SOL
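The comma-separated 'coins' format and its default can be sketched client-side. The format, example, and default (BTC,ETH,SOL) come from the schema row above; the helper name is invented for illustration.

```python
# Hypothetical normalizer for the get_portfolio_risk "coins" argument.
def normalize_coins(coins=None):
    """Return a canonical comma-separated coin list (schema default: BTC,ETH,SOL)."""
    if not coins:
        return "BTC,ETH,SOL"  # schema default
    parts = [c.strip().upper() for c in coins.split(",") if c.strip()]
    return ",".join(parts)

print(normalize_coins(" btc, eth "))
```

Normalizing case and whitespace before the call avoids relying on unstated server-side leniency.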
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full burden. It discloses cost ($0.10) and hints at output content via the metric list, but fails to state read-only safety, the response format, or how the supplied coin list maps to portfolio holdings.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: action phrase, enumerated metrics, cost. Every token earns its place with zero filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with a single optional parameter and no output schema, the description adequately identifies the function and cost, but leaves ambiguity about how the analyzed portfolio is constructed from the coin list (equal weighting? held positions?) and omits return format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single optional 'coins' parameter is fully documented in the schema, including its comma-separated format, an example, and the default (BTC,ETH,SOL), exceeding the high-coverage baseline and leaving the description little to add.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'portfolio risk analytics' with explicit metric enumeration (VaR, CVaR, Sharpe ratio, max drawdown, correlation matrix). Lists what distinguishes this from siblings like get_correlation (single metric vs. comprehensive suite), though lacks explicit comparative language.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like get_correlation or get_chart. The metric list implies scope but provides no 'when-to-use' or prerequisite guidance (e.g., portfolio selection method).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricesCrypto PricesA
Read-onlyIdempotent
Inspect

Get real-time crypto prices for 116 coins — BTC, ETH, SOL, and 113 more. Returns USD price, 24h change, volume, market cap. FREE.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax coins to return (default all)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultYes
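The 'limit' parameter and top-level 'result' key can be illustrated with a request sketch. The JSON-RPC envelope follows the MCP specification; 'limit' comes from the input schema above, and omitting it returns all 116 coins per the description.

```python
import json

# Sketch of a tools/call request for get_prices. "limit" is the only
# input; drop the key entirely to request all 116 coins.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "get_prices", "arguments": {"limit": 3}},
}
wire = json.dumps(request)
print(wire)
```

Per the output schema, the response body arrives under a single required 'result' key, so a client unwraps `response["result"]` before reading per-coin fields.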
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical information absent from annotations: cost ('FREE'), exact scope (116 coins), and return structure. Annotations cover safety profile (readOnly/idempotent); description adds operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tight sentences. First establishes scope/examples, second establishes return values and cost. Zero waste; every word aids selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a read-only data retrieval tool with output schema present. Mentions real-time nature and key metrics. Could improve by mentioning rate limits or data freshness, but strong given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'limit' fully described as 'Max coins to return'. Description mentions '116 coins' which implicitly contextualizes the parameter, meeting baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') + resource ('crypto prices') + scope ('116 coins' with examples BTC/ETH/SOL). 'Real-time' effectively distinguishes from historical-data siblings like get_ohlcv.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists specific return fields (USD price, 24h change, volume, market cap) which clearly signals this is for current market snapshots vs analysis tools like get_correlations or sentiment tools like get_fear_greed. Lacks explicit 'when-not-to-use' statements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_regime_historyRegime HistoryA
Read-onlyIdempotent
Inspect

HMM market-regime timeline — accumulation, markup, distribution, markdown, high_volatility per timestamp with confidence. transitions_only returns only regime-change events for strategy-trigger research. $0.05

ParametersJSON Schema
NameRequiredDescriptionDefault
daysNoLook-back days (1-90)
transitions_onlyNoOnly regime-change events
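The 1-90 range on 'days' and the boolean 'transitions_only' flag invite client-side validation before spending the $0.05 call. This sketch enforces the documented range; the helper name is invented for illustration.

```python
# Hypothetical argument builder for get_regime_history. The 1-90 bound
# on "days" and the "transitions_only" flag come from the schema above.
def regime_history_args(days, transitions_only=False):
    if not 1 <= days <= 90:
        raise ValueError("days must be within 1-90")
    args = {"days": days}
    if transitions_only:
        args["transitions_only"] = True  # only regime-change events
    return args

print(regime_history_args(30, transitions_only=True))
```

Omitting the flag when False keeps the payload minimal, matching the full-timeline default behavior.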
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations by specifying the cost ('$0.05'), which flags per-call pricing, and clarifies the data scope ('per timestamp with confidence'). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality, uses efficient phrasing, and every sentence adds value (e.g., cost information and parameter context). It avoids redundancy and is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and full schema coverage, the description is largely complete. It adds cost and usage context, though without an output schema, it could benefit from hints about return format (e.g., timestamps, confidence scores). Still, it provides sufficient guidance for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, fully documenting both parameters ('days' and 'transitions_only'). The description adds minimal semantics by explaining the purpose of 'transitions_only' ('returns only regime-change events for strategy-trigger research'), but does not provide additional details beyond what the schema already offers, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific resource ('HMM market-regime timeline') and enumerated regimes ('accumulation, markup, distribution, markdown, high_volatility per timestamp with confidence'), distinguishing it from siblings like 'get_market_regime' by emphasizing historical timeline data rather than current regime status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the 'transitions_only' option ('for strategy-trigger research'), but it does not explicitly state when to use this tool versus alternatives like 'get_market_regime' or other historical data tools, nor does it mention exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_signal_feedSignal Feed Multi-CoinB
Read-onlyIdempotent
Inspect

Multi-coin ML trading signals — direction, probability, confidence, track record. $0.01

ParametersJSON Schema
NameRequiredDescriptionDefault
coinsNoComma-separated coins. Omit for all
actionable_onlyNoOnly actionable signals
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide comprehensive behavioral hints (readOnly, openWorld, idempotent, non-destructive). The description adds minimal context about cost ('$0.01') but doesn't disclose rate limits, authentication needs, or what 'actionable' signals mean. It doesn't contradict annotations, but adds little value beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief (two phrases) and front-loaded with the core purpose. The cost information is relevant but could be better integrated. No wasted words, though the structure could be slightly more polished.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with comprehensive annotations and full parameter documentation, the description is minimally adequate. However, with no output schema and multiple similar sibling tools, it should provide more context about output format and differentiation from alternatives to be truly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters. The description adds no parameter-specific information beyond what's in the schema. The baseline of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Multi-coin ML trading signals' with specific attributes (direction, probability, confidence, track record), which is a specific verb+resource combination. However, it doesn't explicitly distinguish itself from sibling tools like 'get_signal_feed_coin' or 'get_ml_signal', which appear to be related signal tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools that appear related to signals (get_signal_feed_coin, get_ml_signal, get_signals), there's no indication of when this multi-coin feed is preferred over single-coin or other signal tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_signal_feed_coinSignal Feed Single CoinB
Read-onlyIdempotent
Inspect

Single coin ML signal with V3 model details, variant comparison, regime, and track record. $0.01

ParametersJSON Schema
NameRequiredDescriptionDefault
coinNoCoin symbol (e.g. BTC)BTC
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds context by specifying the cost ('$0.01'), which is useful for usage decisions. However, it lacks details on rate limits, response format, or error handling, limiting its added value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in a single sentence followed by cost. It avoids unnecessary details, though it could be slightly more structured (e.g., separating functionality from cost). Overall, it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (ML signals with multiple details), annotations provide good behavioral coverage, but there's no output schema. The description mentions key components (V3 model, variant comparison, etc.) but doesn't explain the return format or data structure, leaving gaps for an agent to interpret results. It's adequate but incomplete for full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'coin' well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, such as examples or constraints. Baseline score of 3 is appropriate since the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: provides 'Single coin ML signal with V3 model details, variant comparison, regime, and track record.' It specifies the resource (single coin) and verb (get ML signal with details), though it doesn't explicitly differentiate from siblings like 'get_ml_signal' or 'get_signal_feed' beyond mentioning 'single coin.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It mentions 'single coin' but doesn't compare to siblings such as 'get_signal_feed' (likely for multiple coins) or 'get_ml_signal' (possibly general ML signals). No explicit when/when-not instructions or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_signalsTrading SignalsC
Read-onlyIdempotent
Inspect

Get real-time trading signals — funding extreme, liquidation cascade, confluence. $0.005

ParametersJSON Schema
NameRequiredDescriptionDefault
symbolNoFilter by coin symbol (BTC, ETH, SOL)
severityNoMinimum severity: low, medium, high, critical
signal_typeNoFilter by type: confluence, funding_extreme, liquidation_cascade, volume_anomaly
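Because all three filters are optional enums, a small builder can keep payloads clean. The severity levels and signal types below are taken verbatim from the parameter table above; the helper name is invented for illustration.

```python
# Hypothetical filter builder for get_signals. Enum values mirror the
# parameter table; omitted filters are dropped rather than sent as null.
SEVERITIES = ("low", "medium", "high", "critical")
SIGNAL_TYPES = ("confluence", "funding_extreme",
                "liquidation_cascade", "volume_anomaly")

def signals_args(symbol=None, severity=None, signal_type=None):
    if severity is not None and severity not in SEVERITIES:
        raise ValueError(f"severity must be one of {SEVERITIES}")
    if signal_type is not None and signal_type not in SIGNAL_TYPES:
        raise ValueError(f"signal_type must be one of {SIGNAL_TYPES}")
    candidates = {"symbol": symbol, "severity": severity,
                  "signal_type": signal_type}
    return {k: v for k, v in candidates.items() if v is not None}

print(signals_args(symbol="BTC", severity="high"))
```

Validating the enums locally catches typos like "med" before the call is billed.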
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With zero annotations, the description carries full disclosure burden. It adds temporal context ('real-time'), enumerates signal categories, and hints at cost ('$0.005'), but omits rate limits, caching behavior, return format, and whether signals are aggregated or per-symbol.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single sentence with front-loaded verb. Every element serves a purpose: the em-dash enumerates signal types and the trailing price indicates cost. Structure is efficient though '$0.005' placement slightly obscures its meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero annotations, 0% schema coverage, and no output schema, the description fails to document parameter semantics, filtering logic, or response structure. Critical gaps remain for a tool with moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for 3 parameters (symbol, severity, signal_type). Description mentions signal subtypes ('funding extreme', etc.) that likely map to signal_type but doesn't explicitly document severity levels or symbol format expectations, providing minimal compensation for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action 'Get' and resource 'real-time trading signals', listing concrete subtypes (funding extreme, liquidation cascade, confluence) that differentiate it from generic siblings like get_ml_signal or get_funding. However, boundary versus get_signal_backtest remains slightly ambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus get_ml_signal, get_anomalies, or get_funding. No prerequisites, filtering recommendations, or alternative tool suggestions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stablecoinsStablecoin DataA
Read-onlyIdempotent
Inspect

Get stablecoin market data — USDT/USDC supply, dominance, mint/burn activity. $0.005

ParametersJSON Schema
NameRequiredDescriptionDefault
detailNoInclude per-chain breakdown
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses what specific metrics are returned (supply, dominance, mint/burn activity) which is valuable behavioral context. However, with no annotations provided, the description fails to explain the cryptic '$0.005' suffix (likely cost) or disclose read-only status, rate limits, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the specific resource and metrics. Highly efficient structure. Minor deduction for the unexplained '$0.005' trailing element which creates ambiguity without context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple tool: it explains what data categories are retrieved to compensate for the lack of an output schema. However, the unexplained pricing suffix and absence of return structure details leave gaps in agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single optional 'detail' flag is documented in the schema ('Include per-chain breakdown'), and its effect is self-explanatory, leaving nothing meaningful for the description to add.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specifies exact verb (Get), resource (stablecoin market data), and scope (USDT/USDC supply, dominance, mint/burn). The mention of specific metrics clearly distinguishes it from sibling tools like get_prices (general pricing) or get_onchain_btc (Bitcoin-specific).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use (when needing supply, dominance, or mint/burn metrics for major stablecoins). While it does not explicitly name alternatives to avoid, the specific data types listed make the tool's purpose distinct from general market data tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_strategies · Strategy Performance · grade A
Read-only · Idempotent

Get strategy performance metrics — win rate, Sharpe, drawdown for all active strategies. $0.05

Parameters (JSON Schema)
strategy (optional): Filter by strategy name
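Since the schema shows one optional filter, the difference between "all active strategies" and a filtered call comes down to the arguments object. A minimal sketch of the MCP tools/call request shape, assuming the tool name from this page ("momentum_v1" is a hypothetical strategy name, and the helper itself is illustrative):

```python
import json

def make_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# No filter: the server returns metrics for all active strategies.
all_strategies = make_tool_call("get_strategies", {})

# With the optional filter ("momentum_v1" is a hypothetical name).
one_strategy = make_tool_call("get_strategies", {"strategy": "momentum_v1"})

print(json.dumps(one_strategy, indent=2))
```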
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover the safety profile (read-only, non-destructive, idempotent), the description adds critical behavioral context: the specific metrics returned (win rate, Sharpe, drawdown) and the explicit cost ('$0.05') which is not present in annotations or schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: front-loaded purpose, specific metrics enumerated via em-dash, scope clarified ('all active strategies'), and cost appended. Zero wasted words; every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with one optional parameter, the description adequately compensates for the missing output schema by listing the specific metrics returned and disclosing the $0.05 cost. Combined with comprehensive annotations, no additional description is necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (one optional 'strategy' parameter fully documented as 'Filter by strategy name'), the schema carries the semantic load. The description adds context by implying the default behavior returns data for 'all active strategies' when no filter is applied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), resource ('strategy performance metrics'), and specific data points returned (win rate, Sharpe, drawdown). However, it lacks explicit differentiation from siblings like `get_track_record` or `run_strategy_test`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like `run_backtest`, `run_strategy_test`, or `get_track_record`. It does not indicate prerequisites or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_subnet_leaderboard · Subnet Leaderboard · grade A
Read-only · Idempotent

Top-N miners for any Bittensor subnet ranked by incentive. Returns rank, uid, hotkey, stake, trust, incentive, emission, consensus, dividends, validator_permit per miner. On-chain metagraph data. Operator-UID excluded from SN123 output. $0.005

Parameters (JSON Schema)
limit (optional): Top-N miners (1-256)
netuid (required): Subnet netuid
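With one required and one optional parameter, an agent can pre-validate arguments against the documented 1-256 range before spending the $0.005 call. A sketch of such a client-side check (the helper and the fallback limit of 10 are assumptions, not documented server behavior):

```python
def validate_leaderboard_args(args):
    """Check get_subnet_leaderboard arguments: netuid required, limit optional (1-256)."""
    if "netuid" not in args:
        raise ValueError("netuid is required")
    if not isinstance(args["netuid"], int) or args["netuid"] < 0:
        raise ValueError("netuid must be a non-negative integer")
    limit = args.get("limit", 10)  # assumed fallback; the server default is undocumented
    if not 1 <= limit <= 256:
        raise ValueError("limit must be in 1-256")
    return {"netuid": args["netuid"], "limit": limit}

print(validate_leaderboard_args({"netuid": 123, "limit": 25}))
```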
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it specifies that operator-UID is excluded from SN123 output, mentions on-chain metagraph data source, and includes a cost ($0.005) which helps the agent understand operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose and return data, data source and exclusions, and cost. Each sentence adds meaningful information with minimal waste, though the cost mention could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with comprehensive annotations and full schema coverage, the description provides good contextual completeness. It explains what data is returned, source constraints, and operational cost. The main gap is lack of output schema, but the description compensates by listing return fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (limit and netuid). The description doesn't add parameter-specific information beyond what's in the schema, so it meets the baseline expectation without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving top-N miners for any Bittensor subnet ranked by incentive. It specifies the exact data returned (rank, uid, hotkey, etc.) and distinguishes it from siblings by focusing on subnet leaderboard data rather than other Bittensor metrics like validators or subnets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (on-chain metagraph data, operator-UID exclusion for SN123) but doesn't explicitly state when to use this tool versus alternatives like get_bittensor_validators or get_subnets. It mentions a cost ($0.005) which provides some operational guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_subnets · Bittensor Subnets · grade A
Read-only · Idempotent

List all tracked Bittensor subnets with live metagraph metadata — netuid, neuron count, active miners, total stake, gini incentive, top-5 share, alpha price. Includes network-wide subnet count + current block from our own Subtensor lite node. $0.01

Parameters (JSON Schema)
No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the data source ('our own Subtensor lite node'), cost ('$0.01'), and the scope of metadata included, which helps the agent understand the tool's behavior and limitations without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by specific data details and additional context, all in a single, efficient sentence. Every part adds value, such as listing metadata fields, mentioning the data source, and noting cost, with no redundant or wasted information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, rich annotations, and no output schema, the description provides strong context by detailing the returned data and source. However, it does not explain the output format (e.g., structure of the list) or potential limitations (e.g., data freshness), which could be helpful for an agent invoking the tool, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema declares no parameters, so there is nothing for the schema to document. The description aligns with this by listing returned data without mentioning any inputs. However, it does not explicitly state 'no parameters required,' leaving a minor gap in clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all tracked Bittensor subnets'), specifying the exact data returned ('live metagraph metadata — netuid, neuron count, active miners, total stake, gini incentive, top-5 share, alpha price') and additional network-wide information ('subnet count + current block from our own Subtensor lite node'). It distinguishes itself from sibling tools like 'get_subnet_leaderboard' by focusing on comprehensive metadata listing rather than ranking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it provides 'live metagraph metadata' and network-wide data, suggesting it's for real-time monitoring or analysis of Bittensor subnets. However, it does not explicitly mention when to use this tool versus alternatives like 'get_subnet_leaderboard' or 'get_bittensor', nor does it specify exclusions or prerequisites, leaving some ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_subtensor · Subtensor Network · grade B
Read-only · Idempotent

Get Bittensor subtensor chain status from own lite node — block height, peers, sync. $0.01

Parameters (JSON Schema)
detail (optional): Include subnet list
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only/idempotent safety, so the description adds valuable context: the data source (lite node), specific returned fields (block height, peers, sync), and explicit cost ($0.01) not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently front-loaded with the action and scope; the em-dash concisely lists return fields. The '$0.01' is terse but earns its place as cost information, though labeling it explicitly would improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 1-parameter tool, the description adequately compensates for the missing output schema by listing return values and noting cost. However, it could briefly mention the subnet list expansion behavior in the prose for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage and only one boolean parameter, the schema fully documents the 'detail' option ('Include subnet list'). The description does not add parameter semantics beyond the schema, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb (Get), the exact resource (Bittensor subtensor chain status), and the source (own lite node), distinguishing it from generic network tools. However, it doesn't explicitly differentiate from the sibling 'get_bittensor' tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '$0.01' provides cost awareness for the open-world operation, but there is no guidance on when to use this versus 'get_bittensor' or other network tools, and no mention of when to enable the 'detail' parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tao_burn_history · TAO Burn History · grade A
Read-only · Idempotent

Subnet-creation burn cost history. Rising burn = investor appetite for new subnets, falling = cooling. Returns last N hours of burn values with rolling min/max/avg. Source: our Subtensor node. $0.03

Parameters (JSON Schema)
hours (optional): Look-back hours (1-720)
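The "rolling min/max/avg" mentioned in the description is ordinary window statistics over the hourly burn series. A sketch of the computation over made-up burn values (the 3-hour window and the numbers are illustrative, not server output):

```python
def rolling_stats(values, window):
    """Rolling min/max/avg over a series, one result per full window."""
    out = []
    for i in range(len(values) - window + 1):
        chunk = values[i : i + window]
        out.append({
            "min": min(chunk),
            "max": max(chunk),
            "avg": sum(chunk) / window,
        })
    return out

burns = [120.0, 135.0, 128.0, 150.0, 160.0]  # illustrative hourly TAO burn values
print(rolling_stats(burns, window=3))
```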
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this by explaining the economic interpretation of burn trends, specifying the data source ('our Subtensor node'), and mentioning cost ('$0.03'), which aids in understanding the tool's practical use and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by interpretive context, data details, and cost, all in three concise sentences with no wasted words, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity, rich annotations covering safety and behavior, and no output schema, the description is mostly complete. It explains what the tool does, why it matters, and practical details, though it could briefly mention the format of returned data (e.g., time-series structure) for full clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the 'hours' parameter. The description adds minimal semantics by referencing 'last N hours' and burn values, but does not provide additional details beyond what the schema already specifies, such as format or implications of the rolling statistics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('returns last N hours of burn values') and resources ('subnet-creation burn cost history'), distinguishing it from siblings by focusing on TAO burn metrics rather than general blockchain data, prices, or other analytics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage ('Rising burn = investor appetite for new subnets, falling = cooling') and implies this tool is for analyzing burn trends, but does not explicitly state when to use alternatives or exclude specific scenarios, such as comparing with other subnet-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_token_unlocks · Token Unlocks · grade A
Read-only · Idempotent

Get upcoming token unlock schedule — cliff/linear vesting, USD value, supply impact. $0.005

Parameters (JSON Schema)
days (optional): Upcoming days to show (1-90)
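The cliff/linear distinction in the description determines how the unlock amount grows over time: a cliff releases everything at once, a linear schedule releases evenly. A sketch with illustrative figures (the functions and numbers are assumptions, not the server's model):

```python
def unlocked_cliff(total, cliff_month, month):
    """Cliff vesting: everything unlocks at the cliff month."""
    return total if month >= cliff_month else 0.0

def unlocked_linear(total, start_month, vest_months, month):
    """Linear vesting: tokens unlock evenly over vest_months."""
    elapsed = max(0, min(month - start_month, vest_months))
    return total * elapsed / vest_months

# 1,000,000 tokens: cliff at month 12 vs a 24-month linear vest (illustrative).
print(unlocked_cliff(1_000_000, 12, 11), unlocked_cliff(1_000_000, 12, 12))
print(unlocked_linear(1_000_000, 0, 24, 6))
```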
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, allowing the description to focus on content semantics. It effectively discloses what data is returned (cliff vs linear vesting, USD value, supply impact) since no output schema exists, providing necessary behavioral context about the tool's output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with information front-loaded. The em-dash list efficiently enumerates return data types. The '$0.005' fragment at the end is structurally ambiguous (likely cost hint) but does not significantly burden the description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single optional parameter and lack of output schema, the description adequately covers the tool's scope by enumerating the specific financial metrics and vesting types returned. For a read-only data retrieval tool, this is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'days' parameter fully documented ('Upcoming days to show'). The description mentions 'upcoming' which aligns with the parameter but adds no additional syntax, constraints, or usage guidance beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('token unlock schedule') and details specific data points returned (cliff/linear vesting, USD value, supply impact), distinguishing it from sibling price/market tools. However, the trailing '$0.005' fragment is unexplained and slightly confusing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. With many sibling data tools (get_prices, get_ohlcv, get_market_score), the description does not clarify why one would select this specific tool over alternatives, though the content (vesting schedules) implies a specific use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_track_record · Track Record · grade A
Read-only · Idempotent

Get verified ML prediction track record — accuracy, win rate, historical predictions with outcomes. $0.005

Parameters (JSON Schema)
days (optional): History days (1-365)
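"Accuracy, win rate, historical predictions with outcomes" implies records pairing a prediction with its resolved result. A sketch of how a win rate falls out of such records (the field names are assumptions; the server's payload shape is undocumented):

```python
def win_rate(records):
    """Fraction of closed predictions whose outcome matched the predicted direction."""
    closed = [r for r in records if r["outcome"] is not None]
    if not closed:
        return None
    wins = sum(1 for r in closed if r["predicted"] == r["outcome"])
    return wins / len(closed)

history = [  # illustrative records, not real server output
    {"coin": "BTC", "predicted": "up", "outcome": "up"},
    {"coin": "ETH", "predicted": "down", "outcome": "up"},
    {"coin": "SOL", "predicted": "up", "outcome": None},  # still open
]
print(win_rate(history))
```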
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, so the score rests on what the description adds: it discloses critical cost information ($0.005), but omits authorization requirements, rate limits, and data freshness details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: front-loaded action, em-dash separator for specific metrics, trailing cost disclosure. No filler words; every token conveys purpose, output structure, or cost.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Absent output schema and annotations, the description compensates adequately by enumerating return contents (accuracy, win rate, outcomes) and cost. Slightly short of perfect only because verification criteria and temporal scope remain unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

A single optional 'days' parameter is fully documented in the schema ('History days (1-365)'); the description appropriately focuses on return value semantics rather than duplicating parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' and resource 'ML prediction track record' with precise output metrics (accuracy, win rate, historical predictions). Distinguishes from siblings like get_ml_signal (current) and run_backtest (simulation) by emphasizing verified historical performance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Offers implied usage via specificity—appropriate when seeking historical accuracy verification—but lacks explicit when/when-not guidance or named alternatives (e.g., 'use get_ml_signal for current predictions instead').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_utxo_set · UTXO Analysis · grade B
Read-only · Idempotent

Get BTC UTXO set analysis — total UTXOs, supply distribution, value bands. $0.01

Parameters (JSON Schema)
days (optional): History days (1-30)
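The "value bands" in the description are buckets of the UTXO set by coin amount. A sketch of how such a distribution could be derived (band edges and UTXO amounts are illustrative, not the server's actual bands):

```python
def band_distribution(utxo_values, edges):
    """Count UTXOs per value band defined by ascending band edges."""
    labels = [f"<{edges[0]}"] + [
        f"{lo}-{hi}" for lo, hi in zip(edges, edges[1:])
    ] + [f">={edges[-1]}"]
    counts = {label: 0 for label in labels}
    for v in utxo_values:
        idx = sum(v >= e for e in edges)  # how many band edges v clears
        counts[labels[idx]] += 1
    return counts

utxos = [0.004, 0.5, 2.3, 11.0, 0.02]  # BTC amounts, illustrative
print(band_distribution(utxos, edges=[0.01, 1, 10]))
```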
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds value by specifying what metrics are returned (supply distribution, value bands) but does not address data freshness, sourcing, or the implications of openWorldHint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely brief and front-loaded with the action and resource. However, the inclusion of '$0.01' at the end appears to be an error or pricing artifact that wastes space and may confuse the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one optional parameter and no output schema, the description adequately specifies what analysis is returned (total UTXOs, supply distribution, value bands), providing sufficient context for tool selection despite not describing the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'days' fully documented as 'History days (1-30)'. The description does not mention the parameter, but with complete schema coverage, no additional parameter guidance is necessary, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool 'Get BTC UTXO set analysis' with specific outputs (total UTXOs, supply distribution, value bands), distinguishing it from sibling tools like get_btc_onchain or get_blocks. However, the trailing '$0.01' appears to be noise or metadata leakage that slightly detracts from clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like get_btc_onchain or get_blocks, nor does it mention prerequisites or specific use cases for UTXO analysis versus general on-chain data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_flow · Whale Tracking · grade C
Read-only · Idempotent

Get whale movement tracking — large transactions, exchange flows, accumulation patterns. $0.01

Parameters (JSON Schema)
chain (optional, default BTC): Chain to track: BTC, solana, ethereum
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, but the description only hints at data content (the transaction types tracked) without explaining critical behaviors like data freshness, pagination, rate limits, or return format. The '$0.01' text is unexplained; if it indicates cost or a minimum threshold, it is insufficiently specified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description contains the unexplained fragment '$0.01', which appears to be noise, metadata, or a critical value that is improperly contextualized. This stray fragment prevents the description from being considered well-structured despite its short length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's single parameter with zero schema documentation and the absence of an output schema or annotations, the description should have documented the parameter semantics and basic output characteristics. It provides domain context but leaves critical implementation gaps that would hinder correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool description never mentions the 'chain' parameter, its purpose, or its effect on the returned data. The agent must rely entirely on the schema ('Chain to track: BTC, solana, ethereum', default 'BTC'), which lists supported chains but gives no guidance on when to choose one over another or how results differ per chain.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the resource (whale movement tracking) and details the data types covered (large transactions, exchange flows, accumulation patterns), providing clear domain context. However, the unexplained '$0.01' creates confusion, and it fails to distinguish this 'get' operation from the sibling 'track_whale_flow' tool or explain when to prefer one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided for when to use this tool versus alternatives like 'track_whale_flow', 'track_exchange_flows', or 'browse_chain_activity'. There are no prerequisites, filtering recommendations, or use-case scenarios mentioned that would help an agent select this tool correctly from the extensive sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_backtest · Signal Backtest · grade C
Read-only · Idempotent

Get historical signal backtest — accuracy, timing analysis, out-of-sample validation. $0.10

Parameters (JSON Schema)

Name    Required    Description                         Default
coin    No          Coin to backtest (BTC, ETH, SOL)    BTC
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full disclosure burden. It successfully discloses cost ($0.50) and key output metrics (win rate, Sharpe ratio, max drawdown), but omits execution details like duration, sync/async nature, idempotency, or side effects that would be crucial for a backtest operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely tight structure with zero waste. Every element earns its place: action, outputs, valid inputs, and cost. Front-loaded with the dash format separating operation from details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex operation with zero schema coverage, no annotations, and no output schema, the description is incomplete. Critical gaps include: missing sibling differentiation (run_strategy_backtest conflict), unexplained parameters (coin, days), and no execution model guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must compensate heavily. It lists specific strategy values (analyst_v3, etc.), which helps that parameter, but it provides no semantic information for the 'coin' or 'days' parameters, leaving two-thirds of the interface undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (run) and resource (strategy backtest) plus output metrics, but fails to differentiate from sibling 'run_strategy_backtest' which has nearly identical naming and function. The strategy list provides some specifics but doesn't resolve the critical sibling ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists valid strategies (analyst_v3, funding_monitor, ml_ensemble) implying when to use, but provides no explicit guidance on when to choose this tool versus similar siblings like 'run_strategy_backtest' or 'backtest_signal'. No prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_strategy_test (Strategy Backtest) · Grade: A

Run strategy backtest — win rate, Sharpe ratio, max drawdown. Strategies: analyst_v3, funding_monitor, ml_ensemble. $0.50

Parameters (JSON Schema)

Name        Required    Description                                          Default
coin        No          Coin to backtest (BTC, ETH, SOL)                     BTC
days        No          Backtest period in days (7-365)
strategy    No          Strategy: analyst_v3, funding_monitor, ml_ensemble   analyst_v3
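The metrics this description promises are standard and easy to illustrate. A minimal sketch, with an invented per-period return series, of how win rate, Sharpe ratio, and max drawdown are typically computed (the server's actual methodology is not documented, so this is an assumption):

```python
# Compute the three headline backtest metrics from a list of per-period
# fractional returns. The return series below is made up for illustration.

def backtest_metrics(returns):
    n = len(returns)
    # Win rate: fraction of periods with a positive return.
    win_rate = sum(1 for r in returns if r > 0) / n
    # Sharpe ratio: mean return over return volatility (per-period,
    # not annualized, and with a zero risk-free rate assumed).
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    sharpe = mean / var ** 0.5 if var > 0 else 0.0
    # Max drawdown: largest peak-to-trough drop of the cumulative equity curve.
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        max_dd = max(max_dd, 1 - equity / peak)
    return win_rate, sharpe, max_dd

print(backtest_metrics([0.02, -0.01, 0.03, -0.02, 0.01]))
```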
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate execution (readOnly=false, idempotent=false). Description adds critical behavioral context not in annotations: explicit cost '$0.50' and specific return metrics (win rate, Sharpe, drawdown). Does not address idempotency implications or open-world limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: lead verb phrase, em-dash list of outputs, period-separated strategy list and cost. Every token conveys essential information with zero redundancy. Well front-loaded with action and outcomes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter tool with 100% schema coverage and no output schema: describes the computed outputs (metrics) and cost. Missing sibling differentiation and failure modes (e.g., what happens with insufficient data).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with complete descriptions for the coin, days, and strategy parameters. The description duplicates the strategy enum values already present in the schema but adds no additional semantic depth (e.g., what the strategies do, or date format details). A baseline score is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Run' + resource 'strategy backtest' with clear output metrics (win rate, Sharpe ratio, max drawdown). Lists available strategies. However, fails to distinguish from sibling tool 'run_backtest' which has an ambiguously similar name and purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the similar 'run_backtest' sibling, nor any mention of prerequisites (e.g., needing a strategy selected) or cost considerations beyond the price point. Simply lists available strategies without selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
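One way to supply the missing sibling differentiation is to extend the existing description with an explicit "use X instead of Y when Z" clause. A hypothetical rewrite; the added sentence is invented, while the tool names, metrics, strategy list, and price come from this page:

```python
# Hypothetical improved description for run_strategy_test. Everything before
# "Use this" is the current description verbatim; the differentiation
# sentence is an invented example of the pattern the review recommends.
IMPROVED_DESCRIPTION = (
    "Run strategy backtest — win rate, Sharpe ratio, max drawdown. "
    "Strategies: analyst_v3, funding_monitor, ml_ensemble. "
    "Use this to evaluate a named trading strategy; use run_backtest "
    "instead to validate a signal feed's historical accuracy. $0.50"
)
```

A single added clause like this resolves the sibling ambiguity the Purpose and Usage Guidelines dimensions flag, at a cost of roughly twenty tokens.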

scan_arbitrage (Funding Arbitrage) · Grade: A
Read-only · Idempotent

Get funding rate arbitrage opportunities — cross-exchange spreads, OI imbalance signals. $0.02

Parameters (JSON Schema)

Name          Required    Description                   Default
min_spread    No          Min funding spread to show
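The scan this tool performs can be sketched in a few lines. A minimal illustration, with invented exchange names and funding rates, of how cross-exchange spreads are plausibly filtered against a min_spread threshold (the server's actual logic is not documented):

```python
# Compare funding rates for the same coin across exchanges and keep pairs
# whose absolute spread clears min_spread. All rates here are invented.
from itertools import combinations

def scan_arbitrage(funding_rates, min_spread=0.0):
    """funding_rates: {coin: {exchange: rate}} -> spread opportunities."""
    opportunities = []
    for coin, by_exchange in funding_rates.items():
        for (ex_a, r_a), (ex_b, r_b) in combinations(by_exchange.items(), 2):
            spread = abs(r_a - r_b)
            if spread >= min_spread:
                # In practice: long the cheaper-funding venue, short the richer.
                opportunities.append(
                    {"coin": coin, "pair": (ex_a, ex_b), "spread": spread}
                )
    return sorted(opportunities, key=lambda o: o["spread"], reverse=True)

rates = {"BTC": {"ExA": 0.0001, "ExB": 0.0005, "ExC": 0.0002}}
print(scan_arbitrage(rates, min_spread=0.00025))
```

Setting min_spread acts purely as a noise filter here, which matches the schema's "Min funding spread to show" wording; the tool returns opportunities sorted by attractiveness rather than raw rate data.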
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations: '$0.02' likely indicates cost/pricing information, and 'OI imbalance signals' clarifies specific signal types returned. Annotations cover safety (readOnly, idempotent), so the description appropriately supplements with operational/cost context rather than repeating safety flags.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: single sentence front-loaded with action ('Get funding rate arbitrage opportunities'), followed by em-dash clarification of specific signal types, and ending with cost indicator ('$0.02'). No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a read-only scan tool with one optional parameter and no output schema. Description covers purpose, specific signal types (spreads, OI imbalances), and cost. Could benefit from explicitly stating it returns a list of opportunities, but sufficient given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage ('Min funding spread to show'), establishing solid baseline. Description mentions 'cross-exchange spreads' which semantically contextualizes the min_spread parameter, but doesn't elaborate on parameter format, units, or usage beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Get funding rate arbitrage opportunities' provides clear verb+resource, while 'cross-exchange spreads, OI imbalance signals' distinguishes it from sibling tools like get_funding_rates or get_open_interest that likely return raw market data rather than arbitrage opportunities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context by specifying 'arbitrage opportunities' (suggesting use for trading opportunities rather than raw data), but lacks explicit when-to-use guidance or differentiation from get_funding_rates despite having many similar siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
