agentdata-mcp

Server Details

Crypto market data for AI agents via x402. 16 tools: prices, funding, DeFi yields, arbitrage, TA.

Status: Healthy
Transport: Streamable HTTP
Repository: speteai/agentdata-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.8/5 across 16 of 16 tools scored. Lowest: 1.9/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting specific crypto market data aspects like arbitrage, prices, yields, sentiment, or technical indicators. There is no overlap in functionality; for example, get_crypto_prices focuses on real-time prices, while get_technical_indicators provides analysis metrics, preventing agent confusion.

Naming Consistency: 5/5

All tool names follow a consistent 'get_' prefix with descriptive nouns (e.g., get_arbitrage_opportunities, get_base_activity), using snake_case uniformly. This predictable pattern makes it easy for agents to understand and navigate the toolset without naming conflicts.

Tool Count: 4/5

With 16 tools, the count is slightly high but reasonable for a comprehensive crypto data server covering diverse aspects like arbitrage, yields, sentiment, and technical analysis. It avoids being overwhelming by maintaining clear scopes, though it sits just above the typical range for well-scoped sets (3-15 tools).

Completeness: 5/5

The toolset provides complete coverage for crypto market data analysis, including real-time prices, historical data, arbitrage, yields, sentiment, technical indicators, and risk metrics like liquidation levels. There are no obvious gaps; agents can perform end-to-end analysis without dead ends in this domain.

Available Tools

16 tools
get_arbitrage_opportunities: C

Cross-exchange spreads MEXC/Binance/Bybit/OKX ($0.003 USDC)

Parameters: none
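Since the tool declares no parameters, an MCP tools/call request needs only the tool name. The sketch below shows what the JSON-RPC payload might look like; it is illustrative only, and the x402 payment handshake is assumed to happen at the HTTP transport layer, so it is not shown.

```python
import json

# Illustrative JSON-RPC 2.0 payload an MCP client would send over
# Streamable HTTP to invoke a zero-parameter tool. The x402 payment
# step (HTTP 402 challenge + payment header) is assumed to be handled
# by the transport and does not appear in the payload itself.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_arbitrage_opportunities",
        "arguments": {},  # the tool declares no parameters
    },
}

print(json.dumps(request, indent=2))
```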

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It names the exchanges and the per-call price, but gives no detail on behavior such as data sources, update frequency, rate limits, or error handling. This leaves significant gaps in understanding how the tool operates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that conveys the key information (exchanges and price) without waste. It could be slightly more structured by including a verb, but it's appropriately sized and front-loaded with essential details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and moderate complexity (spanning multiple exchanges), the description is incomplete. It doesn't explain return values, data format, or behavioral traits, leaving the agent with too little information to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema coverage, so no parameter documentation is needed. The description adds context about the exchanges and the per-call price, which is helpful though not required. The zero-parameter baseline of 4 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description says the tool surfaces cross-exchange spreads between specific exchanges (MEXC/Binance/Bybit/OKX) for a per-call price ($0.003 USDC), which conveys the general purpose. However, it lacks a clear verb (e.g., 'fetch' or 'calculate') and doesn't distinguish itself from siblings like get_crypto_prices or get_dex_vs_cex, so it stays somewhat vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description implies it's for arbitrage opportunities across exchanges, but it doesn't specify scenarios, prerequisites, or exclusions, leaving the agent to infer usage without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_base_activity: D

Base network TPS + block activity ($0.002 USDC)

Parameters: none

Behavior: 1/5

With no annotations, the description carries the full burden of behavioral disclosure but adds little. The '$0.002 USDC' figure hints at a paid service or rate limit, but the description never says what it covers operationally (per call? subscription?), and response format, error handling, and data freshness go unmentioned.

Conciseness: 2/5

The description is brief to the point of under-specification. The single phrase 'Base network TPS + block activity ($0.002 USDC)' mixes purpose with cost and provides too little clarity to earn its brevity.

Completeness: 2/5

Given the implied complexity of network activity metrics and the absence of annotations and an output schema, the description is incomplete. It doesn't say what is returned (TPS values, block details, timestamps), how results are formatted, or what limitations apply; the cost note alone is not enough.

Parameters: 4/5

The tool has 0 parameters and 100% schema description coverage, so no parameter documentation is needed and the description rightly adds none. The zero-parameter baseline of 4 applies.

Purpose: 2/5

The description restates the tool name without specifying the action performed: 'Base network TPS + block activity ($0.002 USDC)' never says whether the tool retrieves, calculates, or monitors these metrics, and the cost reference adds noise rather than clarity. No differentiation from sibling tools is provided.

Usage Guidelines: 1/5

No guidance is given on when to use this tool versus alternatives: no context, prerequisites, or exclusions, and no reference to related siblings like 'get_gas_prices' or 'get_historical' that also touch network activity.

get_correlation: B

30-day price correlation matrix ($0.001 USDC)

Parameters: none

Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. The '$0.001 USDC' cost is useful context, but rate limits, authentication needs, and output shape (e.g., the matrix format) are not covered, a significant gap for a tool with zero annotation coverage.

Conciseness: 5/5

The description is a single efficient sentence with zero waste: appropriately sized, front-loaded, and limited to the core functionality and cost.

Completeness: 2/5

Given the complexity of correlation analysis and the lack of annotations and an output schema, the description is incomplete. It doesn't say which assets the matrix covers, how the 30-day window works, or what format is returned, leaving the agent without enough context.

Parameters: 4/5

The tool has 0 parameters with 100% schema description coverage, so the schema is already fully documented. The description adds no parameter information, which is appropriate; the zero-parameter baseline of 4 applies.

Purpose: 4/5

The description names a specific resource, a '30-day price correlation matrix', and states the cost ('$0.001 USDC'). It stops short of the top score because it doesn't distinguish itself from siblings like get_historical or get_technical_indicators, which also involve price analysis.

Usage Guidelines: 2/5

No guidance is given on when to use this tool versus alternatives: no context, prerequisites, or exclusions, leaving the agent to infer usage from the name alone among many financial data siblings.

get_crypto_prices: B

Real-time prices for BTC, ETH, SOL, BNB, XRP ($0.001 USDC)

Parameters: none

Behavior: 2/5

No annotations are provided, so the description carries the full burden. 'Real-time' and the $0.001 USDC cost add some behavioral context, but rate limits, error handling, data freshness, and response format go unmentioned, leaving significant gaps.

Conciseness: 5/5

A single efficient sentence that front-loads the core functionality ('Real-time prices for BTC, ETH, SOL, BNB, XRP') and includes the cost. Every word earns its place, with no redundancy or fluff.

Completeness: 3/5

For a simple read operation with no parameters, the description is minimally adequate: it lists the supported cryptocurrencies and the cost but omits data sources, update frequency, and example outputs. With no annotations or output schema, more context is needed to be fully helpful.

Parameters: 4/5

The tool has 0 parameters with 100% schema description coverage, so the lack of inputs is fully documented. The description appropriately adds no parameter details; the zero-parameter baseline of 4 applies.

Purpose: 4/5

The description clearly states that the tool returns real-time prices for specific cryptocurrencies (BTC, ETH, SOL, BNB, XRP) at a stated cost ($0.001 USDC). It doesn't, however, differentiate itself from siblings like 'get_market_overview' or 'get_historical' that might also provide price data.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives: no prerequisites or exclusions, and no comparison to siblings like 'get_historical' (historical prices) or 'get_market_overview' (broader market data). The description says what the tool does, not when it's appropriate.

get_defi_yields: B

Top DeFi yields (Aave/Compound/Morpho/Pendle) ($0.002 USDC)

Parameters: none

Behavior: 2/5

No annotations are provided, so the description carries the full burden. The '$0.002 USDC' cost hints at pricing, but data freshness, format, error handling, and whether the call is read-only are all undisclosed. For a tool with zero annotation coverage, this is insufficient.

Conciseness: 4/5

A single efficient sentence that front-loads the core purpose. Separating the cost note would add slight structure, but the size is appropriate and waste is minimal.

Completeness: 2/5

With no annotations, no output schema, and zero parameters, the description is minimal. It says what data is retrieved and what it costs, but not the return format, data scope, or behavioral context, making it incomplete for effective agent use.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description rightly adds none, and the cost note supplies a little context beyond the schema.

Purpose: 4/5

The description clearly states that the tool retrieves 'Top DeFi yields' and names the protocols covered (Aave/Compound/Morpho/Pendle). It doesn't, however, differentiate itself from siblings like get_crypto_prices or get_market_overview that also provide financial data.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives: no context (e.g., investment analysis, risk assessment) or exclusions, and with many siblings providing related financial data, usage remains unclear.

get_dex_vs_cex: C

DEX vs CEX price comparison ($0.003 USDC)

Parameters: none

Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. The '$0.003 USDC' cost is useful context about charges, but data sources, update frequency, error handling, and output format are all omitted, leaving significant gaps.

Conciseness: 4/5

A single efficient sentence stating purpose and cost, front-loaded with no wasted words. It could carry slightly more information without losing its conciseness.

Completeness: 2/5

Given the complexity of price comparison and the lack of annotations and an output schema, the description is incomplete. Which assets are compared, over what timeframe, from which sources, and in what output structure are all left unstated.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately skips parameters, and the cost note adds value beyond the schema.

Purpose: 3/5

The description states a 'DEX vs CEX price comparison' at a stated cost ('$0.003 USDC'), which clarifies the verb (compare) and resource (prices). It doesn't differentiate itself from siblings like 'get_crypto_prices' or 'get_arbitrage_opportunities', leaving scope and methodology ambiguous.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. The cost implies a usage consideration, but contexts, prerequisites, and exclusions relative to siblings like 'get_crypto_prices' or 'get_arbitrage_opportunities' are unspecified.

get_funding_rates: B

Perpetual funding rates with long/short signals ($0.001 USDC)

Parameters: none

Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. The '$0.001 USDC' price hints at cost, but rate limits, authentication, data freshness, and whether the call is read-only go undisclosed, a significant transparency gap.

Conciseness: 5/5

A single sentence that efficiently conveys the core purpose and pricing with no wasted words, front-loaded so an agent can parse it quickly.

Completeness: 3/5

For a simple data retrieval with no parameters and no annotations or output schema, the description is minimally adequate: it names the data returned and the price but not rate limits or data format. Acceptable for a no-parameter tool, with room to improve.

Parameters: 4/5

The tool has 0 parameters with 100% schema description coverage, so the lack of inputs is fully documented. The description adds value by naming the data type ('Perpetual funding rates with long/short signals') and price; the zero-parameter baseline of 4 applies.

Purpose: 4/5

The description clearly states that the tool retrieves 'Perpetual funding rates with long/short signals' at a stated price ('$0.001 USDC'). It stops short of a perfect score because it doesn't distinguish itself from siblings like get_historical or get_market_overview that might also carry funding-related data.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives: no context, prerequisites, or exclusions, and no reference to siblings like get_historical (historical funding rates) or get_market_overview (which could include funding data).

get_gas_prices: B

Multi-chain gas prices Base/ETH/SOL ($0.001 USDC)

Parameters: none

Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. The $0.001 USDC price hints at usage cost, but rate limits, authentication, data freshness, and error conditions are unspecified, leaving significant gaps for a data retrieval tool.

Conciseness: 5/5

A single phrase that efficiently conveys the functionality, the chains covered, and the cost. Every word earns its place, with essential information front-loaded.

Completeness: 3/5

For a low-complexity tool (0 parameters, no output schema, no annotations), the description is minimally adequate: it names the chains and the cost but not the data format, update frequency, or limitations, and without an output schema the return values go unexplained.

Parameters: 4/5

The tool has zero parameters with 100% schema description coverage, so the baseline of 4 applies. The description rightly skips parameters and adds value by naming the chains covered (Base/ETH/SOL) and the cost, which the empty schema cannot capture.

Purpose: 4/5

The description clearly states the purpose: gas prices for multiple chains (Base, ETH, SOL) at a stated cost ($0.001 USDC). It doesn't explicitly differentiate itself from siblings like get_crypto_prices or get_base_activity.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, and no mention of sibling tools or contextual factors that would help an agent choose between this and related retrieval tools like get_crypto_prices or get_base_activity.

get_historical: C

Historical OHLCV candles ($0.005 USDC)

Parameters (JSON Schema)

Name      Required  Default
limit     No
symbol    No        BTCUSDT
interval  No        1d
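The schema suggests all three parameters are optional, with documented defaults for symbol (BTCUSDT) and interval (1d). The sketch below shows a hypothetical tools/call payload overriding those defaults; the ETHUSDT symbol, the 1h interval, and the reading of limit as a candle count are assumptions for illustration, not documented behavior.

```python
import json

# Hypothetical invocation of get_historical overriding the documented
# defaults (symbol=BTCUSDT, interval=1d). "limit" has no documented
# default, so a value is supplied explicitly.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_historical",
        "arguments": {
            "symbol": "ETHUSDT",  # assumed to follow the BTCUSDT naming pattern
            "interval": "1h",     # assumed; only "1d" is confirmed by the schema
            "limit": 100,         # number of candles to return (assumed meaning)
        },
    },
}

print(json.dumps(request, indent=2))
```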
Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. The '$0.005 USDC' cost is valuable pricing context, but the candle format, timestamps, authentication, pagination, and error conditions are all unstated, leaving significant gaps for a data retrieval tool.

Conciseness: 3/5

The description is extremely brief, which would be fine if it were more informative. Given the tool's complexity and the lack of other documentation, the brevity reads as under-specification rather than efficiency; the cost note is useful but the purpose statement is minimal.

Completeness: 2/5

For a three-parameter tool with no annotations, no output schema, and 0% schema description coverage, the description is inadequate. It names the data type and cost but not the parameter meanings, return format, or place among siblings; an agent would have to guess to use it correctly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 3 parameters, the description provides no information about any parameters. It doesn't explain what 'symbol', 'interval', or 'limit' mean, their significance in the context of OHLCV candles, or how they affect the output. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
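The 0% schema coverage gap is concrete: none of symbol, interval, or limit carries a description. A documented input schema might look like the following sketch; the parameter semantics are guesses modeled on typical exchange kline APIs, not on documented server behavior.

```python
# Hypothetical input schema for get_historical with per-parameter
# descriptions. Semantics are modeled on typical exchange kline APIs,
# not on documented server behavior.
input_schema = {
    "type": "object",
    "properties": {
        "symbol": {
            "type": "string",
            "default": "BTCUSDT",
            "description": "Trading pair in exchange notation, e.g. 'ETHUSDT'.",
        },
        "interval": {
            "type": "string",
            "default": "1d",
            "description": "Candle timeframe, e.g. '1m', '1h', '1d'.",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of candles to return.",
        },
    },
    "required": [],
}
```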

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description names a specific financial data resource ('Historical OHLCV candles'), and the verb 'get' is implied by the tool name. It is distinguished from siblings by its focus on historical price data rather than other crypto metrics like sentiment, yields, or volatility. However, the description text itself contains no explicit verb such as 'get' or 'retrieve'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With 15 sibling tools covering various crypto data aspects, there's no indication of when historical OHLCV candles are needed versus other price-related tools like 'get_crypto_prices' or technical analysis tools like 'get_technical_indicators'. No prerequisites, exclusions, or complementary tools are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_liquidation_levels (Grade: B)

Estimated liquidation zones by leverage 5x/10x/20x ($0.002 USDC)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool provides 'estimated' data, implying it's a read-only operation with no destructive effects, but doesn't clarify if it's real-time, historical, or predictive, or if there are rate limits, authentication needs, or data freshness constraints. For a financial data tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that packs in the essentials: what the tool provides (estimated liquidation zones), the key dimensions (leverage levels), and the per-call price. There is no wasted verbiage, and it's front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters, no annotations, and no output schema, the description is minimally adequate. It explains what the tool returns (liquidation zones by leverage tier) and what a call costs, but doesn't cover output format, data sources, or reliability. For a tool with no structured metadata, it meets the bare minimum but lacks depth for confident agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and schema description coverage is 100%, so there are no parameters to document. The description adds value by specifying the leverage levels (5x/10x/20x), which are implicit in the tool's behavior rather than inputs, and by stating the per-call price ($0.002 USDC). This exceeds the baseline of 3 for high schema coverage with no parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to provide estimated liquidation zones at specific leverage levels (5x/10x/20x), with the per-call price noted ($0.002 USDC). It uses specific terminology ('liquidation zones', 'leverage') that distinguishes it from sibling tools focused on arbitrage, prices, yields, etc. However, it doesn't explicitly differentiate itself from every sibling, particularly any that might also provide liquidation-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for usage, or comparisons with sibling tools. The agent must infer usage based on the purpose alone, which is insufficient for optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_overview (Grade: B)

Full market overview with sentiment + arb signals ($0.002 USDC)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a cost ('$0.002 USDC'), which is useful context about potential charges. However, it doesn't describe what 'full market overview' entails, the format of the output, rate limits, or any other behavioral traits beyond the cost implication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single sentence that efficiently communicates the tool's purpose and cost. Every word earns its place, with no wasted information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (market overview with multiple components), lack of annotations, and no output schema, the description is minimally adequate. It covers the purpose and cost but leaves significant gaps in behavioral context (e.g., output format, scope of 'full market overview'). For a tool with no structured output documentation, more detail would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%. Since there are no parameters to document, the description doesn't need to add parameter semantics. The baseline for 0 parameters is 4, as the description appropriately focuses on the tool's purpose rather than parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to provide a 'full market overview with sentiment + arb signals'. It specifies the resource (market overview) and key components (sentiment, arbitrage signals), though it doesn't explicitly differentiate from sibling tools like 'get_sentiment' or 'get_arbitrage_opportunities'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it mentions the cost ('$0.002 USDC'), this doesn't help the agent choose between this tool and its many siblings (e.g., get_sentiment, get_arbitrage_opportunities, get_crypto_prices). No explicit when/when-not or alternative recommendations are included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_sentiment (Grade: C)

Fear & Greed + composite sentiment ($0.001 USDC)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions a cost ('$0.001 USDC'), which hints at potential rate limits or pricing, but doesn't describe what the tool does (e.g., returns a numeric score, text analysis, or composite data), how it behaves (e.g., real-time vs. cached), or any other traits like error handling or data sources. This leaves significant gaps in understanding the tool's operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with a single phrase, but it's not front-loaded with clear purpose. The inclusion of cost ('$0.001 USDC') adds some value but doesn't prioritize essential information like what the tool does. It avoids waste but under-specifies key details, making it less effective than a more structured approach.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (no parameters, no output schema, no annotations), the description is incomplete. It fails to explain what the tool returns (e.g., sentiment scores, data format) or its behavior, leaving the agent unsure of how to interpret results. The cost hint is insufficient to compensate for the lack of output information and behavioral context, making it inadequate for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is acceptable given the lack of parameters. It includes cost information, which provides some context beyond the schema, but this is minimal. A baseline of 4 is appropriate as the schema fully covers the parameters (none exist).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Fear & Greed + composite sentiment ($0.001 USDC)' is vague and tautological. It restates the tool name 'get_sentiment' by mentioning 'sentiment' but doesn't specify what action is performed (e.g., retrieve, calculate, analyze) or what resource is involved. It includes cost information but lacks a clear verb+resource statement, making it difficult to distinguish from sibling tools like get_market_overview or get_technical_indicators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention any context, prerequisites, or exclusions, nor does it reference sibling tools. For example, it doesn't clarify if this is for real-time sentiment, historical data, or how it differs from get_market_overview, leaving the agent with no usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
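A rewrite that supplies the missing verb, resource, and usage guidance might read like the following sketch; the wording is a suggestion, not the server's actual description:

```python
# Illustrative rewrite of the get_sentiment description, adding the verb,
# resource, and usage guidance the review finds missing. Wording is a
# suggestion, not the server's actual text.
description = (
    "Retrieve the current Fear & Greed index plus a composite crypto "
    "sentiment score ($0.001 USDC per call). Use this for a quick "
    "market-mood reading; use get_market_overview when you also need "
    "prices and arbitrage signals."
)
```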

get_stablecoin_health (Grade: B)

USDC/DAI depeg monitoring ($0.001 USDC)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the monitoring function but doesn't describe what the tool returns (e.g., current status, historical alerts, confidence metrics), how frequently it updates, or any limitations. This leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single phrase that efficiently communicates the core function. Every word earns its place, with no redundant information, making it front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of financial monitoring and the lack of annotations or output schema, the description is incomplete. It doesn't explain what data is returned (e.g., binary status, detailed metrics), how to interpret results, or any dependencies. For a tool in a suite with many siblings, more context is needed to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description adds the per-call price ($0.001 USDC), which is useful pricing context, though it belongs to the tool's purpose rather than parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: monitoring USDC/DAI for depeg events, with the per-call price noted ($0.001 USDC). It uses a specific verb ('monitoring') and identifies the resource (the USDC/DAI peg status). However, it doesn't explicitly differentiate itself from sibling tools like 'get_crypto_prices' or 'get_market_overview' that might provide related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this specific depeg monitoring is needed compared to other market data tools in the sibling list, nor does it specify any prerequisites or exclusions for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_support_resistance (Grade: C)

S/R levels via fractal analysis ($0.003 USDC)

Parameters (JSON Schema)

Name      Required  Description  Default
symbol    No                     BTCUSDT

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but provides minimal behavioral insight. It mentions a cost ('$0.003 USDC'), hinting at potential rate limits or paid usage, but doesn't disclose other traits like response time, error handling, or data freshness. No contradictions exist, but critical details like read/write nature or side effects are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very brief—two concise phrases—with no wasted words, and it's front-loaded with the core purpose. Separating the purpose from the cost detail would add a little structure, but overall it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (technical analysis tool), lack of annotations, no output schema, and low parameter coverage, the description is incomplete. It doesn't explain what the tool returns, how results are formatted, or any dependencies. The cost mention adds some context but doesn't suffice for a tool likely producing structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, and the tool description adds no information about the 'symbol' parameter. It doesn't explain what symbols are supported, format requirements, or default behavior. Since schema coverage is low, the description fails to compensate, leaving the parameter undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
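With only the default 'BTCUSDT' to infer from, an agent can at best guess the symbol format. A defensive caller might sanity-check the input locally before spending the $0.003 fee; this sketch assumes the concatenated BASEQUOTE convention the default suggests, which the server does not actually document:

```python
import re

# Assumes symbols follow the concatenated BASEQUOTE convention that the
# 'BTCUSDT' default suggests; the server's accepted formats are undocumented.
SYMBOL_RE = re.compile(r"^[A-Z0-9]{2,10}(USDT|USDC|USD|BTC|ETH)$")

def looks_like_valid_symbol(symbol: str) -> bool:
    """Cheap local sanity check before paying for a tool call."""
    return bool(SYMBOL_RE.fullmatch(symbol))
```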

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool calculates 'S/R levels via fractal analysis,' which indicates it identifies support and resistance levels using a specific technical method. However, it's vague about what 'S/R levels' are (likely support/resistance) and doesn't specify the output format or data source. It distinguishes from siblings by focusing on technical analysis rather than prices or market data, but lacks explicit comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
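The description's 'fractal analysis' is never elaborated. One common reading is Bill Williams' fractal pattern, where a candidate resistance level is a bar whose high exceeds the two bars on either side (and symmetrically for support). A sketch of that interpretation, not the server's confirmed method:

```python
def fractal_highs(highs: list[float]) -> list[int]:
    """Indices where a bar's high exceeds the two bars on each side
    (a Bill Williams fractal) -- candidate resistance levels."""
    return [
        i
        for i in range(2, len(highs) - 2)
        if highs[i] > max(highs[i - 2], highs[i - 1], highs[i + 1], highs[i + 2])
    ]
```

Support levels come from the mirror-image test on lows; the actual server may use a different window or method entirely.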

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get_technical_indicators' or 'get_historical.' The description mentions a cost ('$0.003 USDC'), which implies a usage context related to pricing, but doesn't explain prerequisites, limitations, or ideal scenarios for application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_technical_indicators (Grade: D)

RSI, MACD, Bollinger Bands, ATR ($0.002 USDC)

Parameters (JSON Schema)

Name      Required  Description  Default
symbol    No                     BTCUSDT
interval  No                     1h

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions a cost ($0.002 USDC), which hints at potential rate limits or pricing, but doesn't describe what the tool returns (e.g., values, charts, or signals), error handling, or any other behavioral traits like latency or data freshness. This is insufficient for a tool with no structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short and to the point, listing indicators and cost in a single phrase, which is efficient. However, it's under-specified rather than concise—it lacks necessary details like a verb or usage context, making it feel incomplete rather than optimally structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and low schema coverage, the description is incomplete. It doesn't explain what the tool returns, how to interpret results, or key behavioral aspects. For a tool that likely provides financial data analysis, this leaves significant gaps in understanding its function and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It adds no information about the 'symbol' or 'interval' parameters beyond what the schema provides (e.g., default values or enum options). Without explaining what these parameters mean in context (e.g., symbol format, interval impact on indicators), the agent lacks semantic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists technical indicators (RSI, MACD, Bollinger Bands, ATR) and includes a cost ($0.002 USDC), but it doesn't specify a verb or what the tool actually does with these indicators. It's vague about whether it calculates, retrieves, or analyzes them, and doesn't distinguish from siblings like get_historical or get_volatility which might overlap. This is more of a feature list than a purpose statement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
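For context on what the tool presumably returns for the first listed indicator, here is a minimal RSI computation. It uses simple averages rather than Wilder's smoothed averages, and is not necessarily what the server computes:

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Relative Strength Index over the last `period` price changes,
    using simple averages (Wilder's original uses smoothed averages)."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    recent = deltas[-period:]
    gains = sum(d for d in recent if d > 0)
    losses = sum(-d for d in recent if d < 0)
    if losses == 0:
        return 100.0  # no down moves in the window
    return 100.0 - 100.0 / (1.0 + gains / losses)
```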

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. It doesn't mention context, prerequisites, or exclusions, and with siblings like get_historical or get_volatility that might provide related data, the agent has no help in choosing between them. This leaves usage entirely ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_volatility (Grade: B)

24h volatility for BTC/ETH/SOL ($0.001 USDC)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the per-call price ('$0.001 USDC'), it doesn't describe whether the data is real-time or delayed, what reference window the '24h' covers, whether there are rate limits, authentication requirements, or what format the output takes. For a financial data tool with zero annotation coverage, this leaves significant behavioral questions unanswered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise - a single phrase that communicates the core functionality efficiently. Every element earns its place: '24h volatility' specifies the metric and timeframe, 'BTC/ETH/SOL' identifies the assets, and '($0.001 USDC)' states the per-call price. There's zero wasted verbiage or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and no output schema, the description provides adequate basic information about what data will be returned. However, for a financial data tool with no annotations, it should ideally mention data freshness, source, or limitations. The description covers the 'what' but leaves gaps on operational aspects that would help an agent use it effectively in context with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
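The description also never defines 'volatility'. One common definition is realized volatility, the standard deviation of log returns over the window; the sketch below uses that assumption, while the server's actual formula is undocumented:

```python
import math

def realized_volatility(closes: list[float]) -> float:
    """Standard deviation of log returns over the window -- one common
    definition of realized volatility; the server's formula is undocumented."""
    rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(rets) / len(rets)
    return math.sqrt(sum((r - mean) ** 2 for r in rets) / len(rets))
```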

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema coverage 100%), so the baseline score is 4. The description appropriately doesn't waste space discussing non-existent parameters. The mention of specific cryptocurrencies (BTC/ETH/SOL) and the per-call price provides context about what data will be returned, which is valuable semantic information beyond the empty parameter schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: it provides 24-hour volatility data for specific cryptocurrencies (BTC, ETH, SOL), with the per-call price noted. It names the metric and timeframe ('24h volatility') and identifies the resources (BTC/ETH/SOL). However, it doesn't explicitly distinguish itself from sibling tools like 'get_historical' or 'get_technical_indicators', which might also provide volatility-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools like 'get_historical', 'get_technical_indicators', and 'get_market_overview' that might provide overlapping or related data, there's no indication of when this specific volatility tool is preferred. The description doesn't mention any prerequisites, limitations, or comparative advantages.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
