
x402 Crypto Market Structure

Server Details

Real-time crypto market intelligence for AI agents. Live price, funding rate, open interest, buy/sell ratio, fear/greed, orderflow, and OHLCV history across 20 exchanges and 24 tokens. Address risk scoring included.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 7 of 7 tools scored. Lowest: 3.8/5.

Server Coherence: C
Disambiguation: 2/5

Multiple market analysis tools (market_snapshot, market_orderflow, market_full, market_light) have overlapping functionality and descriptions, making it difficult for an agent to distinguish between them. For example, both market_snapshot and market_orderflow include CVD and liquidation pressure data.

Naming Consistency: 2/5

Tool names lack a consistent pattern: 'market_analyze', 'market_full', 'market_light', 'market_orderflow', and 'market_snapshot' use different suffixes and are mixed with 'address_risk' and 'api_info' which do not follow the market_ prefix.

Tool Count: 4/5

With 7 tools, the count is appropriate for the scope of a crypto market analysis server. It provides a reasonable variety without being overwhelming.

Completeness: 3/5

The server covers address risk, macro analysis, orderflow, and snapshots, but lacks historical data beyond limited context windows, on-chain metrics, order book depth, and any execution capabilities. Gaps exist for a comprehensive market structure server.

Available Tools

7 tools
address_risk: A
Read-only · Idempotent

Before you transact, know who you're dealing with. Paste any wallet address — EVM or Solana, auto-detected — and get entity label (exchange, protocol, flagged mixer), risk level, account age, transaction history, and top holdings. Flags: new_account, unverified_contract, dormant, high_throughput. REST equivalent: POST /analyze/address (0.25 USDC).

Args:
    address: EVM address (0x...) or Solana address (base58)
Parameters (JSON Schema)
    address (required)

Output Schema

    result (required)
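The description says address type is auto-detected (EVM or Solana). A minimal client-side sketch of that kind of detection, useful for validating input before spending 0.25 USDC on a call; the function name and exact length rules are assumptions, not the server's implementation:

```python
import re

# Hypothetical client-side mirror of the auto-detection that address_risk
# performs server-side. Length rules and names are assumptions.
BASE58_ALPHABET = set("123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz")

def detect_address_type(address: str) -> str:
    """Classify an address as 'evm', 'solana', or 'unknown'."""
    # EVM: '0x' followed by exactly 40 hex characters.
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", address):
        return "evm"
    # Solana: base58-encoded 32-byte public key, typically 32-44 characters.
    if 32 <= len(address) <= 44 and set(address) <= BASE58_ALPHABET:
        return "solana"
    return "unknown"
```

A pre-flight check like this avoids paying for calls that would fail on a malformed address.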
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it discloses cost ('0.25 USDC'), specific risk flags (e.g., 'new_account', 'unverified_contract'), and auto-detection of address types. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with key information: purpose, usage context, and key features. Every sentence adds value, such as listing specific flags and cost, with no redundant or verbose content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (risk analysis with multiple outputs), the description is complete: it covers purpose, usage, parameters, behavioral traits (cost, flags), and mentions an output schema exists. With annotations and output schema provided, no additional details are needed for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries full burden. It clearly explains the 'address' parameter as 'EVM address (0x...) or Solana address (base58)', adding essential format details not in the schema. With only one parameter, this is sufficient for effective use.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyze a wallet address (EVM or Solana) to return its entity label, risk level, account age, transaction history, and top holdings. Pairing a specific verb ('get') with concrete resources distinguishes it from sibling tools such as the market analysis and historical data tools by focusing on address risk assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Before you transact, know who you're dealing with.' This provides clear context for usage (pre-transaction risk assessment) and distinguishes it from alternatives by focusing on address analysis rather than market or historical data tools listed as siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

api_info: A
Read-only · Idempotent

REST API access for autonomous agents — pricing, quick start, and migration guide.

Call this when: building a trading bot, deploying an autonomous agent, hitting
the MCP rate limit, or running 24/7 without a human in the loop. The MCP tier
(what you're using now) is free via Smithery, rate-limited to 60 calls/minute
per IP, and good for testing. The REST API is for production: pay per call in
USDC; paid endpoints are rate-limited to 60 calls/minute and 200 calls/hour
per wallet. No API key required.
Parameters (JSON Schema)

No parameters

Output Schema

    result (required)
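The MCP tier is rate-limited to 60 calls/minute per IP. An agent running near that budget can track its own usage with a sliding-window counter; this is an illustrative sketch with assumed names, since the server enforces the limit itself:

```python
from collections import deque

# Client-side sliding-window limiter matching the documented MCP tier limit
# (60 calls/minute per IP). Names are illustrative; the server is authoritative.
class RateLimiter:
    def __init__(self, max_calls: int = 60, window_s: float = 60.0):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()  # timestamps of recent calls

    def allow(self, now: float) -> bool:
        # Evict timestamps older than the window, then check capacity.
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

The same class with `max_calls=200, window_s=3600.0` would track the paid REST tier's per-wallet hourly cap.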
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and behavior. The description adds valuable context beyond this by disclosing rate limits (60 calls/minute per IP on the MCP tier; 60 calls/minute and 200 calls/hour per wallet on paid REST endpoints), authentication needs (no API key required), and pricing details (pay per call in USDC), which are not captured in annotations. There is no contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose and usage guidelines, followed by additional details. It is appropriately sized with two paragraphs, but some sentences could be more streamlined (e.g., the second paragraph is a bit verbose). Overall, it earns its place without significant waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, rich annotations, and an output schema exists, the description provides sufficient context by explaining when to use it, behavioral traits, and comparisons between MCP and REST API. It covers key aspects like pricing and rate limits, making it complete enough for an informational tool, though it could briefly hint at output content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not discuss parameters, which is appropriate. A baseline of 4 is applied as it compensates adequately for the lack of parameters by focusing on usage and behavioral context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides 'REST API access for autonomous agents' with information about 'pricing, quick start, and migration guide,' which gives a general purpose but lacks a specific verb+resource combination. It doesn't clearly distinguish from sibling tools like 'market_analyze' or 'history_1d,' which appear to be data-focused, making the purpose somewhat vague rather than precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Call this when: building a trading bot, deploying an autonomous agent, hitting the MCP rate limit, or running 24/7 without a human in the loop.' It also provides clear alternatives by comparing the MCP tier (free for testing) vs. the REST API (for production), offering explicit guidance on usage context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_analyze: A
Read-only · Idempotent

Is macro with you or against you? Get the current regime (bull/bear/risk_on/risk_off/choppy), directional signal and confidence, and macro context (DXY, VIX, fear/greed) before entering a position. Data-only, no LLM latency. Coverage disclosed per token. REST equivalent: POST /analyze/market (0.25 USDC).

Args:
    token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
           DOT, ARB, SUI, OP, LTC, AMP, ZEC)
    context: Optional historical context window ('7d' or '30d'). Adds percentile rankings.
Parameters (JSON Schema)
    token (required)
    context (optional)

Output Schema

    result (required)
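Under MCP, a call to market_analyze travels as a JSON-RPC tools/call request. A sketch of building that payload from the documented Args, with a placeholder request id and no transport details:

```python
from typing import Optional

# Builds the JSON-RPC "tools/call" envelope defined by the MCP spec for
# market_analyze. Argument names match the documented Args; the id is a
# placeholder, and transport (Streamable HTTP framing) is omitted.
def build_analyze_request(token: str, context: Optional[str] = None) -> dict:
    arguments = {"token": token}
    if context is not None:
        arguments["context"] = context  # '7d' or '30d' adds percentile rankings
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "market_analyze", "arguments": arguments},
    }
```

Omitting the optional `context` key entirely, rather than sending null, lets the server apply its own default.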
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it enumerates the 17 supported tokens (a limitation), notes 'Data-only, no LLM latency' (a performance characteristic), and discloses cost. While annotations cover safety (readOnly, non-destructive, idempotent), the description provides practical constraints that help the agent understand operational boundaries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a purpose statement, key features, limitations, cost, and parameter details in logical order. Every sentence adds value: the opening question establishes context, technical details are grouped, and parameter explanations are clear without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (market analysis with multiple outputs), the description provides complete context: purpose, usage timing, limitations, cost, parameter details, and distinguishes from siblings. With annotations covering safety and an output schema presumably detailing return values, no additional explanation is needed for behavioral or output aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters: 'token' gets a comprehensive list of supported symbols, and 'context' explains its optional nature, valid values ('7d' or '30d'), and effect ('Adds percentile rankings'). This provides complete semantic understanding beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get the current regime, directional signal and confidence, and macro context') and resources ('market analysis for token positions'). It distinguishes from siblings by focusing on regime analysis rather than historical data (history_*), order flow (market_orderflow), or risk assessment (address_risk).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('before entering a position') and provides clear alternatives through sibling tool names. It also specifies what it doesn't do ('Data-only, no LLM latency') and includes cost information ('0.25 USDC'), giving comprehensive usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_full: A
Read-only · Idempotent

Full pre-trade diligence in one call. Gets all market and orderflow data, then makes a grounded judgment: BULLISH/BEARISH/NEUTRAL stance, ACCUMULATION/DISTRIBUTION signal, LOW/MODERATE/HIGH/CRITICAL risk level, and a verdict that cites actual data values — not vibes. Use when you want a single answer rather than assembling the pieces yourself. Coverage disclosed per token. REST equivalent: POST /analyze/full (0.75 USDC).

Args:
    token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
           DOT, ARB, SUI, OP, LTC, AMP, ZEC)
    context: Optional context window ('7d' or '30d').
Parameters (JSON Schema)
    token (required)
    context (optional)

Output Schema

    result (required)
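The stance, signal, and risk-level enums the description promises can be checked client-side before acting on a verdict. A hypothetical validator; the field names (stance, signal, risk_level) are assumptions about the result shape, while the enum values come straight from the description:

```python
# Enum values are taken from the market_full description; the flat result
# field names are assumed, not documented.
STANCES = {"BULLISH", "BEARISH", "NEUTRAL"}
SIGNALS = {"ACCUMULATION", "DISTRIBUTION"}
RISK_LEVELS = {"LOW", "MODERATE", "HIGH", "CRITICAL"}

def validate_verdict(result: dict) -> list:
    """Return a list of field-level problems; empty means the enums check out."""
    problems = []
    if result.get("stance") not in STANCES:
        problems.append(f"unexpected stance: {result.get('stance')!r}")
    if result.get("signal") not in SIGNALS:
        problems.append(f"unexpected signal: {result.get('signal')!r}")
    if result.get("risk_level") not in RISK_LEVELS:
        problems.append(f"unexpected risk_level: {result.get('risk_level')!r}")
    return problems
```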
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the tool's output format (BULLISH/BEARISH/NEUTRAL stance, etc.), emphasizes data-driven verdicts ('cites actual data values — not vibes'), enumerates the 17 supported tokens, and discloses cost ('0.75 USDC'). While annotations cover safety (readOnly, non-destructive), the description enriches understanding of what the tool actually produces.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: first sentence states the core purpose, subsequent sentences detail outputs and usage guidance, then practical constraints (tokens, cost), and finally parameter explanations. Every sentence adds value with zero wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (comprehensive market analysis), rich annotations (readOnly, openWorld, idempotent), and the presence of an output schema, the description provides excellent context. It explains what the tool does, when to use it, what outputs to expect, practical constraints, and parameter meanings - covering all essential aspects without needing to duplicate structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining both parameters: 'token' is described with a list of 17 supported symbols, and 'context' is explained as 'Optional context window' with specific values ('7d' or '30d'). This adds meaningful semantics beyond the bare schema, though it doesn't fully document all parameter constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Full pre-trade diligence in one call' that 'Gets all market and orderflow data, then makes a grounded judgment' with specific outputs (stance, signal, risk level, verdict). It distinguishes itself from siblings like market_snapshot or market_orderflow by emphasizing comprehensive analysis rather than piecemeal data gathering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use when you want a single answer rather than assembling the pieces yourself.' This provides clear guidance to choose this comprehensive tool over more granular siblings like market_orderflow or history_* tools that provide individual data components.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_light: A
Read-only · Idempotent

Structured coverage for ANY listed token — price, momentum, extremes, signals, project info, exchange listings, and optional LLM brief. Broader coverage than market_snapshot (which is limited to 26 supported tokens), shallower data (no orderflow, whale, liquidation, or signal fields). Use for tokens outside the BTC/ETH/SOL/etc. core set. Symbol disambiguation is automatic (top-by-volume match). REST equivalent: POST /data/light (0.05 USDC).

Rate limit: 10 calls / minute per IP (lower than other tools — this fans out to
a paid upstream provider). Cached responses up to 24h are served without
refetching; agents needing fresher data should use the paid REST endpoint.

Args:
    symbol: Token ticker — 1 to 10 alphanumeric characters (e.g. PEPE, WIF,
            FLOKI, BONK). Case-insensitive.
Parameters (JSON Schema)
    symbol (required)
    brief (optional, default: full)

Output Schema

    result (required)
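The symbol constraint (1 to 10 alphanumeric characters, case-insensitive) and the 24h cache horizon both lend themselves to client-side checks. A sketch with illustrative function names:

```python
import re

# Sanity checks matching market_light's documented constraints. Function
# names are illustrative assumptions, not part of the tool's API.
SYMBOL_RE = re.compile(r"^[A-Za-z0-9]{1,10}$")

def normalize_symbol(symbol: str) -> str:
    """Validate a ticker (1-10 alphanumeric chars) and uppercase it."""
    if not SYMBOL_RE.match(symbol):
        raise ValueError(f"invalid symbol: {symbol!r}")
    return symbol.upper()

def is_fresh(response_age_s: float, max_age_s: float = 24 * 3600) -> bool:
    # Past the 24h cache horizon, the docs point to the paid REST endpoint
    # for fresher data.
    return response_age_s <= max_age_s
```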
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, idempotentHint, destructiveHint) already indicate safety. Description adds caching behavior, rate limiting rationale (paid upstream), and automatic symbol disambiguation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise yet thorough: opening sentence defines purpose, then comparison, usage guidance, rate/cache details, and parameter specs. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (covering return data), description covers all necessary context: tool purpose, sibling differentiation, usage caveats, rate/cache behavior, and parameter constraints. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description provides full semantics: character length (1-10), alphanumeric constraint, case-insensitivity, and examples (PEPE, WIF). This compensates entirely for the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool provides structured coverage for ANY listed token, listing specific data categories (price, momentum, etc.). It explicitly differentiates from market_snapshot by noting broader token coverage but shallower data depth.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises use for tokens outside the core set (e.g., BTC/ETH/SOL), compares with market_snapshot, and provides rate limits (10 calls/min) and caching policy (24h) with alternative paid endpoint for fresher data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_orderflow: A
Read-only · Idempotent

Is this move backed by real buyers, or is it two venues painting tape? See CVD direction, buy/sell ratio, whale clustering, liquidation pressure, and how many of 20 live exchanges are accumulating vs distributing. Volume concentration (HHI > 0.6) flags wash trading or thin manipulation. Data-only. Orderflow coverage disclosed per token. REST equivalent: POST /analyze/orderflow (0.50 USDC).

Args:
    token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
           DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE,
           AMP, ZEC)
    context: Optional context window ('7d' or '30d'). Adds percentile rankings.
Parameters (JSON Schema)
    token (required)
    context (optional)

Output Schema

    result (required)
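The wash-trading flag is described as volume concentration HHI > 0.6. The standard Herfindahl-Hirschman index over per-exchange volume shares looks like this; the 0.6 threshold comes from the description, while the helper names are illustrative:

```python
# Standard HHI on volume shares: sum of squared market shares, ranging from
# 1/n (evenly spread across n venues) to 1.0 (all volume on one venue).
def volume_hhi(volumes: list) -> float:
    total = sum(volumes)
    if total == 0:
        return 0.0
    return sum((v / total) ** 2 for v in volumes)

def flags_manipulation(volumes: list, threshold: float = 0.6) -> bool:
    # Per the description, concentration above 0.6 flags wash trading or
    # thin manipulation.
    return volume_hhi(volumes) > threshold
```

Two venues splitting volume evenly give an HHI of 0.5, just under the flag; one venue carrying 90% of volume gives roughly 0.82 and trips it.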
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive traits. The description adds valuable context: it's data-only (no trading), supports 26 tokens, has a REST equivalent with cost (0.50 USDC), and mentions specific analysis metrics like volume concentration (HHI > 0.6) for wash-trading detection. This goes beyond annotations by detailing scope and constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and key metrics, followed by token support and REST details, then parameter explanations. It's appropriately sized with no redundant sentences, though the rhetorical question at the start could be more direct. Each sentence adds value, such as cost and data-only nature.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given annotations cover safety traits, an output schema exists (so return values needn't be explained), and the description adds context on tokens, cost, and analysis metrics, it's fairly complete. However, it could better differentiate from siblings and detail parameter defaults more explicitly, keeping it from a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides a list of 26 supported token symbols and explains that the optional 'context' parameter adds percentile rankings with window options ('7d' or '30d'). This adds meaningful semantics beyond the bare schema, though it doesn't detail parameter interactions or defaults beyond the schema's default for 'context'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes order flow to determine if market moves are backed by real buyers or manipulation, with specific metrics listed (CVD direction, buy/sell ratio, etc.). It distinguishes from siblings by focusing on order flow analysis rather than risk assessment, historical data, or broader market analysis. However, it doesn't explicitly name how it differs from 'market_analyze' or 'market_full'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing market manipulation vs. genuine buying, with context window options for percentile rankings. It doesn't explicitly state when to use this tool versus alternatives like 'market_analyze' or 'market_full', nor does it provide exclusions or prerequisites. The guidance is contextual but lacks sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

market_snapshot: A
Read-only · Idempotent

What's the market doing right now? Price, funding rate, CVD, whale activity, and liquidation pressure in one call — 16 fields, no LLM overhead. Feed directly into your own models or decision logic. Orderflow coverage disclosed per token. REST equivalent: POST /data (0.20 USDC).

Args:
    token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
           DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE,
           AMP, ZEC)
Parameters (JSON Schema)
    token (required)

Output Schema

    result (required)
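Since market_snapshot covers a fixed token set while market_light covers any listed token, an agent can route between them by set membership. The set below is copied from the snapshot Args list; the routing helper itself is an illustrative sketch, not a documented API:

```python
# Supported set copied from the market_snapshot Args list; pick_tool is a
# hypothetical client-side router between the two sibling tools.
SNAPSHOT_TOKENS = {
    "BTC", "ETH", "SOL", "XRP", "ADA", "DOGE", "AVAX", "LINK", "BNB", "ATOM",
    "DOT", "ARB", "SUI", "OP", "LTC", "NEAR", "TRX", "BCH", "SHIB", "HBAR",
    "TON", "XLM", "UNI", "AAVE", "AMP", "ZEC",
}

def pick_tool(token: str) -> str:
    """Deep data for core tokens, broad-but-shallow data for everything else."""
    return "market_snapshot" if token.upper() in SNAPSHOT_TOKENS else "market_light"
```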
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it discloses cost ('0.20 USDC'), output format (16 fields), the 26 supported tokens, and that a REST equivalent exists. This enriches the agent's understanding of practical usage without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it starts with a purpose-driven question, lists key data points, specifies technical details (fields, tokens, REST equivalent, cost), and ends with clear parameter documentation. Every sentence adds value without redundancy, and it's appropriately front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating multiple market metrics), rich annotations, and the presence of an output schema, the description is complete. It covers purpose, usage, behavioral context (cost, format), and parameter details, leaving output specifics to the schema. No gaps remain for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, providing no details about the 'token' parameter. The description fully compensates by listing all 26 supported token symbols, clearly defining the parameter's semantics and valid values, which is essential for correct tool invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to provide a comprehensive market snapshot including price, funding rate, CVD, whale activity, and liquidation pressure. It specifies the scope (16 fields, 26 supported tokens) and distinguishes it from siblings by emphasizing 'no LLM overhead' and direct model feeding, unlike analysis-focused siblings like market_analyze.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for real-time market data aggregation without analysis overhead. It implies alternatives by mentioning 'no LLM overhead' (suggesting market_analyze might include that) and 'feed directly into your own models' (differentiating from tools that process data internally). However, it doesn't explicitly name when-not-to-use cases or list all sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

