Gate Info MCP
Server Details
Gate Info MCP for coin discovery, market snapshots, technical analysis, and on-chain data.

- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: gate/gate-mcp
- GitHub Stars: 27
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 29 of 29 tools scored. Lowest: 3.1/5.
Tools are generally well-differentiated by resource type and action, with clear cross-references in descriptions to guide selection. However, some overlap exists between marketdetail_get_kline and markettrend_get_kline, which could cause confusion about when to use each, and between various platform metrics tools that might have subtle distinctions.
Tool names follow a highly consistent pattern of category_verb_noun (e.g., info_coin_get_coin_info, info_macro_get_economic_calendar). This structured naming makes it easy to understand the tool's domain and purpose at a glance, with no deviations in style.
With 29 tools, the count is on the high side for a single server and potentially overwhelming for agents. The tools cover a broad crypto/DeFi domain, and splitting them into more focused sub-servers could improve usability; the scope feels heavy but not extreme.
The tool set provides comprehensive coverage across crypto information domains: coin data, market details, on-chain analysis, macroeconomics, and platform metrics. Each area includes CRUD-like operations (e.g., get, search, batch) with clear cross-references, ensuring agents can navigate workflows without dead ends.
Available Tools
29 tools

info_coin_get_coin_info (Destructive)
Single-target lookup: ticker, name, or contract→rows. Chain fixes address clashes. Filtered lists→search_coins. Boards→get_coin_rankings.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Max rows; default 3, max 20. | |
| chain | No | Chain hint for address disambiguation when query_type=address or auto-detected EVM address; e.g. eth, tron, bsc. Same normalization as search_coins. | |
| query | Yes | Search text: symbol, contract address, localized name, etc. | |
| scope | No | basic (default) \| detailed \| full; with_project/with_tokenomics align with full _source. | |
| fields | No | Field allowlist; empty = all. scope wins when both set. | |
| query_type | No | auto (default) \| address \| symbol \| name \| project \| gate_symbol \| source_id | |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | No | |
| count | Yes | |
| items | Yes | |
| query | Yes | |
| scope | No | |
| total | Yes | |
| query_type | Yes | |
| duration_ms | Yes |
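As a sketch, the defaults and limits documented in the parameter table above can be enforced locally before a call is sent. The helper below is hypothetical (the server does not ship such a function); the argument names, defaults (size 3, scope basic, query_type auto), and caps come from the table:

```python
# Hypothetical argument builder for info_coin_get_coin_info.
# Names and constraints are taken from the parameter table;
# the enforcement logic itself is an illustrative assumption.

VALID_SCOPES = {"basic", "detailed", "full"}
VALID_QUERY_TYPES = {"auto", "address", "symbol", "name",
                     "project", "gate_symbol", "source_id"}

def build_coin_info_args(query, query_type="auto", scope="basic",
                         size=3, chain=None, fields=None):
    """Validate arguments against the documented defaults and limits."""
    if not query:
        raise ValueError("query is required")
    if not 1 <= size <= 20:  # default 3, max 20 per the table
        raise ValueError("size must be between 1 and 20")
    if scope not in VALID_SCOPES:
        raise ValueError(f"scope must be one of {sorted(VALID_SCOPES)}")
    if query_type not in VALID_QUERY_TYPES:
        raise ValueError("unknown query_type")
    args = {"query": query, "query_type": query_type,
            "scope": scope, "size": size}
    if chain is not None:   # chain hint, e.g. "eth", "tron", "bsc"
        args["chain"] = chain
    if fields:              # field allowlist; scope wins when both set
        args["fields"] = fields
    return args
```

Failing fast on out-of-range values avoids a round trip to the server for calls that would be rejected anyway.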
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false, covering key behavioral traits. The description adds context about 'Chain fixes address clashes', which clarifies a specific behavior not in annotations. However, it doesn't elaborate on the destructive nature or open-world implications beyond what annotations state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with four short phrases, front-loading the main purpose. However, the use of symbols like '→' and fragmented phrasing ('Chain fixes address clashes') slightly reduces clarity, though it remains efficient with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, destructive hint) and the presence of an output schema, the description is reasonably complete. It covers purpose, differentiation from siblings, and a key behavioral note. With annotations and output schema handling safety and return values, the description focuses on contextual guidance effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description mentions 'ticker, name, or contract→rows' and 'Chain fixes address clashes', which loosely relate to 'query' and 'chain' parameters but don't add significant semantic value beyond the schema. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'single-target lookup' for coins using ticker, name, or contract address, distinguishing it from filtered lists (search_coins) and boards (get_coin_rankings). However, it doesn't explicitly mention the verb 'retrieve' or 'fetch', and the phrase '→rows' is somewhat cryptic, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Filtered lists→search_coins. Boards→get_coin_rankings.' It also implies usage for single-target lookups rather than batch operations, making it clear when to choose this tool over siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_coin_get_coin_rankings (Destructive)
Boards: popular, gainers/losers, twitter_hot, airdrop, new_listing. time_range only for gainers/losers. Single match→get_coin_info. Filtered list→search_coins.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Rows; default 20, max 400. | |
| time_range | No | Only for top_gainers\|top_losers: 1h\|24h (default)\|7d. Omit for other ranking_type or validation fails. | |
| listing_from | No | new_listing only: Unix seconds from; 0 = omit. | |
| ranking_type | Yes | popular\|top_gainers\|top_losers\|twitter_hot\|airdrop\|new_listing | |
| listing_query | No | new_listing only: exchange / keyword for upstream query. | |
| listing_tickers | No | new_listing only: comma tickers e.g. BTC,ETH. |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| total | Yes | |
| time_range | Yes | |
| duration_ms | Yes | |
| ranking_type | Yes |
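The "time_range only for gainers/losers" rule above is easy to trip over. A hypothetical pre-flight check (the helper is illustrative; ranking types, limits, and the 24h default come from the tables above) can encode it:

```python
# Hypothetical pre-flight check for info_coin_get_coin_rankings,
# encoding the documented rule: time_range is valid only for
# top_gainers / top_losers and must be omitted otherwise.

RANKING_TYPES = {"popular", "top_gainers", "top_losers",
                 "twitter_hot", "airdrop", "new_listing"}

def build_rankings_args(ranking_type, limit=20, time_range=None):
    if ranking_type not in RANKING_TYPES:
        raise ValueError("unknown ranking_type")
    if not 1 <= limit <= 400:  # default 20, max 400
        raise ValueError("limit must be between 1 and 400")
    args = {"ranking_type": ranking_type, "limit": limit}
    if ranking_type in ("top_gainers", "top_losers"):
        args["time_range"] = time_range or "24h"  # 1h|24h|7d, default 24h
    elif time_range is not None:
        # The docs say server-side validation fails; fail fast locally.
        raise ValueError("time_range only applies to top_gainers/top_losers")
    return args
```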
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide significant behavioral information (readOnlyHint=false, destructiveHint=true, etc.), so the bar is lower. The description adds useful context about parameter constraints (time_range only for specific ranking types) and clarifies the relationship with sibling tools. However, it doesn't explain what 'destructiveHint: true' means in this context or provide additional behavioral details like rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured with zero wasted words. The first sentence lists all ranking_type options, the second specifies parameter constraints, and the third provides clear sibling tool guidance. Every sentence earns its place and the information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (6 parameters, 1 required), rich schema coverage (100%), and the presence of an output schema, the description is quite complete. It covers the core purpose, parameter constraints, and sibling tool relationships. The only minor gap is not explaining what the destructiveHint annotation means in this ranking context, but the output schema reduces the need to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds meaningful context by clarifying that 'time_range only for gainers/losers' and providing guidance on when to omit parameters. It also explains the relationship between ranking_type values and parameter applicability, which goes beyond what the schema descriptions provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides different ranking boards (popular, gainers/losers, etc.) for coins, which is a specific verb+resource combination. However, it doesn't explicitly differentiate from its closest sibling 'info_coin_search_coins' beyond mentioning 'Filtered list→search_coins'; it could more clearly explain that this tool provides pre-defined rankings while search_coins allows custom filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with explicit alternatives: 'Single match→get_coin_info' and 'Filtered list→search_coins'. It also specifies when to use the time_range parameter ('only for gainers/losers') and when to omit it ('Omit for other ranking_type'). This gives clear context for when to choose this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_coin_search_coins (Destructive)
Multi-row asset list by category, chain, cap, type. One clear match→get_coin_info. Boards→get_coin_rankings.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter: canonical name or alias (ETH→Ethereum, BSC→BNB Chain); contains match on normalized chain list. | |
| limit | No | Page size; default 20, max 400. | |
| offset | No | Offset; default 0. | |
| sort_by | No | market_cap (default) \| fdv \| circulating_supply; all desc | |
| category | No | Controlled sector tag or group (e.g. Layer 1, DEX); case-insensitive; matches category_type_flat then category_type; unknown values return empty list. | |
| asset_type | No | crypto (default) \| tradefi \| all | |
| market_cap_max | No | Max market cap USD. | |
| market_cap_min | No | Min market cap USD; range on market_value / market_cap. |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | No | |
| count | Yes | |
| items | Yes | |
| limit | No | |
| total | Yes | |
| offset | No | |
| sort_by | No | |
| category | No | |
| asset_type | No | |
| duration_ms | Yes | |
| total_count | Yes | |
| market_cap_max | No | |
| market_cap_min | No |
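The search parameters combine freely, so a local filter builder can catch inconsistent combinations before the call. This sketch is hypothetical; the parameter names, defaults, and caps come from the table above, and the min/max ordering check is an illustrative assumption:

```python
# Hypothetical filter builder for info_coin_search_coins.

def build_search_args(category=None, chain=None, limit=20, offset=0,
                      sort_by="market_cap", asset_type="crypto",
                      market_cap_min=None, market_cap_max=None):
    if not 1 <= limit <= 400:  # page size; default 20, max 400
        raise ValueError("limit must be between 1 and 400")
    if sort_by not in ("market_cap", "fdv", "circulating_supply"):
        raise ValueError("invalid sort_by")
    if asset_type not in ("crypto", "tradefi", "all"):
        raise ValueError("invalid asset_type")
    if (market_cap_min is not None and market_cap_max is not None
            and market_cap_min > market_cap_max):
        raise ValueError("market_cap_min exceeds market_cap_max")
    args = {"limit": limit, "offset": offset,
            "sort_by": sort_by, "asset_type": asset_type}
    # Only include optional filters that were actually supplied.
    for key, val in [("category", category), ("chain", chain),
                     ("market_cap_min", market_cap_min),
                     ("market_cap_max", market_cap_max)]:
        if val is not None:
            args[key] = val
    return args
```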
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive, non-idempotent, open-world operation, but the description adds valuable context by specifying it returns a 'multi-row asset list' and clarifies the relationship with sibling tools. It doesn't contradict annotations and provides additional behavioral insight about the tool's output format and use case context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two short sentences that each serve a clear purpose: the first defines the tool's core function, the second provides sibling tool guidance. There is zero wasted verbiage and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive schema coverage (100%), annotations covering key behavioral aspects, and an output schema exists, the description provides adequate context. It covers the core purpose and sibling relationships, though it could benefit from more explicit differentiation between this tool and 'get_coin_rankings' to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already comprehensively documents all 8 parameters with their types, constraints, and behaviors. The description mentions filtering by 'category, chain, cap, type' which aligns with parameters but adds no additional semantic value beyond what's already in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'multi-row asset list' filtered by specific criteria (category, chain, cap, type), which is a specific verb+resource combination. It distinguishes from sibling 'get_coin_info' by indicating that tool is for single matches, but doesn't explicitly differentiate from 'get_coin_rankings' beyond mentioning 'Boards→get_coin_rankings' without clarifying the distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the alternative 'get_coin_info' ('One clear match→get_coin_info') and mentions 'get_coin_rankings' for 'Boards', though it doesn't fully explain what 'Boards' means or when to choose between these tools. No explicit when-not-to-use guidance is provided for other potential alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_compliance_check_token_security (Destructive)
Honeypot, tax, and holder risk screen for a token. On-chain activity stats→get_token_onchain. Token xor address with chain; scope in schema.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Locale: en\|cn\|tw\|ja\|kr; default en. | |
| chain | Yes | Chain id e.g. eth, bsc, solana, base, arb. | |
| scope | No | basic (default)\|full. | |
| token | No | Token symbol e.g. PEPE; xor with address. | |
| address | No | Contract address; xor with token. |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| scope | No | |
| token | No | |
| total | Yes | |
| address | Yes | |
| buy_tax | No | |
| holders | No | |
| sell_tax | No | |
| name_risk | No | |
| risk_facts | No | |
| duration_ms | Yes | |
| is_honeypot | Yes | |
| holder_count | No | |
| risk_summary | Yes | |
| tax_analysis | No | |
| data_analysis | No | |
| low_risk_list | No | |
| top10_percent | No | |
| high_risk_list | No | |
| is_open_source | Yes | |
| middle_risk_list | No | |
| dev_holding_percent | No |
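The description's "Token xor address with chain" constraint means exactly one of token or address must be supplied alongside a required chain. A hypothetical local check (helper name and logic are illustrative; the enums come from the parameter table):

```python
# Sketch of the token-xor-address constraint for
# info_compliance_check_token_security.

def build_security_args(chain, token=None, address=None,
                        scope="basic", lang="en"):
    if not chain:
        raise ValueError("chain is required, e.g. eth, bsc, solana")
    if (token is None) == (address is None):
        # exactly one of token / address must be set (xor)
        raise ValueError("provide exactly one of token or address")
    if scope not in ("basic", "full"):
        raise ValueError("scope must be basic or full")
    if lang not in ("en", "cn", "tw", "ja", "kr"):
        raise ValueError("unsupported lang")
    args = {"chain": chain, "scope": scope, "lang": lang}
    if token is not None:
        args["token"] = token
    else:
        args["address"] = address
    return args
```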
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint: true, readOnlyHint: false, etc., covering key behavioral traits. The description adds context by specifying it's a 'risk screen,' which aligns with destructive implications (e.g., potential data modification or alerts). No contradiction with annotations, but it doesn't elaborate on rate limits, auth needs, or specific destructive actions beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose. It uses two sentences efficiently, with no wasted words. However, the phrasing 'Token xor address with chain; scope in schema' is somewhat cryptic, which reduces clarity slightly without impacting conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (risk screening with destructive hints) and the presence of annotations and an output schema, the description is reasonably complete. It covers purpose, distinguishes from a sibling, and hints at parameter usage. With output schema handling return values, the description doesn't need to explain outputs, making it adequate for context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal value by hinting at 'xor' relationships between token and address and referencing 'scope in schema,' but doesn't provide new semantic details beyond the schema. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'honeypot, tax, and holder risk screen for a token,' specifying the type of analysis. It distinguishes from sibling 'get_token_onchain' by noting that tool is for 'on-chain activity stats.' However, it doesn't fully differentiate from other compliance or security tools in the sibling list, which is why it's not a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance by mentioning 'Token xor address with chain; scope in schema,' implying parameters are exclusive and referencing the schema for details. It also distinguishes from 'get_token_onchain' for activity stats. However, it lacks explicit when-not-to-use scenarios or clear alternatives among siblings, keeping it at an implied level.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_macro_get_economic_calendar (Destructive)
Filtered release calendar in a date window. Zero-arg dashboard→get_macro_summary. Indicator snapshot or series→get_macro_indicator.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Rows; default 20, max 400. | |
| end_date | No | End YYYY-MM-DD inclusive; default start+30d; sorted by event_date desc. | |
| event_type | No | FOMC\|NFP\|CPI\|PPI\|GDP\|PCE\|...; empty or all = no filter. | |
| importance | No | Importance filter; ignored if index lacks field. | |
| start_date | No | Start YYYY-MM-DD for event_date; default today UTC; if only end_date set, start defaults today. |
Output Schema
| Name | Required | Description |
|---|---|---|
| size | No | |
| count | Yes | |
| items | Yes | |
| total | Yes | |
| end_date | No | |
| event_type | No | |
| importance | No | |
| start_date | No | |
| duration_ms | Yes |
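The date-window defaulting described above (start defaults to today UTC, end defaults to start + 30 days) can be reproduced client-side to predict what window a call will actually cover. The helper is hypothetical:

```python
# Sketch of the economic-calendar date-window defaults.

from datetime import date, datetime, timedelta, timezone

def resolve_window(start_date=None, end_date=None):
    """Resolve (start, end) as YYYY-MM-DD, applying the documented defaults."""
    if start_date is None:
        start = datetime.now(timezone.utc).date()  # default: today UTC
    else:
        start = date.fromisoformat(start_date)
    if end_date is None:
        end = start + timedelta(days=30)  # default: start + 30d, inclusive
    else:
        end = date.fromisoformat(end_date)
    if end < start:
        raise ValueError("end_date precedes start_date")
    return start.isoformat(), end.isoformat()
```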
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (e.g., readOnlyHint: false, destructiveHint: true), but the description adds context by specifying the tool's scope ('filtered release calendar in a date window') and alternative tools. It doesn't disclose additional behavioral traits like rate limits or auth needs beyond what annotations cover, but it doesn't contradict them either, so it adds some value without being comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, consisting of two sentences that efficiently convey the tool's purpose and usage guidelines. Every sentence earns its place by providing essential information without redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, annotations, and an output schema), the description is fairly complete. It covers purpose and usage guidelines, and with annotations and output schema handling behavioral and return aspects, it doesn't need to explain everything. However, it could be slightly more detailed on scope or limitations to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so parameters are well-documented in the schema. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining 'event_type' values or 'importance' usage. With high schema coverage, the baseline is 3, as the description doesn't compensate but doesn't need to heavily.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as providing a 'filtered release calendar in a date window,' which specifies the verb ('filtered release calendar') and resource ('economic calendar'). It distinguishes from some siblings like 'get_macro_summary' and 'get_macro_indicator' by mentioning them as alternatives, though it doesn't explicitly differentiate from all other macro tools in the sibling list. This makes it clear but not fully specific to all siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines by stating when to use this tool versus alternatives: 'Zero-arg dashboard→get_macro_summary. Indicator snapshot or series→get_macro_indicator.' This gives clear context for when to choose this tool over specific siblings, though it doesn't cover all possible exclusions or other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_macro_get_macro_indicator (Destructive)
Official macro stats: latest or series by country/indicator (CPI, rates, jobs). No event feed. Calendar→get_economic_calendar. Dashboard→get_macro_summary.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | latest (default, snapshot) or timeseries. | |
| size | No | Rows; default 20, max 400. | |
| country | No | Country display name e.g. United States; wins over country_code if both set. | |
| end_date | No | Same as end_time. | |
| end_time | No | Range end; alias end_date. | |
| indicator | Yes | Exact keyword match on indicator.keyword OR indicator_id.keyword OR slugname.keyword; ASCII case-insensitive via exact variants (original, lower, upper). | |
| start_date | No | Same as start_time. | |
| start_time | No | Range start; alias start_date; filters date/part_date. | |
| country_code | No | ISO code e.g. US when index lacks display name. |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | |
| note | No | Optional empty-result or fallback hint. |
| size | No | |
| count | Yes | |
| total | Yes | |
| latest | No | |
| country | No | |
| end_date | No | |
| end_time | No | |
| indicator | Yes | |
| start_date | No | |
| start_time | No | |
| timeseries | No | |
| duration_ms | Yes |
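The parameter table above declares alias pairs (start_date is the same as start_time, end_date as end_time) and a precedence rule (country wins over country_code when both are set). A hypothetical normalizer that collapses these before sending the call:

```python
# Sketch of alias and precedence handling for
# info_macro_get_macro_indicator; the helper itself is illustrative.

def normalize_indicator_args(indicator, mode="latest", **kw):
    if not indicator:
        raise ValueError("indicator is required")
    if mode not in ("latest", "timeseries"):
        raise ValueError("mode must be latest or timeseries")
    args = {"indicator": indicator, "mode": mode}
    # Collapse alias pairs; the *_time spelling is kept as canonical here.
    for canon, alias in (("start_time", "start_date"),
                         ("end_time", "end_date")):
        val = kw.get(canon, kw.get(alias))
        if val is not None:
            args[canon] = val
    # Country display name wins over ISO country_code if both are set.
    if kw.get("country"):
        args["country"] = kw["country"]
    elif kw.get("country_code"):
        args["country_code"] = kw["country_code"]
    return args
```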
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description adds no behavioral context beyond what annotations already declare (e.g., no rate limits, auth needs, or what 'destructive' entails). However, it doesn't contradict annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded: the first sentence states the core purpose, and subsequent sentences efficiently clarify exclusions and alternatives. Every sentence earns its place with zero waste, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, destructive hint) and rich schema/annotations/output schema, the description is mostly complete. It clarifies purpose and sibling differentiation but lacks behavioral details like what 'destructive' means or output specifics. However, with output schema present, explaining return values isn't needed, keeping it adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed parameter documentation. The description adds no parameter semantics beyond what's in the schema (e.g., no clarification on indicator matching or date formats). With high schema coverage, the baseline is 3, as the description doesn't compensate but doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get') and resources ('macro stats: latest or series by country/indicator'), listing examples like CPI, rates, jobs. It explicitly distinguishes from siblings by stating what it doesn't do ('No event feed') and naming alternatives ('Calendar→get_economic_calendar. Dashboard→get_macro_summary').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs alternatives: it states 'No event feed' and directs users to 'Calendar→get_economic_calendar' for calendar data and 'Dashboard→get_macro_summary' for dashboard summaries. This clearly defines the tool's scope relative to its siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_macro_get_macro_summary (Destructive)
Zero-arg macro dashboard: key snapshots plus upcoming releases. Filtered calendar list→get_economic_calendar. Indicator snapshot or series→get_macro_indicator.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| total | Yes | |
| indicators | Yes | |
| duration_ms | Yes | |
| next_events | Yes | |
| snapshot_time | Yes | |
| indicators_total | Yes |
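Since the tool takes no input, the only client-side contract is the output schema above. A hypothetical post-call check that a response carries every field marked required:

```python
# Required output fields of get_macro_summary, per the schema above.
REQUIRED = {"count", "total", "indicators", "duration_ms",
            "next_events", "snapshot_time", "indicators_total"}

def missing_required(response):
    """Return the required output fields absent from a response dict."""
    return sorted(REQUIRED - response.keys())
```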
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond annotations by specifying this is a 'dashboard' that provides 'key snapshots plus upcoming releases.' However, it doesn't address the destructiveHint=true annotation (which suggests potential data modification) or explain what 'destructive' means in this context. The annotations already provide readOnlyHint=false, openWorldHint=true, idempotentHint=false, and destructiveHint=true, so the description adds limited value beyond these structured signals.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured. The first sentence establishes the core purpose, and the subsequent sentences provide clear differentiation from sibling tools. Every sentence earns its place with no wasted words, making it easy for an AI agent to parse and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, rich annotations, and an output schema exists, the description provides adequate context. It clearly explains what the tool does and when to use it versus alternatives. The main gap is that it doesn't address the destructiveHint=true annotation, which could be important context for an AI agent deciding whether to invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description reinforces this by explicitly calling it a 'Zero-arg macro dashboard,' confirming no parameters are needed. This adds clarity beyond what the empty input schema alone communicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Zero-arg macro dashboard: key snapshots plus upcoming releases.' It distinguishes from siblings by explicitly naming alternatives: 'Filtered calendar list→get_economic_calendar. Indicator snapshot or series→get_macro_indicator.' This provides precise differentiation from other tools on the server.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states this is for 'key snapshots plus upcoming releases' while directing users to 'get_economic_calendar' for filtered calendar lists and 'get_macro_indicator' for indicator snapshots or series. This creates clear boundaries between sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketdetail_get_kline (Grade A, Destructive)
Exchange pair or contract candles, including fine intervals such as 1m or 5s; needs Gate base URL. Trend-index OHLCV or indicator bundle→markettrend_get_kline.
| Name | Required | Description | Default |
|---|---|---|---|
| extra | No | Passthrough e.g. interval overrides. | |
| limit | No | Max candles; default 100, max 400; without range = latest N desc. | |
| settle | No | futures/delivery settlement; default usdt. | |
| symbol | Yes | Pair or contract. | |
| end_time | No | End Unix seconds. | |
| timeframe | Yes | Candle interval e.g. 1s, 1m, 5m. | |
| start_time | No | Start Unix seconds; with end_time for range; omit = last limit candles desc. | |
| market_type | No | spot (default) \| futures \| delivery \| options | |
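The defaults and bounds above (limit defaults to 100 and caps at 400; market_type defaults to spot) can be encoded in a small request builder. A hedged sketch — the argument names and limits come from the table, the JSON-RPC envelope is the standard MCP `tools/call` shape, and the helper itself is illustrative:

```python
def kline_request(symbol, timeframe, limit=100, market_type="spot"):
    """Build a tools/call payload for info_marketdetail_get_kline."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "info_marketdetail_get_kline",
            "arguments": {
                "symbol": symbol,            # pair or contract
                "timeframe": timeframe,      # e.g. "1s", "1m", "5m"
                "limit": min(limit, 400),    # default 100, max 400
                "market_type": market_type,  # spot (default)
            },
        },
    }

req = kline_request("BTC_USDT", "1m", limit=1000)  # over-limit gets clamped
```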
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| total | Yes | |
| settle | No | |
| symbol | Yes | Echo input. Candle times are UTC (+00:00); convert for local display. |
| cex_tool | Yes | |
| timeframe | Yes | |
| duration_ms | Yes | |
| market_type | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond annotations by mentioning 'needs Gate base URL' (implying external dependency) and distinguishing from the trend-focused sibling. However, annotations already provide critical information (destructiveHint: true, readOnlyHint: false, etc.), so the description doesn't need to fully cover behavioral traits. It doesn't contradict annotations, but could add more about what 'destructive' means in this context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact (two sentences) and front-loaded with the core purpose. The second sentence efficiently covers both the dependency requirement and sibling distinction. While slightly technical in phrasing, every element serves a purpose with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, destructive operation), the description provides adequate context when combined with the rich schema (100% coverage) and annotations. The presence of an output schema means the description doesn't need to explain return values. It covers purpose, dependency, and sibling differentiation sufficiently for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 8 parameters thoroughly. The description mentions 'fine intervals such as 1m or 5s' which aligns with the timeframe parameter, but doesn't add significant semantic value beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Exchange pair or contract candles' with 'fine intervals', which is a specific verb+resource combination. It distinguishes from sibling 'info_markettrend_get_kline' by mentioning that tool is for 'Trend-index OHLCV or indicator bundle', though the distinction could be more explicit about what makes them different.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (for candles with fine intervals) and explicitly names an alternative ('markettrend_get_kline'). However, it doesn't specify when NOT to use this tool or mention other potential alternatives among the many sibling tools, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketdetail_get_orderbook (Grade A, Destructive)
Order book depth: bids/asks ladder from Gate; needs base URL. Trade tape→get_recent_trades. Trend-index bars→markettrend_get_kline.
| Name | Required | Description | Default |
|---|---|---|---|
| depth | No | Order book depth; default 20, max 100. | |
| extra | No | Extra Gate API params merged into request. | |
| settle | No | Settlement for futures/delivery; default usdt; omit for options. | |
| symbol | Yes | Pair e.g. BTC_USDT. | |
| market_type | No | spot (default) \| futures \| delivery \| options. | |
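The table encodes two constraints worth handling client-side: depth defaults to 20 and caps at 100, and settle applies only to futures/delivery (and is omitted for options). A minimal sketch of assembling compliant arguments — the helper name is illustrative:

```python
def orderbook_args(symbol, depth=20, market_type="spot", settle="usdt"):
    # depth: default 20, max 100 per the table; settle (default usdt) is only
    # meaningful for futures/delivery and is omitted otherwise.
    args = {"symbol": symbol, "depth": min(depth, 100), "market_type": market_type}
    if market_type in ("futures", "delivery"):
        args["settle"] = settle
    return args

spot = orderbook_args("BTC_USDT", depth=250)            # depth clamped to 100
futs = orderbook_args("BTC_USDT", market_type="futures")
```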
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| depth | Yes | |
| items | Yes | |
| total | Yes | |
| settle | No | |
| symbol | Yes | Echo input. Item timestamps are UTC (+00:00); convert for local display. |
| cex_tool | Yes | |
| duration_ms | Yes | |
| market_type | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the data source ('from Gate') and mentions a prerequisite ('needs base URL'). Annotations already indicate destructiveHint=true and readOnlyHint=false, which the description doesn't contradict. However, it doesn't elaborate on rate limits, authentication needs, or what 'destructive' entails in this context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: the first phrase states the core purpose, followed by prerequisite and sibling differentiation. Every sentence earns its place with zero wasted words, making it highly efficient for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (including destructiveHint), 100% schema coverage, and existence of an output schema, the description is complete enough. It covers purpose, prerequisites, and sibling differentiation while letting structured fields handle parameters, behavior, and return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline of 3 since the schema carries the full parameter documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Order book depth: bids/asks ladder from Gate.' It distinguishes from siblings by explicitly naming alternatives: 'Trade tape→get_recent_trades. Trend-index bars→markettrend_get_kline.' This provides precise differentiation from other market detail tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Trade tape→get_recent_trades. Trend-index bars→markettrend_get_kline.' It also mentions a prerequisite: 'needs base URL.' This gives clear context for selection among sibling tools with specific exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketdetail_get_recent_trades (Grade B, Destructive)
Recent trade prints tape from Gate; needs base URL. Depth ladder→get_orderbook. Trend-index bars→markettrend_get_kline.
| Name | Required | Description | Default |
|---|---|---|---|
| extra | No | Optional Gate API passthrough. | |
| limit | No | Recent trades count; default 50, max 400. | |
| settle | No | futures/delivery settlement; default usdt. | |
| symbol | Yes | Pair or contract id. | |
| market_type | No | spot (default) \| futures \| delivery \| options | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| settle | No | |
| symbol | Yes | Echo input. Trade times are UTC (+00:00); convert for local display. |
| cex_tool | Yes | |
| duration_ms | Yes | |
| market_type | Yes |
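The output schema notes that trade times come back as UTC (+00:00) and should be converted for local display. A minimal sketch, assuming an epoch-seconds timestamp field (the field name is not specified in the schema above):

```python
from datetime import datetime, timezone

def utc_label(epoch_s):
    # Render an epoch-seconds trade time as an explicit UTC string; swap
    # timezone.utc for a local zone when displaying to users.
    return datetime.fromtimestamp(epoch_s, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S UTC"
    )

label = utc_label(1700000000)  # → "2023-11-14 22:13:20 UTC"
```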
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a potentially unsafe operation, but the description doesn't explain what gets destroyed or any behavioral traits like rate limits or authentication needs. It adds minimal context beyond annotations, such as the need for a base URL, but lacks details on safety or side effects. No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core purpose and alternatives in two sentences. It avoids unnecessary details, but the phrase 'prints tape' is jargon that may reduce clarity. Overall, it's efficient but could be slightly more accessible.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, annotations, and full schema coverage, the description provides adequate context by stating the purpose and alternatives. It covers basic usage but lacks details on behavioral aspects like destruction implications or error handling, which are partially mitigated by structured data. For a data retrieval tool with good annotations, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds no additional meaning about parameters beyond implying 'needs base URL', which isn't reflected in the schema. With high schema coverage, the baseline is 3, as the description doesn't compensate but doesn't detract either.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'recent trade prints tape from Gate', which indicates it fetches trade data. However, it's vague about what 'prints tape' means and doesn't clearly differentiate from siblings like 'info_marketdetail_get_orderbook' or 'info_markettrend_get_kline' beyond mentioning them as alternatives. The purpose is understandable but lacks specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by explicitly naming alternatives: 'Depth ladder→get_orderbook' and 'Trend-index bars→markettrend_get_kline'. This helps distinguish when to use this tool vs. others for different data types. However, it doesn't specify when-not-to-use scenarios or prerequisites beyond 'needs base URL', which is implied but not detailed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketsnapshot_batch_market_snapshot (Grade A, Destructive)
Batch quotes up to 20 symbols; partial misses OK. One symbol→get_market_snapshot. Deep trend→markettrend_get_kline. Breadth→get_market_overview.
| Name | Required | Description | Default |
|---|---|---|---|
| quote | No | Quote currency; default USDT; maps to OpenSearch market. | |
| scope | No | Response scope. | |
| source | No | spot (default) \| futures (future) \| alpha \| fx. | |
| symbols | Yes | Symbol list, max 20. Missing symbols yield empty rows; the batch still succeeds. | |
| timeframe | No | Kline: 15m \| 1h \| 4h \| 1d; default 1h. | |
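Since symbols caps at 20 per call (and missing symbols return empty rows rather than failing the batch), longer watchlists can simply be split client-side. A sketch:

```python
def chunk_symbols(symbols, size=20):
    # batch_market_snapshot accepts at most 20 symbols per request; split
    # longer lists into compliant batches and call once per batch.
    return [symbols[i:i + size] for i in range(0, len(symbols), size)]

batches = chunk_symbols([f"SYM{i}" for i in range(45)])  # 45 → 20 + 20 + 5
```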
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| quote | No | |
| scope | No | |
| total | Yes | |
| source | No | |
| symbols | No | |
| not_found | Yes | |
| timeframe | No | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: 'partial misses OK' and 'Missing symbols yield empty rows; the batch still succeeds' explain error tolerance. Annotations provide safety profile (destructiveHint: true, etc.), but the description enhances understanding of batch behavior. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured: three short sentences that each earn their place. The first sentence states the core functionality, the second addresses error handling, and the third provides sibling differentiation. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (readOnlyHint, destructiveHint, etc.), 100% schema description coverage, and an output schema exists, the description provides complete contextual understanding. It covers purpose, usage guidelines, behavioral nuances, and sibling differentiation without needing to explain parameters or return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description mentions 'up to 20 symbols' which aligns with the schema's max constraint but doesn't add significant semantic value beyond what's in the parameter descriptions. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Batch quotes up to 20 symbols' with the verb 'quotes' and resource 'symbols'. It distinguishes from siblings by explicitly naming alternatives: 'One symbol→get_market_snapshot. Deep trend→markettrend_get_kline. Breadth→get_market_overview.' This provides specific differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Batch quotes up to 20 symbols' indicates when to use this tool, while 'One symbol→get_market_snapshot. Deep trend→markettrend_get_kline. Breadth→get_market_overview.' clearly states alternatives for different scenarios. This gives clear when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketsnapshot_get_market_overview (Grade A, Destructive)
Broad market dashboard: cap, volume, dominance, and sentiment; no stablecoin ranking. Stablecoin-only ranking or chain breakdown→get_stablecoin_info. One asset→get_market_snapshot. Many symbols→batch_market_snapshot.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ahr999 | Yes | |
| btc_price | Yes | |
| updated_at | Yes | |
| duration_ms | Yes | |
| btc_dominance | Yes | |
| eth_dominance | Yes | |
| fear_greed_index | Yes | |
| fear_greed_label | Yes | |
| total_market_cap | Yes | USD total market cap; null when that upstream slice fails. Each field may be null independently. |
| total_volume_24h | Yes | |
| active_coins_count | Yes | |
| altcoin_season_index | Yes | |
| market_cap_change_24h | Yes |
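Because each output field may be null independently (e.g. total_market_cap when its upstream slice fails), consumers should guard per field rather than per response. A minimal sketch:

```python
def format_market_cap(overview):
    # total_market_cap is USD and may be null when that upstream slice fails;
    # every field can be null independently, so check each one before use.
    cap = overview.get("total_market_cap")
    return "n/a" if cap is None else f"${cap / 1e12:.2f}T"

ok = format_market_cap({"total_market_cap": 2.5e12})   # "$2.50T"
missing = format_market_cap({"total_market_cap": None})  # "n/a"
```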
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context about what data is included/excluded (no stablecoin ranking) that goes beyond the annotations. While annotations provide safety information (destructiveHint: true, readOnlyHint: false), the description clarifies the tool's scope and limitations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: first sentence states purpose and scope, subsequent sentences provide clear usage guidelines and alternatives. Every sentence earns its place with zero wasted words, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, rich annotations, an output schema, and the description provides excellent purpose clarity, usage guidelines, and behavioral context, this description is complete. The agent has everything needed to understand when and how to use this tool versus alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description adds value by implicitly confirming there are no required parameters for this dashboard view and providing context about what data is included, which helps the agent understand this is a broad overview tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Broad market dashboard' that provides 'cap, volume, dominance, and sentiment'. It explicitly distinguishes from siblings by stating what it doesn't include ('no stablecoin ranking') and naming alternatives for specific use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with explicit when-not-to-use statements ('no stablecoin ranking') and clear alternatives for three specific scenarios: 'Stablecoin-only ranking or chain breakdown→get_stablecoin_info', 'One asset→get_market_snapshot', and 'Many symbols→batch_market_snapshot'. This gives the agent perfect context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_marketsnapshot_get_market_snapshot (Grade B, Destructive)
One-symbol: price, kline clip, project. Trend→markettrend_get_kline. Gate→marketdetail_get_kline. Many→batch_market_snapshot. Breadth→get_market_overview.
| Name | Required | Description | Default |
|---|---|---|---|
| quote | No | Quote currency mapped to OpenSearch market; default USDT. | |
| scope | No | basic (default, trimmed) \| detailed \| full. | |
| source | No | alpha \| spot \| futures (future) \| fx; default spot. | |
| symbol | Yes | Symbol e.g. BTC, ETH, USDC. | |
| timeframe | No | Kline: 15m \| 1h \| 4h \| 1d; wins over indicator_timeframe when both set; default 1h if both empty. | |
| indicator_timeframe | No | Alias of timeframe when timeframe omitted. |
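The table encodes a precedence rule: timeframe wins over indicator_timeframe when both are set, the alias applies when timeframe is omitted, and 1h is the fallback when both are empty. That resolution order, as a sketch (the helper is illustrative, not part of the server):

```python
def resolve_timeframe(timeframe=None, indicator_timeframe=None):
    # timeframe wins when both are set; indicator_timeframe is only an alias
    # used when timeframe is omitted; default is 1h when both are empty.
    return timeframe or indicator_timeframe or "1h"
```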
Output Schema
| Name | Required | Description |
|---|---|---|
| kline | Yes | |
| quote | No | |
| scope | No | |
| source | Yes | |
| symbol | Yes | |
| realtime | Yes | |
| timeframe | Yes | |
| duration_ms | Yes | |
| project_info | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting this is a write operation, but the description doesn't explain what gets destroyed or modified. The description adds context about data scope (single symbol) and references to other tools, but doesn't disclose behavioral traits like rate limits, authentication needs, or what 'destructive' means in this context. With annotations covering basic safety hints, the description adds some value but lacks critical behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely terse and uses cryptic notation (arrows, abbreviations) that requires interpretation. While brief, it's not well-structured or clear; it reads like shorthand notes rather than a proper description. The first part 'One-symbol: price, kline clip, project' is particularly opaque.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations, 100% schema coverage, and an output schema, the description provides adequate context about when to use this versus alternatives. However, for a tool with destructiveHint=true, the description should explain what destruction occurs. The cryptic notation reduces completeness despite the presence of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'kline clip' which relates to the timeframe parameter, but doesn't provide additional semantics. Baseline 3 is appropriate when schema does all the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses cryptic shorthand ('One-symbol: price, kline clip, project') rather than a clear verb+resource statement. It mentions some data elements but doesn't explicitly state what the tool does (e.g., 'Retrieves market snapshot data for a single symbol'). The purpose is implied but not clearly articulated, falling into vague territory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit alternatives for different use cases: 'Trend→markettrend_get_kline' for trend analysis, 'Gate→marketdetail_get_kline' for detailed kline data, 'Many→batch_market_snapshot' for multiple symbols, and 'Breadth→get_market_overview' for broader market data. This clearly indicates when to use this tool versus siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_markettrend_get_indicator_history (Grade A, Destructive)
Historical indicator columns (RSI, MACD, MAs). OHLCV→get_kline. Synthesis→get_technical_analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Points; default 50, max 400. | |
| symbol | Yes | Symbol. | |
| end_time | No | Range end. | |
| timeframe | Yes | 15m \| 1h \| 4h \| 1d. | |
| indicators | Yes | ES field names e.g. rsi, macd, macd_dea, ma7, ema7, ma30, boll_upper_band, signal_value_k, profit_rate_stddev_7d. | |
| start_time | No | Range start: ms or ISO8601. |
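The constraints above (a fixed set of timeframes; ES field names for indicators; limit defaulting to 50 with a 400 cap) can be validated before calling. A hedged sketch — the helper name is illustrative:

```python
def indicator_history_args(symbol, timeframe, indicators, limit=50):
    # indicators take ES field names (rsi, macd, ma7, ...); timeframe is one
    # of four fixed values; limit defaults to 50 and caps at 400.
    if timeframe not in {"15m", "1h", "4h", "1d"}:
        raise ValueError("timeframe must be 15m, 1h, 4h, or 1d")
    return {
        "symbol": symbol,
        "timeframe": timeframe,
        "indicators": indicators,
        "limit": min(limit, 400),
    }

args = indicator_history_args("BTC", "1h", ["rsi", "macd"], limit=999)
```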
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| total | Yes | |
| symbol | Yes | |
| timeframe | Yes | |
| indicators | Yes | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide significant behavioral information (destructiveHint: true, readOnlyHint: false, etc.), so the bar is lower. The description adds context about what type of data is returned (historical indicator columns) and references sibling tools, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or what 'destructive' means in this context. It doesn't contradict annotations, but adds minimal value beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that are front-loaded with the core purpose. Every word earns its place: the first sentence defines the tool's function, and the second provides crucial usage guidance. There's zero wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (6 parameters, destructive operation), the description is reasonably complete. It clarifies the tool's purpose, distinguishes it from siblings, and the existence of an output schema means return values don't need explanation. However, it could better address the destructive nature hinted in annotations or provide more context about data formats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing good documentation for all parameters. The description adds some semantic context by listing example indicator names (e.g., rsi, macd) in the 'indicators' array, which helps clarify what values are expected beyond the schema's generic description. However, it doesn't explain parameter interactions or provide additional meaning for other parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Historical indicator columns' and lists specific examples (RSI, MACD, MAs), which provides a specific verb+resource combination. It distinguishes from sibling 'info_markettrend_get_kline' by mentioning 'OHLCV→get_kline' and from 'info_markettrend_get_technical_analysis' by noting 'Synthesis→get_technical_analysis', though the distinction could be more explicit about what 'synthesis' entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance by stating when to use alternative tools: 'OHLCV→get_kline' for price/volume data and 'Synthesis→get_technical_analysis' for synthesized analysis. This clearly defines the scope of this tool versus its siblings, helping the agent select the correct tool for historical indicator data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
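To make the indicator-history request concrete, here is a hypothetical sketch of a call payload. Only the `indicators` array and the example values "rsi" and "macd" come from the schema discussed above; the `symbol` field and the validation rules are assumptions for illustration.

```python
# Hypothetical payload builder for get_indicator_history.
# `indicators` (with values like "rsi", "macd") is documented in the
# schema; `symbol` and the checks below are illustrative assumptions.

def build_indicator_history_request(symbol: str, indicators: list[str]) -> dict:
    if not indicators:
        raise ValueError("at least one indicator is required")
    # Normalize names so "RSI" and "rsi" refer to the same column.
    normalized = [name.lower() for name in indicators]
    return {"symbol": symbol, "indicators": normalized}
```
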
info_markettrend_get_kline (Grade A, Destructive)
Trend-index OHLCV for symbols, with optional indicator bundle. Exchange pair or contract candles, or fine intervals→marketdetail_get_kline. Columns→get_indicator_history. Signals→get_technical_analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Row count; alias of limit; default 100, max 400. | |
| limit | No | Row count; default 100, max 400; alternative to size. | |
| period | No | Window: 1h|4h|24h|7d|3d|5d|10d; default 24h. Ignored when start/end set. | |
| symbol | Yes | Symbol e.g. BTC, ETH. | |
| end_time | No | Absolute end. | |
| timeframe | Yes | Candle interval: 1m|5m|15m|1h|4h|1d. | |
| start_time | No | Absolute start: ms timestamp or ISO8601; wins over period when paired with end. | |
| with_indicators | No | If true, include technical columns in _source; default OHLCV only. |
Output Schema
| Name | Required | Description |
|---|---|---|
| size | Yes | |
| count | Yes | |
| items | Yes | |
| total | Yes | |
| period | Yes | |
| symbol | Yes | |
| timeframe | Yes | |
| duration_ms | Yes |
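The window precedence rule is the easiest part of this tool to get wrong. A minimal client-side sketch, with field names taken from the parameter table above and the rule that start_time/end_time win over `period`:

```python
# Sketch of building an info_markettrend_get_kline payload. The
# precedence logic mirrors the documented rule: when an absolute
# start/end pair is supplied, the relative `period` is ignored.

TIMEFRAMES = {"1m", "5m", "15m", "1h", "4h", "1d"}

def build_kline_request(symbol, timeframe, *, period="24h",
                        start_time=None, end_time=None,
                        with_indicators=False, limit=100):
    if timeframe not in TIMEFRAMES:
        raise ValueError(f"unsupported timeframe: {timeframe}")
    if not 1 <= limit <= 400:
        raise ValueError("limit must be between 1 and 400")
    payload = {"symbol": symbol, "timeframe": timeframe,
               "limit": limit, "with_indicators": with_indicators}
    if start_time is not None and end_time is not None:
        # Absolute window set: the server ignores `period`, so omit it.
        payload["start_time"] = start_time
        payload["end_time"] = end_time
    else:
        payload["period"] = period
    return payload
```
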
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these but adds minimal behavioral context beyond them—it mentions the optional indicator bundle but doesn't explain what 'destructive' means in this context (e.g., rate limits, data usage) or the implications of openWorldHint. With annotations covering core traits, the description adds some value but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact (three short phrases) and front-loaded with the core purpose. Every sentence earns its place by providing purpose and alternatives, though the phrasing is somewhat cryptic (e.g., 'Columns→get_indicator_history') and could be more readable. It avoids redundancy but sacrifices clarity for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, destructive hint) and the presence of an output schema, the description is reasonably complete. It covers purpose, distinguishes from key siblings, and references related tools. However, it lacks context on what 'trend-index' means versus regular data, and with destructiveHint=true, more guidance on usage constraints would be helpful despite the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no specific parameter semantics beyond implying 'symbol' and 'timeframe' are required and mentioning 'optional indicator bundle' (which maps to 'with_indicators'). This meets the baseline of 3 since the schema does the heavy lifting, but the description doesn't enhance understanding of parameter interactions or use cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Trend-index OHLCV for symbols' with an 'optional indicator bundle', providing a specific verb (get/retrieve) and resource (OHLCV data). It distinguishes itself from the sibling 'info_marketdetail_get_kline' by routing exchange pair or contract candles and finer intervals to that sibling, though the distinction could be more explicit about what 'trend-index' versus 'marketdetail' means.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-not-to-use guidance by stating 'fine intervals→marketdetail_get_kline' and alternative tools for specific needs: 'Columns→get_indicator_history' and 'Signals→get_technical_analysis'. This gives clear context for selecting among sibling tools, though it doesn't mention when to prefer this over other market data tools like snapshot tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_markettrend_get_technical_analysis (Grade A, Destructive)
Multi-timeframe bull/bear/neutral read from chart OHLCV only. OHLCV→get_kline. Raw columns→get_indicator_history.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Window: 1h|4h|24h|7d|3d|5d|10d|all; default 3d. Absolute start/end win when parseable. | |
| symbol | Yes | Symbol. | |
| end_time | No | Absolute end: ms or RFC3339. | |
| start_time | No | Absolute start: ms or RFC3339. |
Output Schema
| Name | Required | Description |
|---|---|---|
| period | No | Echo relative window; empty when start/end used. |
| signal | Yes | Aggregate: bullish|bearish|neutral. |
| symbol | Yes | |
| end_time | No | |
| start_time | No | |
| timeframes | Yes | |
| duration_ms | Yes |
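Consuming the response is straightforward given the output schema above. A sketch, assuming only the schema's required fields (`signal`, `symbol`, `timeframes`, `duration_ms`) plus the documented behavior that `period` is empty when absolute bounds were used:

```python
# Defensive reader for a get_technical_analysis response.
# The signal enum comes from the output schema; the summary string
# format is an illustrative choice.

VALID_SIGNALS = {"bullish", "bearish", "neutral"}

def read_signal(resp: dict) -> str:
    signal = resp["signal"]
    if signal not in VALID_SIGNALS:
        raise ValueError(f"unexpected aggregate signal: {signal}")
    # `period` echoes the relative window; empty when start/end used.
    window = resp.get("period") or "absolute window"
    return f"{resp['symbol']}: {signal} over {window}"
```
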
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies that the analysis is based on 'OHLCV only' and references sibling tools. Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description doesn't contradict this. However, it doesn't detail rate limits, auth needs, or specific destructive effects, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: the first sentence states the core purpose, and the second provides critical usage guidance. Every sentence earns its place with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity, rich annotations (e.g., destructiveHint=true), and the presence of an output schema, the description is complete enough. It clarifies the tool's scope and sibling relationships, and with the output schema, it doesn't need to explain return values. The description effectively complements the structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description doesn't add any parameter-specific information beyond what's in the schema. It implies timeframe usage through 'Multi-timeframe' but doesn't elaborate on parameter interactions. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Multi-timeframe bull/bear/neutral read from chart OHLCV only.' It specifies the verb ('read'), resource ('chart OHLCV'), and scope ('multi-timeframe'), and distinguishes from siblings by mentioning 'OHLCV→get_kline' and 'Raw columns→get_indicator_history', which are specific sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance by stating when to use this tool ('OHLCV only') and when to use alternatives ('OHLCV→get_kline' and 'Raw columns→get_indicator_history'). This clearly differentiates it from sibling tools like info_markettrend_get_kline and info_markettrend_get_indicator_history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_onchain_get_address_info (Grade A, Destructive)
Wallet profile: labels, balances, risk, and scope-specific summary. Paginated transfer list with time or value filters→get_address_transactions.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain id e.g. eth, trx, bsc, btc, sol, base, arb. Omit to auto-detect from labels; default eth if unknown. | |
| scope | No | basic (default) | with_defi | with_counterparties | with_pnl | full. full merges all extensions; empty sections may be [] or null. detailed maps to full. | |
| address | Yes | On-chain address. | |
| min_value_usd | No | Min token balance USD; keep rows with parsed value_usd >= threshold. 0 keeps non-negative; unset/unparseable value_usd dropped. | |
| upstream_raw_mode | No | off (default, no page raw) | lite (per-item source_raw where applicable) | full (page raw). include_upstream_raw=true forces full. | |
| include_upstream_raw | No | If true, include New Explorer ApiResponse.data as new_explorer_*_raw; large payload. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| tags | No | |
| chain | Yes | |
| scope | No | |
| score | Yes | |
| address | Yes | |
| balance | No | |
| sources | No | |
| tx_count | Yes | |
| win_rate | No | |
| last_seen | No | |
| defi_loans | No | |
| first_seen | No | |
| risk_level | Yes | |
| duration_ms | Yes | |
| entity_name | No | |
| usd_balance | No | |
| data_quality | No | |
| pnl_realized | No | |
| sharpe_ratio | No | |
| counterparties | No | |
| defi_positions | No | |
| pnl_unrealized | No | |
| token_balances | No | |
| detected_chains | No | |
| quality_reasons | No | |
| related_wallets | No | |
| entity_confidence | No | |
| new_explorer_assets_raw | No | |
| new_explorer_labels_raw | No | |
| new_explorer_search_raw | No | |
| new_explorer_overview_raw | No | |
| new_explorer_labels_multi_chain_raw | No |
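The `min_value_usd` semantics in the parameter table are precise enough to sketch: rows whose `value_usd` parses to at least the threshold are kept, a threshold of 0 keeps any non-negative value, and rows with an unset or unparseable `value_usd` are dropped. The row shape below is an assumption:

```python
# Client-side mirror of the documented min_value_usd filter for
# get_address_info token balances.

def filter_token_balances(rows, min_value_usd):
    kept = []
    for row in rows:
        try:
            value = float(row.get("value_usd"))
        except (TypeError, ValueError):
            continue  # unset or unparseable value_usd is dropped
        if value >= min_value_usd:
            kept.append(row)
    return kept
```
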
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds useful behavioral context beyond annotations: it mentions pagination for transfers and hints at filtering capabilities (time or value). However, it doesn't fully address the destructiveHint=true annotation (e.g., what gets destroyed or modified), which is a minor gap given the annotations already cover key traits like read-only status and idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: it starts with the core purpose, then adds usage guidance in a single, efficient sentence. Every word serves a clear purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, destructive hint) and the presence of both rich annotations and an output schema, the description is complete enough. It covers purpose, differentiation from siblings, and key behavioral aspects, leaving detailed parameter and output documentation to the structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the description doesn't need to detail parameters. It implies scope-specific summaries and filtering (min_value_usd), but adds no new syntax or format details beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Wallet profile: labels, balances, risk, and scope-specific summary') and distinguishes it from its sibling 'info_onchain_get_address_transactions' by noting that detailed transaction lists require that alternative tool. This provides precise differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool versus an alternative: 'Paginated transfer list with time or value filters→get_address_transactions.' This provides clear guidance that detailed transaction filtering should use the sibling tool, making it easy for an agent to choose correctly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_onchain_get_address_transactions (Grade A, Destructive)
Paginated transfer list with time, direction, and min-value filters. Wallet profile or scoped balance summary→get_address_info.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain id (eth, trx, bsc, btc, sol, base, arb, ...); default eth. | |
| limit | No | Max rows; default 20, max 400; maps to explorer page_size. Some filters apply only to GET list paths. | |
| address | Yes | On-chain address. | |
| tx_type | No | transfer | contract_call | token_transfer | all (default). | |
| end_time | No | End Unix seconds. | |
| start_time | No | Start Unix seconds; pair with end_time; absolute wins over time_range. | |
| time_range | No | Relative window vs start/end: 1h|24h|1d|7d|30d|90d as [now-Δ, now]. Ignored if start_time or end_time set. | |
| to_address | No | Filter recipient; maps to explorer GET to or POST body to. BlockInfo path ignores. | |
| from_address | No | Filter sender; maps to explorer GET from or POST transfers from. BlockInfo path ignores. | |
| min_value_usd | No | If >0, keep rows with value_usd >= threshold (same semantics as get_address_info min_value_usd). May map to explorer query when supported. | |
| nonzero_value | No | If true, drop zero-amount token_transfer rows; passes nonzero_value to explorer POST when used. | |
| upstream_raw_mode | No | off (default) | lite (per-item source_raw) | full (page + item raw). include_upstream_raw=true forces full. | |
| include_upstream_raw | No | If true, include new_explorer_address_*_raw and per-item source_raw; large payload. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| items | Yes | |
| limit | No | |
| total | Yes | |
| address | Yes | |
| sources | No | |
| tx_type | Yes | |
| end_time | No | |
| start_time | No | |
| time_range | No | |
| duration_ms | Yes | |
| data_quality | No | |
| transactions | Yes | |
| min_value_usd | No | |
| nonzero_value | No | |
| quality_reasons | No | |
| new_explorer_address_transfers_raw | No | |
| new_explorer_address_transactions_raw | No | |
| new_explorer_address_token_transfers_raw | No | |
| new_explorer_address_sol_transactions_raw | No | |
| new_explorer_address_internal_transactions_raw | No |
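The time-window precedence documented above can be sketched as a small resolver: `time_range` expands to a relative [now - Δ, now] window in Unix seconds, and is ignored as soon as either absolute bound is set.

```python
# Mirror of the documented window precedence for
# get_address_transactions. Durations follow the time_range enum.

RANGE_SECONDS = {"1h": 3600, "24h": 86400, "1d": 86400,
                 "7d": 7 * 86400, "30d": 30 * 86400, "90d": 90 * 86400}

def resolve_window(now, *, time_range=None, start_time=None, end_time=None):
    if start_time is not None or end_time is not None:
        return start_time, end_time  # absolute bounds win
    if time_range is not None:
        return now - RANGE_SECONDS[time_range], now
    return None, None
```
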
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (readOnlyHint: false, destructiveHint: true, etc.), so the description's burden is lower. It adds useful context about pagination and filtering capabilities, but doesn't disclose additional behavioral traits like rate limits, auth needs, or what 'destructive' entails in this context. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that are front-loaded and zero waste. Every word earns its place by stating the core functionality and providing a clear alternative tool reference.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high schema coverage (100%), presence of annotations, and output schema, the description is reasonably complete. It covers the main purpose and key usage guideline, though for a tool with 13 parameters and destructiveHint: true, it could benefit from more behavioral context like what 'destructive' means here or performance considerations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 13 parameters thoroughly. The description adds minimal value beyond the schema by mentioning 'time, direction, and min-value filters,' which aligns with parameters like start_time, end_time, to_address, from_address, and min_value_usd, but doesn't provide additional syntax or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as retrieving a 'paginated transfer list' with specific filters (time, direction, min-value), which is a specific verb+resource combination. It distinguishes from the sibling tool 'get_address_info' by mentioning that tool is for 'wallet profile or scoped balance summary,' though it doesn't explicitly differentiate from other transaction-related tools like 'get_transaction.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool vs. an alternative by stating 'Wallet profile or scoped balance summary→get_address_info,' explicitly naming the sibling tool for different use cases. However, it doesn't mention when not to use this tool or provide exclusions relative to other transaction-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_onchain_get_token_onchain (Grade B, Destructive)
Token scope: holders, transfers, activity, smart_money. Address ledger→get_address_transactions. One tx→get_transaction.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain; omit for service default. | |
| scope | No | holders|activity|transfers|smart_money|full; default full. | |
| token | Yes | Token symbol e.g. ETH, USDT, or contract address. | |
| upstream_raw_mode | No | off (default) | lite (row source_raw) | full (page + row). include_upstream_raw=true forces full. | |
| include_upstream_raw | No | If true, include explorer_*_page raw and transfer source_raw; large payload. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| scope | Yes | |
| token | Yes | |
| holders | No | |
| sources | No | |
| activity | No | |
| transfers | No | |
| token_info | No | |
| duration_ms | Yes | |
| smart_money | No | |
| data_quality | No | |
| quality_reasons | No | |
| explorer_token_raw | No | |
| explorer_holders_raw | No | |
| explorer_transfers_raw | No |
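A payload builder makes the scope enum and the documented override (include_upstream_raw=true forces upstream_raw_mode to "full") explicit. Both rules come from the parameter table above; the builder itself is a sketch:

```python
# Sketch of an info_onchain_get_token_onchain payload builder.

SCOPES = {"holders", "activity", "transfers", "smart_money", "full"}

def build_token_request(token, *, chain=None, scope="full",
                        upstream_raw_mode="off", include_upstream_raw=False):
    if scope not in SCOPES:
        raise ValueError(f"unknown scope: {scope}")
    if include_upstream_raw:
        upstream_raw_mode = "full"  # documented override
    payload = {"token": token, "scope": scope,
               "upstream_raw_mode": upstream_raw_mode,
               "include_upstream_raw": include_upstream_raw}
    if chain is not None:
        payload["chain"] = chain  # omit for the service default
    return payload
```
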
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, but the description doesn't explain what gets destroyed or the nature of the mutation, leaving behavioral traits unclear. It adds no context on auth needs, rate limits, or side effects beyond the annotations, failing to compensate for the lack of behavioral disclosure in the description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with key information about scope and sibling tools, with no wasted sentences. However, it lacks a clear opening statement of purpose, which slightly reduces its structural effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, destructive hint, open world hint) and the presence of an output schema, the description is incomplete. It doesn't address the destructive behavior hinted at in the annotations or provide usage context beyond sibling references, leaving gaps in understanding the tool's full behavior and risks.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value by listing scope options ('holders, transfers, activity, smart_money'), but this is already covered in the schema's description for 'scope'. No additional syntax or format details are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description mentions 'Token scope: holders, transfers, activity, smart_money' which hints at retrieving token-related data, but it doesn't clearly state the verb (e.g., 'retrieve' or 'fetch') or explicitly define the resource. It distinguishes from siblings by referencing 'Address ledger→get_address_transactions' and 'One tx→get_transaction', but the core purpose remains somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool versus alternatives by stating 'Address ledger→get_address_transactions. One tx→get_transaction.', which explicitly directs users to other tools for address-specific or single-transaction queries. However, it doesn't specify when not to use this tool or compare it to all siblings like 'info_coin_get_coin_info'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_onchain_get_transaction (Grade A, Destructive)
One transaction by hash; fields sparse if upstream thin. Token view→get_token_onchain.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain e.g. eth, bsc, btc; default eth. | |
| tx_hash | Yes | Transaction hash. Response may omit fields when upstream data is incomplete. | |
| upstream_raw_mode | No | off (default) | lite | full; only full adds explorer/blockinfo detail raw. include_upstream_raw=true forces full. | |
| include_upstream_raw | No | If true, include explorer_detail_raw and blockinfo_detail_raw; large payload. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | No | |
| from | No | |
| chain | Yes | |
| value | No | |
| tx_fee | No | |
| sources | No | |
| to_tags | No | |
| tx_hash | Yes | |
| tx_tags | No | |
| gas_used | No | |
| from_tags | No | |
| value_usd | No | |
| block_time | No | |
| tx_fee_usd | No | |
| duration_ms | Yes | |
| method_name | No | |
| block_height | No | |
| contract_ret | No | |
| data_quality | No | |
| instructions | No | |
| program_logs | No | |
| quality_reasons | No | |
| token_transfers | No | |
| sol_balance_change | No | |
| explorer_detail_raw | No | |
| blockinfo_detail_raw | No | |
| token_balance_change | No | |
| internal_transactions | No |
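Because the schema guarantees only `chain`, `tx_hash`, and `duration_ms`, a consumer should treat every other field as optional. A defensive sketch (the summary format is an illustrative choice):

```python
# Defensively read a get_transaction response whose fields may be
# sparse when upstream data is thin.

def transaction_summary(resp: dict) -> str:
    parts = [f"{resp['tx_hash']} on {resp['chain']}"]
    if resp.get("value_usd") is not None:
        parts.append(f"value ${resp['value_usd']}")
    if resp.get("block_height") is not None:
        parts.append(f"block {resp['block_height']}")
    return ", ".join(parts)
```
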
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it warns that 'fields sparse if upstream thin' (data completeness issues) and mentions 'Response may omit fields when upstream data is incomplete' in the schema. Annotations already indicate this is destructive (destructiveHint: true) and non-idempotent (idempotentHint: false), but the description provides practical implementation details about data reliability that aren't captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two short phrases that each serve distinct purposes: the first states the core functionality, the second provides important behavioral context and alternative guidance. There's zero wasted language, and the information is front-loaded appropriately for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (readOnlyHint, destructiveHint, etc.), 100% schema coverage, and an output schema exists, the description provides adequate context. It covers the core purpose, data reliability caveats, and points to an alternative tool. The main gap is the lack of explicit guidance on when to use this versus other transaction-related siblings beyond the token reference.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all 4 parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema descriptions. It mentions 'upstream thin' which relates to the tx_hash parameter behavior, but this is already covered in the schema's tx_hash description. The baseline 3 is appropriate when schema does all the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'One transaction by hash' with the specific resource being a blockchain transaction. It distinguishes from sibling 'info_onchain_get_token_onchain' by mentioning 'Token view→get_token_onchain' for alternative use cases. However, it doesn't fully differentiate from other transaction-related siblings like 'info_onchain_get_address_transactions' beyond the single vs. multiple distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('One transaction by hash') and explicitly mentions an alternative ('Token view→get_token_onchain'). However, it doesn't specify when NOT to use it or provide guidance on prerequisites like required chain selection, nor does it compare with other transaction-related siblings beyond the token reference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_bridge_metrics (Grade A, Destructive)
Bridge ranking or one-bridge chain breakdown. Cross-sector dashboard across DeFi, spot, perp, stablecoin, or bridge→get_defi_overview.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain; normalized e.g. eth→Ethereum; filters chain_breakdown. | |
| limit | No | Rows; default 10, max 400. | |
| sort_by | No | volume_24h (default)|volume_7d|deposit_txs_24h. | |
| bridge_name | No | Bridge name; omit for ranked list without chain_breakdown. |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| sort_by | Yes | |
| bridge_name | Yes | |
| duration_ms | Yes |
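The parameter table implies two call modes: omit `bridge_name` for a ranked list, or set it for a single bridge's chain_breakdown (optionally filtered by `chain`). A sketch of a builder that enforces the documented enum and limit cap:

```python
# Sketch of a get_bridge_metrics payload builder covering both modes.

SORT_KEYS = {"volume_24h", "volume_7d", "deposit_txs_24h"}

def build_bridge_request(*, bridge_name=None, chain=None,
                         sort_by="volume_24h", limit=10):
    if sort_by not in SORT_KEYS:
        raise ValueError(f"unsupported sort_by: {sort_by}")
    payload = {"sort_by": sort_by, "limit": min(limit, 400)}
    if bridge_name is not None:
        payload["bridge_name"] = bridge_name  # one-bridge breakdown mode
    if chain is not None:
        payload["chain"] = chain  # filters chain_breakdown
    return payload
```
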
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these annotations—it doesn't claim the tool is read-only or non-destructive. While it doesn't explicitly add behavioral details beyond annotations (e.g., rate limits or auth needs), the annotations are comprehensive, so the bar is lower. The description adds value by specifying the cross-sector scope, which isn't covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that packs in purpose, scope, and an alternative reference, making it relatively concise. However, it's somewhat dense and could benefit from clearer structuring (e.g., separating purpose from usage guidance). It's not overly verbose, but the information is crammed together, reducing readability slightly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, destructive operation), annotations provide good behavioral context (destructive, open-world, etc.), and an output schema exists, so the description doesn't need to explain return values. The description covers purpose and usage with an alternative, which is adequate. However, it could be more complete by clarifying the relationship with other sibling tools or detailing output structure, though the output schema mitigates this gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all 4 parameters (chain, limit, sort_by, bridge_name). The description doesn't add any parameter-specific semantics beyond what the schema provides. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'Bridge ranking or one-bridge chain breakdown' and mentions 'Cross-sector dashboard across DeFi, spot, perp, stablecoin', which gives a general purpose. However, it's somewhat vague about the exact output format and doesn't clearly differentiate from sibling tools like 'info_platformmetrics_get_defi_overview' beyond mentioning it as an alternative. The purpose is understandable but lacks specificity about what 'ranking' or 'breakdown' entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for bridge metrics analysis across multiple sectors (DeFi, spot, perp, stablecoin). It explicitly mentions an alternative ('bridge→get_defi_overview'), which helps guide usage. However, it doesn't specify when NOT to use this tool or compare it to other sibling tools beyond the one mentioned, leaving some ambiguity about its unique role in the broader toolset.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_defi_overview (A, Destructive)
Cross-sector dashboard for DeFi, spot, perp, stablecoin, or bridge categories. One bridge ranking or chain detail→get_bridge_metrics. One protocol→get_platform_info.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | all\|defi\|spot\|perp\|stablecoin\|bridge or platform type; unknown values passed through; empty if no match. | |
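A hypothetical tools/call payload for this single-parameter tool; the helper name is illustrative, and the category values are taken from the schema above, which also notes that unknown values are passed through unchanged:

```python
# Hypothetical tools/call payload for info_platformmetrics_get_defi_overview.
# Unknown category values are passed through to the server per the schema note.
KNOWN_CATEGORIES = {"all", "defi", "spot", "perp", "stablecoin", "bridge"}

def defi_overview_payload(category="all"):
    return {
        "name": "info_platformmetrics_get_defi_overview",
        "arguments": {"category": category},
    }

payload = defi_overview_payload("stablecoin")
```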
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| total | Yes | |
| category | Yes | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide critical behavioral information (destructiveHint: true, readOnlyHint: false, etc.). The description adds some context about the cross-sector nature and category scope, but doesn't elaborate on what 'destructive' means in this context or provide additional behavioral details like rate limits, authentication needs, or specific consequences of the destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that pack substantial information. The first sentence states the purpose and scope, the second provides crucial usage guidelines. Every word earns its place with zero wasted text, making it highly efficient for an AI agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering key behavioral aspects, 100% schema coverage, and an output schema exists (so return values needn't be explained), the description provides good contextual completeness. It covers purpose, scope, and usage guidelines effectively, though it could provide more context about what 'destructive' means specifically for this dashboard tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single 'category' parameter. The description mentions 'defi, spot, perp, stablecoin, or bridge categories' which aligns with but doesn't add meaningful semantics beyond what the schema provides ('all|defi|spot|perp|stablecoin|bridge'). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'cross-sector dashboard' for specific DeFi-related categories, which is a specific verb+resource combination. It distinguishes itself from sibling tools by mentioning 'get_bridge_metrics' and 'get_platform_info' for more specific use cases, though it doesn't explicitly differentiate itself from all siblings in the same category.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use alternatives: 'One bridge ranking or chain detail→get_bridge_metrics. One protocol→get_platform_info.' This clearly indicates when NOT to use this tool and names specific sibling tools for those scenarios, making it highly actionable for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_exchange_reserves (A, Destructive)
Exchange reserve snapshots by venue and asset. TVL history→get_platform_history.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | Asset e.g. BTC\|ETH\|USDT; empty = any. | |
| period | No | 24h\|7d snapshot window; empty = no time filter. | |
| exchange | No | Exchange name; empty = any. |
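The empty-string-means-any convention from the table can be sketched with a hypothetical builder (the function name and validation are illustrative, not from the server):

```python
# Hypothetical argument builder for info_platformmetrics_get_exchange_reserves.
# Per the parameter table, an empty string means "no filter" for every field.
def reserve_args(asset="", period="", exchange=""):
    if period not in ("", "24h", "7d"):
        raise ValueError("period must be '', '24h', or '7d'")
    return {"asset": asset, "period": period, "exchange": exchange}

btc_24h = reserve_args(asset="BTC", period="24h")  # BTC, 24h window, any venue
```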
Output Schema
| Name | Required | Description |
|---|---|---|
| asset | Yes | |
| count | Yes | |
| items | Yes | |
| total | Yes | |
| period | Yes | |
| exchange | Yes | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a destructive, non-idempotent, open-world operation. The description adds no behavioral context beyond what annotations provide, such as rate limits, authentication needs, or what 'destructive' entails in this context. However, it doesn't contradict annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—just one sentence plus an alternative reference. It's front-loaded with the core purpose and wastes no words. Every element (resource scope and alternative) earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, destructive operation) and the presence of annotations and an output schema, the description is reasonably complete. It covers the purpose and provides an alternative, though it could benefit from more behavioral context given the destructive hint. The output schema reduces the need to explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all parameters (asset, period, exchange). The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving exchange reserve snapshots by venue and asset. It specifies the resource (exchange reserves) and scope (by venue and asset). However, it doesn't explicitly differentiate from sibling tools like 'info_platformmetrics_get_platform_history' beyond mentioning it as an alternative for TVL history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by mentioning 'TVL history→get_platform_history' as an alternative, indicating when to use the sibling tool instead. It doesn't explicitly state when not to use this tool or compare with other siblings like 'info_platformmetrics_get_bridge_metrics', but the alternative guidance is helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_liquidation_heatmap (B, Destructive)
Liquidation density by symbol and price buckets.
| Name | Required | Description | Default |
|---|---|---|---|
| range | No | Price range or bucket spec. | |
| symbol | Yes | Symbol e.g. BTC, ETH. | |
| exchange | No | Exchange; empty = all venues. |
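Since symbol is the only required field, a minimal call can be sketched as follows; this is a hypothetical helper, not server code, and the parameter semantics come from the table above:

```python
# Hypothetical minimal call for info_platformmetrics_get_liquidation_heatmap.
# symbol is the only required field; an empty exchange means all venues.
def heatmap_args(symbol, price_range=None, exchange=""):
    if not symbol:
        raise ValueError("symbol is required, e.g. 'BTC' or 'ETH'")
    args = {"symbol": symbol, "exchange": exchange}
    if price_range is not None:
        args["range"] = price_range  # optional price range or bucket spec
    return args

btc_heatmap = heatmap_args("BTC")
```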
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| range | Yes | |
| total | Yes | |
| symbol | Yes | |
| exchange | Yes | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these but adds minimal behavioral context beyond them. It hints at data aggregation ('density by symbol and price buckets') but doesn't elaborate on side effects, rate limits, authentication needs, or what 'destructive' entails (e.g., data modification or high resource usage). With annotations covering key traits, the description adds some value but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient phrase ('Liquidation density by symbol and price buckets') that is front-loaded with the core purpose. It wastes no words and is appropriately sized for a data retrieval tool, with every element contributing directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial data analysis), annotations provide safety and behavior hints, and an output schema exists (so return values needn't be described), the description is mostly complete. It covers the key concept (liquidation heatmap) but could better integrate with annotations (e.g., explain destructive implications) or sibling context. Overall, it's adequate but has minor gaps in contextual richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter descriptions in the schema (e.g., 'Symbol e.g. BTC, ETH', 'Exchange; empty = all venues'). The description mentions 'symbol and price buckets', aligning with the 'symbol' and 'range' parameters, but adds no additional meaning beyond the schema. For high schema coverage, the baseline is 3, as the description doesn't compensate with extra parameter insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Liquidation density by symbol and price buckets' clearly states what the tool does (analyzes liquidation density) and identifies key dimensions (symbol and price buckets). It distinguishes itself from siblings by focusing on liquidation heatmaps rather than general platform metrics like bridge metrics or exchange reserves. However, it doesn't specify the exact verb (e.g., 'retrieve' or 'calculate') or resource type (e.g., data from which platform).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., info_platformmetrics_get_exchange_reserves or info_platformmetrics_get_platform_history) or specify contexts where a liquidation heatmap is preferred over other market data tools. Usage is implied only by the tool name and description, with no explicit when/when-not statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_platform_history (A, Destructive)
Daily TVL, volume, and fees time series for one named protocol. One-shot profile→get_platform_info. Ranked multi-protocol list→search_platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| metrics | No | tvl\|volume\|fees\|revenue; default [tvl] when empty. | |
| end_date | No | End YYYY-MM-DD. | |
| start_date | No | Start YYYY-MM-DD. | |
| platform_name | Yes | Protocol or platform name. |
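The table's rules (metrics defaulting to [tvl], YYYY-MM-DD dates) can be encoded in a hypothetical client-side builder; the helper name, the regex check, and the example date range are illustrative:

```python
import re

# Hypothetical argument builder for info_platformmetrics_get_platform_history.
# Encodes the table's rules: metrics defaults to ["tvl"], dates are YYYY-MM-DD.
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")
ALLOWED_METRICS = {"tvl", "volume", "fees", "revenue"}

def history_args(platform_name, metrics=None, start_date=None, end_date=None):
    metrics = metrics or ["tvl"]
    if not set(metrics) <= ALLOWED_METRICS:
        raise ValueError(f"metrics must be a subset of {ALLOWED_METRICS}")
    for d in (start_date, end_date):
        if d is not None and not DATE_RE.match(d):
            raise ValueError(f"dates must be YYYY-MM-DD, got {d!r}")
    args = {"platform_name": platform_name, "metrics": metrics}
    if start_date:
        args["start_date"] = start_date
    if end_date:
        args["end_date"] = end_date
    return args

q1_tvl = history_args("aave-v3", start_date="2024-01-01", end_date="2024-03-31")
```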
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| total | Yes | |
| metrics | Yes | |
| end_date | Yes | |
| start_date | Yes | |
| duration_ms | Yes | |
| platform_name | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (destructiveHint: true, readOnlyHint: false, etc.), but the description adds useful context about the data being 'daily' time series and the default behavior when 'metrics' is empty. However, it doesn't elaborate on what 'destructive' means in this context or address rate limits, which would be valuable given the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured in two sentences: the first states the core functionality, and the second provides clear usage guidelines with sibling tool references. Every word serves a purpose with zero wasted text, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations, a complete input schema, and an output schema (implied by 'Has output schema: true'), the description provides adequate context for tool selection. It covers purpose, differentiation, and basic behavioral context. The main gap is lack of detail about what 'destructive' means operationally, but overall it's reasonably complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds minimal semantic context by mentioning 'tvl|volume|fees|revenue' in the metrics description, but this largely repeats schema information. The baseline of 3 is appropriate since the schema carries most of the parameter documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving daily TVL, volume, and fees time series for a specific protocol. It uses specific verbs ('get') and resources ('platform history'), and explicitly distinguishes from sibling tools like 'get_platform_info' and 'search_platforms' by contrasting single-protocol vs. multi-protocol use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'One-shot profile→get_platform_info' indicates this tool is for historical time series data, while 'Ranked multi-protocol list→search_platforms' clarifies it's for single-protocol analysis versus multi-protocol comparisons. This directly addresses sibling tool differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_platform_info (A, Destructive)
One-shot protocol profile by name and scope. Ranked multi-protocol list→search_platforms. Daily time series→get_platform_history.
| Name | Required | Description | Default |
|---|---|---|---|
| scope | No | basic (default)\|with_chain_breakdown (adds tvl_by_chain, volume_by_chain)\|full (adds history_30d, top_pools). | |
| platform_name | Yes | Protocol or platform name. |
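The three scope levels from the schema can be sketched in a hypothetical call builder (the function name is illustrative; the scope values and what each adds come from the table above):

```python
# Hypothetical payload for info_platformmetrics_get_platform_info.
# scope widens the one-shot profile: basic -> with_chain_breakdown -> full.
SCOPES = ("basic", "with_chain_breakdown", "full")

def platform_info_args(platform_name, scope="basic"):
    if scope not in SCOPES:
        raise ValueError(f"scope must be one of {SCOPES}")
    return {"platform_name": platform_name, "scope": scope}

full_profile = platform_info_args("aave-v3", scope="full")  # adds history_30d, top_pools
```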
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | |
| items | Yes | |
| scope | Yes | |
| total | Yes | |
| duration_ms | Yes | |
| platform_name | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (readOnlyHint: false, destructiveHint: true, etc.), but the description adds context by implying it's a one-shot operation and specifying scope options. However, it doesn't fully disclose potential impacts of destructiveHint or rate limits. No contradiction with annotations, so score reflects moderate added value beyond structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and well-structured, using only two sentences that efficiently convey purpose and usage guidelines without any wasted words. It's front-loaded with the core function and immediately provides alternative tool references.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (2 parameters, destructive operation) and the presence of both annotations and an output schema, the description is reasonably complete. It covers purpose and usage well, though could benefit from more behavioral context given the destructive hint. The output schema reduces the need to explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description mentions 'by name and scope' which aligns with parameters but doesn't add significant meaning beyond what the schema provides. Baseline score of 3 is appropriate as the schema carries the burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as a 'one-shot protocol profile by name and scope,' which is specific about retrieving protocol information. It distinguishes itself from siblings by mentioning search_platforms for ranked multi-protocol lists and get_platform_history for daily time series, though it could be more explicit about the exact resource (e.g., protocol metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines by stating when to use this tool versus alternatives: use this for one-shot protocol profiles by name and scope, use search_platforms for ranked multi-protocol lists, and use get_platform_history for daily time series. This clearly differentiates it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_stablecoin_info (A, Destructive)
Stablecoin-only ranking or per-symbol chain breakdown. Broad market dashboard→get_market_overview. Cross-sector rollup→get_defi_overview.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter; list mode restricts to stables with presence on chain. | |
| limit | No | List size; default 10, max 400; single row when symbol set. | |
| symbol | No | Stablecoin symbol USDT/USDC/DAI; omit for ranked list. |
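The two output modes (ranked list vs. per-symbol breakdown) hinge on whether symbol is set. A hypothetical builder, with names and validation that are illustrative rather than from the server:

```python
# Hypothetical argument builder for info_platformmetrics_get_stablecoin_info.
# Omitting symbol requests the ranked list; setting it (e.g. "USDT") asks the
# server for a single row with its per-chain breakdown.
def stablecoin_args(symbol=None, chain=None, limit=10):
    if not 1 <= limit <= 400:
        raise ValueError("limit must be between 1 and 400")
    args = {"limit": limit}
    if symbol:
        args["symbol"] = symbol  # per-symbol mode: server returns one row
    if chain:
        args["chain"] = chain    # list mode: restrict to stables present on chain
    return args

usdt_row = stablecoin_args(symbol="USDT")
eth_list = stablecoin_args(chain="Ethereum", limit=25)
```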
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| symbol | Yes | |
| duration_ms | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide significant behavioral information (readOnlyHint=false, destructiveHint=true, etc.), so the bar is lower. The description adds some context about the tool's scope (stablecoin-specific) and output modes (ranking vs. breakdown), which is useful. However, it doesn't elaborate on the destructive nature or other behavioral traits beyond what annotations already declare.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, with every sentence earning its place. The first sentence states the core purpose, and the next two provide clear usage alternatives. There's zero wasted text, making it highly efficient for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and behavior), 100% schema coverage, and the presence of an output schema, the description is complete enough. It provides purpose, differentiation from siblings, and usage context without needing to repeat structured information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions 'ranking or per-symbol chain breakdown' which aligns with the symbol parameter's purpose, but doesn't add significant semantic value beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Stablecoin-only ranking or per-symbol chain breakdown.' It distinguishes itself from siblings by explicitly naming alternatives for broader contexts: 'Broad market dashboard→get_market_overview. Cross-sector rollup→get_defi_overview.' This provides precise differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states 'Broad market dashboard→get_market_overview. Cross-sector rollup→get_defi_overview,' clearly indicating sibling tools for different use cases. This helps the agent choose appropriately based on the required scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_get_yield_pools (A, Destructive)
Lending and LP pools by APY or TVL. One protocol profile→get_platform_info. TVL series→get_platform_history.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter. | |
| limit | No | Rows; default 20, max 400. | |
| symbol | No | Asset e.g. USDC, ETH-USDC. | |
| project | No | Protocol e.g. aave-v3. | |
| sort_by | No | apy\|tvl_usd; default apy. | |
| pool_type | No | Exposure single\|multi or Lending\|LP\|Staking; maps to ES exposure. | |
| min_tvl_usd | No | Min TVL USD; default 100000 when omitted; 0 = no floor. | |
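This tool has the most defaults to keep straight, so a hypothetical builder that encodes them is useful; the function name and validation are illustrative, while the defaults (sort_by apy, limit 20, 100000 USD TVL floor, 0 to disable) come from the table above:

```python
# Hypothetical argument builder for info_platformmetrics_get_yield_pools.
# Encodes the table's defaults: sort_by apy, limit 20, and a 100000 USD
# TVL floor when min_tvl_usd is omitted (pass 0 to disable the floor).
def yield_pool_args(chain=None, symbol=None, project=None, pool_type=None,
                    sort_by="apy", limit=20, min_tvl_usd=None):
    if sort_by not in ("apy", "tvl_usd"):
        raise ValueError("sort_by must be 'apy' or 'tvl_usd'")
    if not 1 <= limit <= 400:
        raise ValueError("limit must be between 1 and 400")
    args = {
        "sort_by": sort_by,
        "limit": limit,
        "min_tvl_usd": 100_000 if min_tvl_usd is None else min_tvl_usd,
    }
    for key, val in (("chain", chain), ("symbol", symbol),
                     ("project", project), ("pool_type", pool_type)):
        if val is not None:
            args[key] = val
    return args

aave_usdc = yield_pool_args(project="aave-v3", symbol="USDC", min_tvl_usd=0)
```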
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| symbol | Yes | |
| project | Yes | |
| sort_by | Yes | |
| pool_type | Yes | |
| duration_ms | Yes | |
| min_tvl_usd | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description implies a read-only query (retrieving pools). This creates a contradiction, as the description doesn't explain any destructive behavior. The description adds minimal behavioral context beyond annotations, such as sorting options, but fails to address the destructive hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences, front-loading the core purpose and efficiently listing alternatives. Every word serves a purpose, with no wasted text or redundancy, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, destructive annotations) and the presence of an output schema, the description is moderately complete. It covers purpose and alternatives but lacks crucial context about the destructive behavior hinted in annotations, which could mislead an agent. The output schema reduces the need to explain return values, but the contradiction with annotations is a significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all 7 parameters. The description adds no additional parameter semantics, such as clarifying relationships between parameters (e.g., how 'chain' interacts with 'project') or providing examples beyond what's in the schema. Baseline 3 is appropriate since the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Lending and LP pools by APY or TVL', specifying both the resource (pools) and sorting criteria. It distinguishes itself from siblings by mentioning 'One protocol profile→get_platform_info' and 'TVL series→get_platform_history', though it doesn't explicitly differentiate itself from all other platformmetrics tools like 'get_bridge_metrics' or 'get_exchange_reserves'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit alternatives for specific use cases: 'One protocol profile→get_platform_info' and 'TVL series→get_platform_history'. However, it doesn't offer guidance on when to use this tool versus other filtering tools like 'search_platforms' or general context about when to prefer this over other platformmetrics tools, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
info_platformmetrics_search_platforms (B, Destructive)
Ranked multi-protocol list with filter and sort. One-shot profile→get_platform_info. One named protocol daily series→get_platform_history.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain filter. | |
| limit | No | Rows; max 400. | 20 |
| sort_by | No | tvl \| volume_24h \| volume_spot_24h \| volume_perps_24h \| fees_24h. | tvl |
| platform_type | No | CEX \| DEX \| Lending \| Bridge \| Yield \| Derivatives \| ... | |
Output Schema
| Name | Required | Description |
|---|---|---|
| chain | Yes | |
| count | Yes | |
| items | Yes | |
| limit | Yes | |
| total | Yes | |
| sort_by | Yes | |
| duration_ms | Yes | |
| platform_type | Yes | |
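Putting the two tables together, an agent would invoke this tool through MCP's JSON-RPC `tools/call` method. The sketch below is illustrative only: the argument values are assumptions chosen to match the documented defaults and categories, not a captured request.

```python
import json

# Hypothetical JSON-RPC 2.0 "tools/call" request for
# info_platformmetrics_search_platforms. Argument values are
# illustrative; only the parameter names come from the table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "info_platformmetrics_search_platforms",
        "arguments": {
            "chain": "Ethereum",        # optional chain filter
            "limit": 20,                # default 20, max 400
            "sort_by": "tvl",           # default sort key
            "platform_type": "Lending"  # one of the documented categories
        },
    },
}

payload = json.dumps(request)
print(payload)
```

The response would then carry the output fields listed above (`items`, `count`, `total`, and so on) in the tool result.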
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false, but the description doesn't add meaningful behavioral context beyond this. It doesn't explain what 'destructive' means in this context (e.g., does it modify data or have side effects?), nor does it address rate limits, authentication needs, or output behavior. With annotations covering some traits, the description adds minimal value and does not clarify the destructive behavior those annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loaded with the core function ('Ranked multi-protocol list with filter and sort') and followed by usage guidance. There's no wasted text, but the second sentence could be clearer in its phrasing, slightly affecting readability without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, annotations, and full schema coverage, the description is somewhat complete but lacks depth. It doesn't fully explain the tool's purpose in the context of sibling tools or the destructive behavior hinted by annotations. For a tool with 4 parameters and complex annotations, more context on behavior and output would be beneficial, but the existing structured data mitigates major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for all 4 parameters (e.g., 'chain filter,' 'Rows; default 20, max 400'). The description doesn't add any parameter semantics beyond what the schema provides, such as explaining 'platform_type' values or 'sort_by' implications. Baseline 3 is appropriate since the schema does the heavy lifting, but no extra value is added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it provides a 'Ranked multi-protocol list with filter and sort,' which indicates a search/ranking function for platforms. However, it's vague about the specific resource (e.g., what 'platforms' refer to—DeFi protocols, exchanges?) and doesn't clearly differentiate from siblings like 'info_platformmetrics_get_platform_info' or 'info_platformmetrics_get_platform_history,' though it mentions them in a different context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance by stating 'One-shot profile→get_platform_info. One named protocol daily series→get_platform_history,' which helps distinguish when to use this tool versus alternatives. However, it lacks explicit exclusions or when-not-to-use scenarios, and the guidance is somewhat cryptic without further context on what 'one-shot profile' or 'daily series' entail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
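Before publishing, a quick local sanity check of the file's structure can catch typos. This is a minimal sketch under stated assumptions: the field names come from the example above, but the validation logic is my own, not Glama's verifier.

```python
import json

# Example /.well-known/glama.json content, copied from the snippet above.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Minimal structural checks (an assumption, not Glama's actual rules):
# a schema URL and at least one maintainer entry with an email.
assert doc["$schema"].startswith("https://glama.ai/")
assert doc["maintainers"], "at least one maintainer is required"
assert all("email" in m for m in doc["maintainers"])
print("glama.json looks structurally valid")
```

Remember that the email in the real file must be the one tied to your Glama account, or verification will fail.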
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!