
Gate MCP

Server Details

Public Gate market data MCP for spot, futures, margin, options, delivery, earn, and alpha.

Status: Healthy
Transport: Streamable HTTP
Repository: gate/gate-mcp
GitHub Stars: 27

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 63 of 63 tools scored. Lowest: 2.5/5.

Server Coherence: A
Disambiguation: 4/5

Tools are well-organized by product categories (e.g., cex_spot, cex_fx, cex_options) with clear distinctions between different asset classes and data types. However, some overlap exists within categories—for example, multiple 'list' and 'get' tools for similar data (like tickers, candlesticks) across different products could cause minor confusion if the agent doesn't carefully note the prefix.

Naming Consistency: 5/5

Naming follows a highly consistent pattern: all tools use snake_case with a structured prefix (e.g., cex_spot_get_spot_tickers, cex_fx_list_fx_contracts). The verb_noun format is uniform, and prefixes clearly indicate the product area, making the set predictable and easy to navigate.

Tool Count: 2/5

With 63 tools, the count is excessive for a single server, even given the broad scope of a cryptocurrency exchange API. This many tools can overwhelm agents, increase complexity, and likely includes redundant or niche endpoints that don't all earn their place in a coherent toolset.

Completeness: 5/5

The toolset provides comprehensive coverage across multiple exchange products (spot, futures, options, margin, etc.), including data retrieval for contracts, order books, trades, and analytics. There are no obvious gaps for the domain, supporting full lifecycle and informational needs for cryptocurrency trading and analysis.

Available Tools

63 tools
cex_activity_list_activity_types (Grade: B)
Read-only, Idempotent

List activity types.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds no behavioral context beyond what annotations provide, such as rate limits or authentication needs, but doesn't contradict them either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's appropriately sized for a simple tool and front-loaded with the core action, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is minimally adequate. However, it lacks context about what 'activity types' are or how they're used, which could help the agent understand the tool's role better.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter details, so it meets the baseline expectation for a parameterless tool without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List activity types' clearly states the verb ('List') and resource ('activity types'), providing basic purpose. However, it doesn't differentiate from sibling tools (all starting with 'cex_activity_') or specify what 'activity types' refers to in this context, making it somewhat vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any context, prerequisites, or exclusions, leaving the agent with no usage instructions beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_alpha_list_alpha_currencies (Grade: B)
Read-only, Idempotent

List Alpha currencies.

Parameters (JSON Schema)

- page (optional): Page number
- limit (optional): Maximum number of records returned in a single list
- currency (optional): Query by currency symbol
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or pagination behavior. However, it doesn't contradict annotations, so it meets the lower bar with annotations present but adds minimal extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to scan. Every word earns its place, and there's no unnecessary elaboration or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (a read-only list operation), rich annotations (covering safety and idempotency), and high schema coverage, the description is minimally adequate. However, it lacks output details (no output schema) and doesn't clarify the 'Alpha' context, leaving gaps in understanding the tool's full scope. It's complete enough for basic use but could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter descriptions (page, limit, currency). The description doesn't add any semantic details beyond what the schema provides, such as default values, valid ranges, or examples. Given the high schema coverage, a baseline score of 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List Alpha currencies' clearly states the action (list) and resource (Alpha currencies), but it's vague about what 'Alpha currencies' specifically means and doesn't differentiate from sibling tools like 'cex_spot_list_currencies' or 'cex_earn_list_uni_currencies'. It provides basic purpose but lacks specificity about the domain or scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. With many sibling tools for listing currencies (e.g., cex_spot_list_currencies, cex_earn_list_uni_currencies), there's no indication of what makes 'Alpha currencies' distinct or when to prefer this tool over others. Usage is implied only by the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_alpha_list_alpha_tickers (Grade: B)
Read-only, Idempotent

List Alpha tickers.

Parameters (JSON Schema)

- page (optional): Page number
- limit (optional): Maximum number of records returned in a single list
- currency (optional): Query by specified currency name
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide strong behavioral hints (readOnlyHint: true, destructiveHint: false, openWorldHint: true, idempotentHint: true), so the agent knows this is a safe, read-only operation. The description adds no additional behavioral context beyond the basic action, such as pagination behavior, rate limits, or authentication needs. Since annotations cover the safety profile well, a baseline 3 is appropriate—the description doesn't contradict annotations but adds minimal value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise ('List Alpha tickers.'), with no wasted words. It's front-loaded and gets straight to the point, making it efficient for an agent to parse. Every word earns its place, though this brevity contributes to gaps in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (list operation with pagination/filtering), rich annotations (covering safety and idempotency), and full schema coverage, the description is minimally adequate. However, without an output schema, it doesn't explain what 'Alpha tickers' are or the return format, leaving some contextual gaps. The annotations and schema compensate partially, but the description could do more to clarify the tool's domain-specific context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all three parameters (page, limit, currency). The description doesn't add any semantic context beyond what the schema provides, such as explaining what 'Alpha tickers' are or how parameters interact. Given high schema coverage, the baseline score of 3 is warranted—the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List Alpha tickers' clearly states the verb ('List') and resource ('Alpha tickers'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'cex_alpha_list_alpha_currencies' or 'cex_alpha_list_alpha_tokens'—it's clear what it does but not how it differs from similar listing tools in the same domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools for listing different types of data (e.g., currencies, tokens, tickers in other domains), there's no indication of what 'Alpha tickers' are specifically or when this tool is preferred over others like 'cex_dc_list_dc_tickers' or 'cex_fx_get_fx_tickers'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_alpha_list_alpha_tokens (Grade: C)
Read-only, Idempotent

List Alpha tokens.

Parameters (JSON Schema)

- page (optional): Page number
- chain (optional): Chain: solana, eth, bsc, base, world, sui, arbitrum, avalanche, polygon, linea, optimism, zksync, gatelayer
- address (optional): Query by contract address
- launch_platform (optional): Launch platform: meteora_dbc, fourmeme, moonshot, pump, raydium_launchlab, letsbonk, gatefun, virtuals
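As a sketch of how a client might assemble arguments for this tool, the helper below builds the call payload and validates the two enumerated parameters before sending anything. Only the parameter names and allowed values come from the schema above; the helper itself (`alpha_tokens_args`) is a hypothetical client-side convenience, not part of the server.

```python
# Allowed values copied from the documented schema; checking them
# client-side makes a typo fail before any call is made.
CHAINS = {"solana", "eth", "bsc", "base", "world", "sui", "arbitrum",
          "avalanche", "polygon", "linea", "optimism", "zksync", "gatelayer"}
LAUNCH_PLATFORMS = {"meteora_dbc", "fourmeme", "moonshot", "pump",
                    "raydium_launchlab", "letsbonk", "gatefun", "virtuals"}

def alpha_tokens_args(page=None, chain=None, address=None, launch_platform=None):
    """Build an arguments payload for cex_alpha_list_alpha_tokens.

    Every parameter is optional per the schema; omitted ones are
    dropped from the payload rather than sent as null.
    """
    if chain is not None and chain not in CHAINS:
        raise ValueError(f"unsupported chain: {chain!r}")
    if launch_platform is not None and launch_platform not in LAUNCH_PLATFORMS:
        raise ValueError(f"unsupported launch_platform: {launch_platform!r}")
    args = {"page": page, "chain": chain, "address": address,
            "launch_platform": launch_platform}
    return {k: v for k, v in args.items() if v is not None}
```

For example, `alpha_tokens_args(chain="base", launch_platform="virtuals")` yields a payload containing only those two keys, while an unknown chain name raises before the request leaves the client.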
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds no behavioral context beyond the name, such as pagination details, rate limits, or authentication needs. It doesn't contradict annotations, but provides minimal extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded, making it easy to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no output schema) and rich annotations, the description is incomplete. It lacks details on what 'Alpha tokens' are, how results are structured, or any usage context, leaving gaps despite the annotations covering safety aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all four parameters. The description adds no meaning beyond the schema, not explaining how parameters interact or their impact on results. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List Alpha tokens' clearly states the action (list) and resource (Alpha tokens), but it's vague about what 'Alpha tokens' are and doesn't differentiate from siblings like 'cex_alpha_list_alpha_currencies' or 'cex_alpha_list_alpha_tickers'. It provides basic purpose but lacks specificity about scope or context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description doesn't mention any context, prerequisites, or exclusions, and with many sibling tools available, this omission leaves the agent without direction on selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_get_dc_contract (Grade: B)
Read-only, Idempotent

Get delivery contract.

Parameters (JSON Schema)

- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- contract (required): Delivery contract identifier (e.g. BTC_USDT)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation, covering key behavioral traits. The description adds no additional context beyond the basic action, such as rate limits, authentication needs, or what 'get' specifically returns (e.g., contract details, status). With annotations providing safety and idempotency, the bar is lower, but the description lacks enriching details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with a single sentence 'Get delivery contract.', which is front-loaded and wastes no words. It efficiently states the core purpose without unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, 1 required) and rich annotations (readOnlyHint, openWorldHint, etc.), the description is minimally adequate. However, with no output schema, it doesn't explain what is returned (e.g., contract details, fields), leaving a gap in understanding the result. The annotations cover safety, but the description lacks completeness for a retrieval operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('contract' as identifier, 'settle' as settlement currency). The description adds no meaning beyond the schema, not explaining parameter interactions (e.g., if 'settle' is required for certain contracts) or examples. Baseline is 3 since the schema does the heavy lifting, but no extra value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get delivery contract' states the basic action (get) and resource (delivery contract), but is vague about what 'get' entails—does it retrieve details, status, or something else? It doesn't differentiate from siblings like 'cex_dc_list_dc_contracts' (which likely lists multiple contracts) or 'cex_fx_get_fx_contract' (for a different contract type), leaving ambiguity in scope and specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't explain if this is for retrieving a single contract by identifier versus listing multiple contracts (as in 'cex_dc_list_dc_contracts'), or when to use the optional 'settle' parameter. The absence of usage context makes it hard for an agent to choose appropriately among similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_candlesticks (Grade: B)
Read-only, Idempotent

List delivery candlesticks.

Parameters (JSON Schema)

- to (optional): End timestamp in seconds
- from (optional): Start timestamp in seconds
- limit (optional): Maximum number of data points to return
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- contract (required): Delivery contract identifier (e.g. BTC_USDT; may use mark_/index_ prefix for price type)
- interval (optional): Time interval for data points
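The time-window parameters above can be sketched with a small helper that builds a candlestick query for a recent window. The parameter names (`from`, `to`, `contract`, `settle`, `interval`, `limit`) and their semantics come from the schema; the helper, its defaults, and the example interval value are illustrative assumptions.

```python
import time

def dc_candlestick_args(contract, settle=None, hours_back=24,
                        interval=None, limit=None):
    """Build an arguments payload for cex_dc_list_dc_candlesticks.

    Only `contract` is required. `from`/`to` are Unix timestamps in
    seconds, and a mark_/index_ prefix on the contract selects the
    price type, per the schema above.
    """
    now = int(time.time())
    args = {
        "contract": contract,             # e.g. "BTC_USDT" or "mark_BTC_USDT"
        "settle": settle,                 # e.g. "usdt"
        "from": now - hours_back * 3600,  # start of the window, seconds
        "to": now,                        # end of the window, seconds
        "interval": interval,             # e.g. "1h" (assumed format)
        "limit": limit,
    }
    return {k: v for k, v in args.items() if v is not None}
```

A call like `dc_candlestick_args("BTC_USDT", settle="usdt", interval="1h", limit=100)` then requests up to 100 hourly candles covering the last 24 hours.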
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, openWorldHint, idempotentHint, destructiveHint) already indicate this is a safe, read-only, idempotent operation with open-world data. The description doesn't contradict these but adds minimal behavioral context beyond the name. It implicitly suggests listing data (consistent with annotations), but doesn't detail aspects like pagination (implied by 'limit' parameter), rate limits, or authentication needs. With annotations covering core safety, the description adds little extra value, warranting a score above baseline but not high.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description 'List delivery candlesticks' is extremely concise—a single, front-loaded sentence with zero wasted words. It efficiently conveys the core action and resource without redundancy or fluff, making it easy to parse. Every word earns its place, though this conciseness comes at the cost of detail in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema) and rich annotations (covering safety and idempotency), the description is minimally adequate. It states the purpose but lacks context on usage, parameter relationships (e.g., how 'interval' affects candlesticks), or output format. With annotations handling behavioral transparency and schema covering parameters, the description meets a basic threshold but leaves gaps in guidance and semantics, making it incomplete for optimal agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter (e.g., 'contract', 'interval', 'from', 'to') clearly documented in the schema. The description adds no additional meaning about parameters beyond what the schema provides, such as explaining 'delivery candlesticks' in relation to 'contract' or 'settle'. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List delivery candlesticks' clearly states the verb ('List') and resource ('delivery candlesticks'), making the purpose understandable. However, it lacks specificity about what 'delivery candlesticks' are (e.g., financial data for delivery contracts) and doesn't differentiate from sibling tools like 'cex_fx_get_fx_candlesticks' or 'cex_spot_get_spot_candlesticks', leaving ambiguity about the domain (delivery vs. futures vs. spot).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., 'cex_fx_get_fx_candlesticks' for futures or 'cex_spot_get_spot_candlesticks' for spot markets) or specify use cases like retrieving historical price data for delivery contracts. Without such context, the agent must infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_contracts (Grade: B)
Read-only, Idempotent

List delivery contracts.

Parameters (JSON Schema)

- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide strong behavioral hints (readOnly, openWorld, idempotent, non-destructive), so the bar is lower. The description adds no additional behavioral context beyond what annotations declare. It doesn't mention pagination, rate limits, authentication needs, or what 'delivery contracts' specifically entail in this context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise at three words, front-loading the essential action and resource with zero wasted language. Every word earns its place, making it immediately scannable and understandable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single parameter with full schema coverage and comprehensive annotations covering safety and behavior, the description is adequate but minimal. However, without an output schema and with many similar sibling tools, the description could better help the agent understand what distinguishes this tool and what to expect in return.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents the single parameter ('settle'). The description adds no parameter information beyond what's in the schema, so it meets the baseline but doesn't provide additional semantic context about how the parameter affects results or typical usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('delivery contracts'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'cex_dc_get_dc_contract' (singular vs plural) or 'cex_fx_list_fx_contracts' (delivery vs other contract types), which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools that also list contracts (e.g., 'cex_fx_list_fx_contracts', 'cex_options_list_options_contracts'), the agent receives no help distinguishing between delivery contracts and other contract types or understanding the specific use case for this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_insurance_ledger (Grade: B)
Read-only, Idempotent

List delivery insurance ledger.

Parameters (JSON Schema)

- limit (optional): Maximum number of records to return
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide strong behavioral hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), so the description's burden is lower. The description doesn't contradict these annotations and adds the specific domain context ('delivery insurance ledger'), but it doesn't provide additional behavioral details like pagination, rate limits, or authentication requirements that would be helpful beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just four words ('List delivery insurance ledger'), with zero wasted language. It's front-loaded with the core action and resource, making it efficient for quick understanding despite its simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations (covering safety and idempotency) and complete parameter documentation in the schema, the description provides adequate basic context. However, without an output schema and with multiple similar list operations in the sibling tools, the description could benefit from more differentiation or context about what specifically gets returned in the 'delivery insurance ledger'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, both parameters ('limit' and 'settle') are fully documented in the input schema. The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline expectation without providing extra semantic context about how these parameters affect the listing operation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List delivery insurance ledger' clearly states the verb ('List') and resource ('delivery insurance ledger'), making the basic purpose understandable. However, it doesn't specify what constitutes the 'delivery insurance ledger' or differentiate it from similar sibling tools like 'cex_fx_list_fx_insurance_ledger' or other list operations in the same domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple list operations available (e.g., 'cex_dc_list_dc_contracts', 'cex_dc_list_dc_trades'), there's no indication whether this is for delivery-specific insurance data, what context it applies to, or any prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_order_book: C
Read-only, Idempotent

Get delivery order book.

Parameters
- limit (optional): Maximum number of order depth levels to return
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- with_id (optional): Whether to return order book update ID
- contract (required): Delivery contract identifier (e.g. BTC_USDT)
- interval (optional): Order depth aggregation precision. 0 means no aggregation, defaults to 0 if not specified
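As an illustration of how these parameters combine, here is a hypothetical MCP tools/call payload for this tool assembled in Python. Only 'contract' is required; the other values, and the string form of 'interval', are assumptions:

```python
import json

# Hypothetical tools/call request for the delivery order book.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cex_dc_list_dc_order_book",
        "arguments": {
            "contract": "BTC_USDT",  # required delivery contract
            "settle": "usdt",        # settlement currency
            "limit": 10,             # depth levels to return
            "interval": "0",         # 0 = no price aggregation (the default)
            "with_id": True,         # include the order book update ID
        },
    },
}

payload = json.dumps(request)
```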
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or what 'delivery' entails. However, it doesn't contradict the annotations, so it meets the lower bar with annotations present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's appropriately sized for a tool with good annotations and schema coverage, though it lacks depth. Every word earns its place, making it front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (delivery order book with 5 parameters), rich annotations, and no output schema, the description is incomplete. It doesn't explain what a 'delivery order book' returns, how it differs from other order books, or any behavioral nuances. The annotations and schema carry most of the burden, but the description fails to add necessary context for effective tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting all 5 parameters. The description adds no parameter semantics beyond what the schema provides. According to the rules, with high schema coverage (>80%), the baseline is 3 even without param info in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get delivery order book' restates the tool name 'cex_dc_list_dc_order_book' without adding specificity. It doesn't clarify what a 'delivery order book' is or how it differs from other order book tools like 'cex_fx_get_fx_order_book' or 'cex_spot_get_spot_order_book' in the sibling list. This is a tautology that provides minimal value beyond the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple order book tools in the sibling list (e.g., for delivery, FX, spot, options), there's no indication of what distinguishes this delivery-focused tool from others. No context, prerequisites, or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_risk_limit_tiers: B
Read-only, Idempotent

List delivery risk limit tiers.

Parameters
- limit (optional): Maximum number of records to return
- offset (optional): List offset, starting from 0
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- contract (optional): Optional delivery contract identifier; if set, risk tiers for that contract only
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds no additional behavioral context (e.g., rate limits, auth needs, or what 'risk limit tiers' entail), but it doesn't contradict annotations. It's minimal but consistent, earning a baseline score for not undermining structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence with zero wasted words. It's front-loaded and efficiently conveys the core action without unnecessary elaboration, making it highly concise and well-structured for its purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations provide safety and behavioral hints (read-only, etc.) and the schema fully documents parameters, the description is minimally adequate. However, with no output schema and a vague purpose, it lacks completeness in explaining what 'risk limit tiers' are or the return format, leaving gaps for an AI agent to infer context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the input schema. The tool description adds no parameter semantics beyond what's already documented (e.g., it doesn't explain 'settle' or 'contract' further). This meets the baseline of 3, as the schema carries the full burden effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List delivery risk limit tiers' clearly states the verb ('List') and resource ('delivery risk limit tiers'), but it's vague about what these tiers represent and doesn't differentiate from sibling tools like 'cex_fx_get_fx_risk_limit_table' or other list tools. It provides basic purpose but lacks specificity about the domain context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context (e.g., for delivery contracts), or comparisons to siblings like 'cex_dc_list_dc_contracts' or 'cex_fx_get_fx_risk_limit_table'. Usage is implied only through the tool name, with no explicit instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_tickers: B
Read-only, Idempotent

List delivery tickers.

Parameters
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- contract (optional): Optional delivery contract identifier; if set, only that contract's ticker is returned
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds no behavioral details beyond this, such as rate limits, authentication needs, or data format. Since annotations provide comprehensive coverage, the description doesn't need to repeat them, but it also doesn't add extra context (e.g., what 'delivery' entails). Because it does not contradict the annotations, it earns a score slightly above the baseline.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with 'List delivery tickers,' a single sentence that front-loads the core action and resource. There is no wasted text, repetition, or unnecessary elaboration, making it efficient and easy to parse. This brevity is appropriate given the tool's straightforward purpose and comprehensive annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (a read-only list operation), rich annotations (covering safety and idempotency), and no output schema, the description is minimally adequate. It states what the tool does but lacks context about the domain (e.g., delivery markets in crypto exchanges) or output details. Without an output schema, the description doesn't explain return values, but annotations mitigate some gaps. This results in a baseline score for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter descriptions: 'settle' specifies settlement currency (e.g., usdt, btc) and 'contract' filters by contract identifier. The description 'List delivery tickers' doesn't add any parameter semantics beyond the schema, such as examples or constraints. With high schema coverage, the baseline is 3, as the schema does the heavy lifting without description enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List delivery tickers' clearly states the verb ('List') and resource ('delivery tickers'), providing a basic understanding of the tool's function. However, it lacks specificity about what 'delivery tickers' are in this context (e.g., financial instruments for delivery contracts) and doesn't differentiate from sibling tools like 'cex_dc_list_dc_contracts' or 'cex_fx_get_fx_tickers', which might list similar data for different markets. This makes the purpose somewhat vague but not misleading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools such as 'cex_dc_list_dc_contracts' (which might list contracts instead of tickers) or 'cex_fx_get_fx_tickers' (for different market types), nor does it specify prerequisites like authentication or context (e.g., for delivery vs. spot markets). This absence leaves the agent without explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_dc_list_dc_trades: B
Read-only, Idempotent

List delivery trades.

Parameters
- to (optional): End timestamp in seconds
- from (optional): Start timestamp in seconds
- limit (optional): Maximum number of records to return
- settle (optional): Settlement currency for delivery (e.g. usdt, btc)
- last_id (optional): Specify list starting point using the last record ID from previous request
- contract (required): Delivery contract identifier (e.g. BTC_USDT)
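To make the timestamp and pagination parameters concrete, a small sketch of the two ways to scope the listing (the contract, window length, and last_id value are illustrative assumptions):

```python
import time

# Sketch: fetch one hour of recent delivery trades via a from/to window
# of Unix-second timestamps, then page deeper history with last_id.
now = int(time.time())
window_args = {
    "contract": "BTC_USDT",  # required
    "settle": "usdt",
    "from": now - 3600,      # window start, in seconds
    "to": now,               # window end, in seconds
    "limit": 100,
}

# Follow-up page: pass the last record ID from the previous response
# instead of a time window.
next_page_args = {
    "contract": "BTC_USDT",
    "limit": 100,
    "last_id": "12345",      # placeholder ID, illustrative only
}

window_seconds = window_args["to"] - window_args["from"]
```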
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation, covering key behavioral traits. The description adds no additional context beyond the basic action, such as pagination behavior (implied by 'last_id' parameter) or rate limits. It does not contradict annotations, but offers minimal value beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence ('List delivery trades.') with no wasted words. It is front-loaded and directly states the tool's action, making it easy to parse quickly. This minimalism is effective for a simple list operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a read-only list operation with 6 parameters), annotations cover safety and idempotency, and the schema fully describes inputs. However, there is no output schema, and the description does not explain return values (e.g., trade structure or pagination details). This leaves gaps in understanding the full behavior, though annotations mitigate some risks.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the input schema (e.g., 'contract' as delivery contract identifier, 'from/to' as timestamps). The description adds no extra meaning or examples beyond this, such as clarifying timestamp formats or typical use cases for 'settle'. Baseline score of 3 is appropriate given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List delivery trades' clearly states the verb ('List') and resource ('delivery trades'), providing a basic purpose. However, it lacks specificity about what 'delivery trades' are (e.g., futures settlement trades) and does not differentiate from siblings like 'cex_fx_get_fx_trades' or 'cex_options_list_options_trades', which might list trades for other instrument types. This makes it vague in the broader context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a contract identifier), exclusions, or comparisons to sibling tools like 'cex_dc_list_dc_contracts' (which might list contracts rather than trades). Without such context, users must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_earn_list_dual_investment_plans: B
Read-only, Idempotent

List dual investment plans.

Parameters
- coin (optional): Investment currency, e.g. BTC
- page (optional): Page number
- sort (optional): Sort: apy (APR desc), short-period (term asc), multiple (premium desc)
- type (optional): put (low buy) or call (high sell)
- plan_id (optional): Financial project ID (int64)
- page_size (optional): Items per page
- quote_currency (optional): Settlement currency: default USDT, optional GUSD
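The sort and type enumerations above are terse; a sketch spelling them out, with one illustrative argument combination (the values mirror the schema text, the combination itself is an assumption):

```python
# Sketch of the documented enumerations for sort and type, plus one
# illustrative argument set for listing dual investment plans.
SORT_OPTIONS = {
    "apy": "APR, descending",
    "short-period": "term, ascending",
    "multiple": "premium, descending",
}
TYPE_OPTIONS = {"put": "low buy", "call": "high sell"}

args = {
    "coin": "BTC",
    "type": "put",             # low-buy plans only
    "sort": "apy",             # highest APR first
    "page": 1,
    "page_size": 20,
    "quote_currency": "USDT",  # the documented default
}

valid = args["sort"] in SORT_OPTIONS and args["type"] in TYPE_OPTIONS
```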
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, read-only, idempotent operation. The description adds no behavioral context beyond what annotations provide (e.g., no rate limits, authentication needs, or pagination behavior). However, it doesn't contradict annotations, so it meets the lower bar with annotations present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple list operation and front-loads the core purpose immediately, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety and idempotency, and the schema fully documents parameters, the description is minimally complete for a list tool. However, with no output schema, it doesn't explain return values (e.g., structure of listed plans), and it lacks context on filtering/scoping compared to siblings, leaving gaps in full usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all 7 parameters well-documented in the input schema (e.g., coin, sort options, type definitions). The description adds no parameter semantics beyond what the schema provides, but with high schema coverage, the baseline is 3 as the schema carries the full burden of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List dual investment plans' clearly states the action (list) and resource (dual investment plans), but it's vague about scope and doesn't distinguish from sibling tools like 'cex_earn_list_earn_fixed_term_products' or 'cex_earn_list_uni_currencies' that also list financial products. It provides basic purpose but lacks specificity about what makes dual investment plans unique.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple 'list' tools in the sibling set (e.g., for fixed-term products, currencies, contracts), there's no indication of when dual investment plans are appropriate versus other financial products, nor any prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_earn_list_earn_fixed_term_products: B
Read-only, Idempotent

List fixed-term Earn products.

Parameters
- page (required): Page number
- type (optional): Product type: 1=normal, 2=vip
- asset (optional): Currency name, e.g. USDT
- limit (required): Page size
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior, so the description doesn't need to repeat these. It adds no additional behavioral context (e.g., pagination details, rate limits, or response format), but doesn't contradict annotations, providing minimal extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence with no wasted words, making it highly concise and front-loaded. It efficiently communicates the core purpose without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations (covering safety and behavior) and full schema coverage, the description is adequate for a simple list tool. However, without an output schema, it doesn't explain return values, and it lacks guidance on usage relative to siblings, leaving some contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the input schema. The description adds no semantic details beyond the schema, such as explaining filtering logic or default values, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('fixed-term Earn products'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from its sibling 'cex_earn_list_earn_fixed_term_products_by_asset', which appears to be a more filtered version, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like the sibling 'cex_earn_list_earn_fixed_term_products_by_asset' or other list tools. The description lacks context on use cases, prerequisites, or exclusions, offering minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_earn_list_earn_fixed_term_products_by_asset: B
Read-only, Idempotent

List fixed-term Earn products by asset.

Parameters
- type (optional): Product type: empty or 1=normal, 2=vip, 0=all
- asset (required): Currency name, e.g. USDT, BTC
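The two fixed-term siblings differ only in scoping, which their descriptions never state. A small, hypothetical selection helper makes the intended split explicit (the routing logic is inferred from the schemas, not from server guidance):

```python
# Hypothetical router between the two fixed-term siblings: the general
# paginated catalogue versus the per-asset lookup.
def pick_fixed_term_tool(asset=None):
    if asset:
        # One known asset of interest: the by-asset tool needs no pagination.
        return ("cex_earn_list_earn_fixed_term_products_by_asset",
                {"asset": asset, "type": "0"})  # 0 = all product types
    # Browsing the whole catalogue: page and limit are required here.
    return ("cex_earn_list_earn_fixed_term_products",
            {"page": 1, "limit": 50})

tool_name, args = pick_fixed_term_tool("USDT")
```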
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds no behavioral context beyond what annotations provide, such as rate limits, authentication needs, or response format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety and idempotency, and the schema fully documents parameters, the description is minimally adequate. However, without an output schema, it doesn't explain return values, and it lacks context about when to use this versus the sibling tool, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('type' and 'asset') fully documented in the schema. The description mentions 'by asset' which aligns with the required 'asset' parameter but adds no additional semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and target resource ('fixed-term Earn products by asset'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from its sibling 'cex_earn_list_earn_fixed_term_products' (which appears to list all fixed-term products without asset filtering), missing explicit differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, context for filtering by asset, or comparison to the sibling tool that lists all fixed-term products, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_earn_list_uni_currencies: B
Read-only, Idempotent

List Simple Earn currencies.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, so the description adds no behavioral traits beyond these. It doesn't provide additional context like rate limits, authentication needs, or response format, but it doesn't contradict the annotations either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and avoids any redundant or verbose language, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is minimally adequate. However, it lacks context about what 'Simple Earn' entails or how the output is structured, which could help an agent use it more effectively in a broader workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter information is needed. The description appropriately doesn't mention parameters, which is sufficient for a baseline score of 4, as it doesn't add unnecessary details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and the resource ('Simple Earn currencies'), providing a specific verb+resource combination. However, it doesn't differentiate from similar sibling tools like 'cex_spot_list_currencies' or 'cex_mcl_list_multi_collateral_currencies', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools listing different types of currencies (e.g., spot, margin, options), there's no indication of the specific context or domain for 'Simple Earn currencies', leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_candlesticks: B
Read-only, Idempotent

Get futures candlestick/OHLCV data

Parameters (JSON Schema)
- to (optional): End timestamp in seconds, defaults to current time if not specified
- from (optional): Start timestamp in seconds. Defaults to to - 100 * interval if not specified
- limit (optional): Maximum number of data points to return. Mutually exclusive with from/to parameters
- settle (optional): Settlement currency
- contract (required): Futures contract name
- interval (optional): Time interval. Note: 1w means natural week, 7d aligns with Unix epoch, 30d means natural month
- timezone (optional): Timezone: all/utc0/utc8, defaults to utc0
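The mutual-exclusivity and default rules in the parameter table above can be sketched as a small argument builder. This is an illustrative helper, not part of the Gate MCP API; the interval-to-seconds map and the name `frm` (standing in for the tool's `from` parameter, which is a reserved word in Python) are assumptions.

```python
import time

# Hypothetical interval map for illustration; only a few values shown.
INTERVAL_SECONDS = {"10s": 10, "1m": 60, "5m": 300, "1h": 3600, "1d": 86400}

def candlestick_args(contract, interval="1h", limit=None, frm=None, to=None):
    """Build an arguments dict for cex_fx_get_fx_candlesticks.

    Enforces the documented rules: limit is mutually exclusive with
    from/to; to defaults to now; from defaults to to - 100 * interval.
    """
    if limit is not None and (frm is not None or to is not None):
        raise ValueError("limit is mutually exclusive with from/to")
    args = {"contract": contract, "interval": interval}
    if limit is not None:
        args["limit"] = limit
    else:
        to = int(time.time()) if to is None else to
        # Documented default: from = to - 100 * interval
        frm = to - 100 * INTERVAL_SECONDS[interval] if frm is None else frm
        args.update({"from": frm, "to": to})
    return args
```

A caller that passes both `limit` and a time range gets an error up front instead of an ambiguous API response.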
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds value by specifying the data type (OHLCV) and asset class (futures), but lacks details on rate limits, authentication needs, or response format, which would enhance context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—'Get futures candlestick/OHLCV data'—front-loading the core purpose without unnecessary elaboration. It's appropriately sized for a tool with good schema and annotation support.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema) and rich annotations, the description is minimally adequate. It states what data is retrieved but doesn't cover output format, error handling, or sibling differentiation. With annotations handling safety and idempotency, it meets a basic threshold but lacks depth for optimal agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing clear documentation for all 7 parameters. The description adds no additional parameter semantics beyond what's in the schema, such as explaining relationships between parameters or usage examples. Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'futures candlestick/OHLCV data', making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_dc_list_dc_candlesticks' or 'cex_options_list_options_candlesticks', which appear to serve similar functions for different asset types, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for candlesticks across different asset types (e.g., DC, options, spot), there's no indication of the specific context for futures data, prerequisites, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_contract (B)
Read-only, Idempotent

Get details of a single futures contract

Parameters (JSON Schema)
- settle (optional): Settlement currency
- contract (required): Futures contract name
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds minimal context by specifying 'details' of a contract, but doesn't disclose additional traits like rate limits, error handling, or response format. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words, front-loading the core action ('Get details'). It's appropriately sized for a simple lookup tool, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no output schema) and rich annotations covering safety and behavior, the description is minimally adequate. However, it lacks details on output format or error cases, which could be helpful despite annotations. It meets basic needs but doesn't fully leverage the context to be highly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the schema itself. The description adds no extra meaning beyond implying 'contract' is required for fetching details, which is already covered. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('details of a single futures contract'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_fx_list_fx_contracts' (which likely lists multiple contracts) or 'cex_dc_get_dc_contract' (which handles a different contract type), missing full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as 'cex_fx_list_fx_contracts' for listing multiple contracts or 'cex_fx_get_fx_tickers' for different data types. It lacks explicit context, prerequisites, or exclusions, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_funding_rate (B)
Read-only, Idempotent

Get funding rate history for a futures contract

Parameters (JSON Schema)
- to (optional): End timestamp in seconds
- from (optional): Start timestamp in seconds
- limit (optional): Maximum number of records to return
- settle (optional): Settlement currency
- contract (required): Futures contract name
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds minimal context by implying historical data retrieval, but it does not disclose additional behaviors like rate limits, error conditions, or response format, which could be useful given the lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and wastes no space, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations provide safety and idempotency info, and the schema covers parameters fully, the description is minimally adequate. However, with no output schema and a read-only tool that retrieves historical data, it could benefit from more context on return format or data scope to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description does not add any semantic details beyond what the schema provides, such as explaining the relationship between 'from' and 'to' timestamps or typical values for 'limit'. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'funding rate history for a futures contract', making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'cex_fx_list_batch_fx_funding_rates', which might offer batch or different scoping, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other funding rate-related tools in the sibling list. There is no mention of prerequisites, context, or exclusions, leaving usage unclear beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_order_book (B)
Read-only, Idempotent

Get futures order book

Parameters (JSON Schema)
- limit (optional): Maximum number of order depth levels to return
- settle (optional): Settlement currency
- with_id (optional): Whether to return order book update ID. ID increments by 1 on each change
- contract (required): Futures contract name
- interval (optional): Order depth aggregation precision. 0 means no aggregation, defaults to 0 if not specified
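The evaluation below notes the tool publishes no output schema, so any consumer has to assume a response shape. Assuming the common convention of bids sorted descending and asks ascending, each level a (price, size) pair, a downstream summary might look like this sketch (an assumption, not the documented format):

```python
def top_of_book(bids, asks):
    """Best bid/ask, mid price, and spread from price-sorted levels.

    Assumes bids are sorted descending and asks ascending by price,
    each level a (price, size) pair; the tool's real response shape
    is not documented by an output schema.
    """
    best_bid, best_ask = bids[0][0], asks[0][0]
    return {
        "best_bid": best_bid,
        "best_ask": best_ask,
        "mid": (best_bid + best_ask) / 2,
        "spread": best_ask - best_bid,
    }
```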
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide strong behavioral hints (readOnlyHint: true, destructiveHint: false, openWorldHint: true, idempotentHint: true), so the description doesn't need to repeat safety information. The description adds no additional behavioral context beyond what's implied by 'Get' (a read operation). It doesn't mention rate limits, authentication needs, or what 'order book' entails (e.g., bid/ask levels). With annotations covering core traits, a baseline score of 3 is appropriate as the description doesn't contradict annotations but adds minimal extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient phrase ('Get futures order book') with zero wasted words. It's front-loaded with the core action and resource. For a tool with comprehensive annotations and schema, this brevity is appropriate and earns full marks for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, 1 required), rich annotations (covering safety and idempotency), and 100% schema coverage, the description is minimally adequate. However, it lacks output schema, and the description doesn't explain what an 'order book' returns (e.g., bid/ask arrays, timestamps). With annotations handling behavioral aspects but no output details, the description leaves gaps in contextual understanding, scoring at the minimum viable level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all 5 parameters well-documented in the input schema (e.g., 'contract' as 'Futures contract name', 'limit' as 'Maximum number of order depth levels'). The description adds no parameter information beyond what the schema provides. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get futures order book' clearly states the action ('Get') and resource ('futures order book'), making the purpose understandable. However, it doesn't differentiate this tool from sibling tools like 'cex_dc_list_dc_order_book' or 'cex_spot_get_spot_order_book', which appear to serve similar order book functions for different markets. The description is specific enough to understand what it does but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for 'futures' versus other order book types, or refer to sibling tools. Without this information, an AI agent must infer usage from the tool name alone, which is insufficient for optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_premium_index (A)
Read-only, Idempotent

Get premium index (mark price minus index price) history for a contract

Parameters (JSON Schema)
- to (optional): End timestamp in seconds, defaults to current time if not specified
- from (optional): Start timestamp in seconds. Defaults to to - 100 * interval if not specified
- limit (optional): Maximum number of data points to return. Mutually exclusive with from/to parameters
- settle (optional): Settlement currency
- contract (required): Futures contract name
- interval (optional): Time interval for data points
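The calculation named in the description (premium index = mark price minus index price) can be written out directly. Input shapes here are assumptions for illustration, not the tool's actual response format:

```python
def premium_index(mark_prices, index_prices):
    """Premium index series: mark price minus index price at each point.

    mark_prices and index_prices are assumed to be equal-length,
    time-aligned sequences of floats.
    """
    return [mark - idx for mark, idx in zip(mark_prices, index_prices)]
```

A positive value means the contract trades above its index (longs pay funding under the usual convention); a negative value means the reverse.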
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, open-world, idempotent, and non-destructive behavior, so the bar is lower. The description adds valuable context by specifying it retrieves 'history' (implying time-series data) and clarifying the premium index calculation, which isn't covered by annotations. However, it doesn't mention rate limits, authentication needs, or response format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Get', 'premium index', 'history', 'contract') earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, no output schema), the description is reasonably complete. It defines the premium index calculation and indicates historical data retrieval. However, without an output schema, it doesn't describe the return format (e.g., array structure, data fields), which could be helpful for the agent to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description doesn't add any parameter-specific information beyond what's in the schema, such as example values or constraints. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('premium index history for a contract'), including the precise calculation ('mark price minus index price'). It distinguishes from siblings like 'cex_fx_get_fx_candlesticks' or 'cex_fx_get_fx_funding_rate' by focusing on premium index data rather than other contract metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, typical use cases, or how it relates to other FX tools in the sibling list, leaving the agent to infer usage context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_risk_limit_table (B)
Read-only, Idempotent

Get a specific risk limit tier table by table ID.

Parameters (JSON Schema)
- settle (required): Settlement currency: usdt or btc
- table_id (required): Risk limit table ID
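Unlike the other tools in this group, both parameters are required here and settle is restricted to an enum. A caller-side check can catch the constraint before the request is made; this helper is an illustrative sketch, not part of the Gate MCP API:

```python
VALID_SETTLE = {"usdt", "btc"}

def risk_limit_args(table_id, settle):
    """Build arguments for cex_fx_get_fx_risk_limit_table.

    Both fields are required, and settle must be "usdt" or "btc"
    per the parameter table.
    """
    if settle not in VALID_SETTLE:
        raise ValueError(f"settle must be one of {sorted(VALID_SETTLE)}")
    return {"settle": settle, "table_id": table_id}
```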
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with open-world data. The description adds minimal behavioral context beyond this, such as what 'risk limit tier table' entails or any rate limits. It doesn't contradict annotations, but offers little extra insight into behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence with no wasted words, efficiently conveying the core action. It is appropriately sized and front-loaded, making it easy to parse quickly without unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety and idempotency, and the schema fully describes parameters, the description is minimally adequate. However, without an output schema, it doesn't explain return values or structure, and it lacks context about the data domain (e.g., what a 'risk limit tier table' is), leaving gaps in completeness for a tool with specific financial terminology.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents both parameters ('settle' and 'table_id'). The description implies parameter usage ('by table ID') but adds no additional meaning, syntax, or examples beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and target ('a specific risk limit tier table by table ID'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_dc_list_dc_risk_limit_tiers' or 'cex_fx_list_fx_contracts', which might retrieve similar data in different contexts or formats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as listing risk limit tiers or other FX-related tools. The description lacks context about prerequisites, typical use cases, or comparisons to sibling tools, leaving the agent without explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_tickers (B)
Read-only, Idempotent

Get ticker information for futures contracts

Parameters (JSON Schema)
- settle (optional): Settlement currency
- contract (optional): Futures contract name. Only returns data for this contract if specified
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, so the agent knows it's a safe query operation. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or what 'ticker information' includes. With annotations covering the safety profile, a 3 is appropriate—the description doesn't add value but doesn't contradict annotations either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse quickly. Every word earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety traits and the schema fully documents parameters, the description is minimally adequate for a read-only query tool. However, with no output schema and no details on what 'ticker information' entails, there's a gap in understanding the return format. It's complete enough for basic use but lacks depth for full contextual awareness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters ('settle' and 'contract'). The description doesn't add any meaning beyond what's in the schema, such as examples or usage patterns. Baseline 3 is correct when the schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('ticker information for futures contracts'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from its sibling 'cex_fx_list_fx_contracts' or other ticker-related tools, which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'cex_fx_get_fx_contract' and 'cex_fx_list_fx_contracts', there's no indication of how this tool differs in scope or when it's preferred, leaving the agent to guess based on naming alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_fx_trades (B)
Read-only, Idempotent

Get recent public trades for a futures contract

Parameters (JSON Schema)
- to (optional): End timestamp in seconds, defaults to current time if not specified
- from (optional): Start timestamp in seconds. If not specified, returns records limited by to and limit
- limit (optional): Maximum number of records to return
- offset (optional): List offset, starting from 0
- settle (optional): Settlement currency
- last_id (optional): Specify list starting point using the last record ID from previous request
- contract (required): Futures contract name
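The last_id parameter above implies a cursor-style pagination loop. The sketch below illustrates that pattern under stated assumptions: call_tool is a hypothetical stand-in for whatever client invokes cex_fx_get_fx_trades, and each trade record is assumed to carry an "id" field (the tool publishes no output schema):

```python
def iter_trades(call_tool, contract, page_size=100):
    """Page through all trades for a contract using the last_id cursor.

    call_tool(args) is a hypothetical callable that invokes
    cex_fx_get_fx_trades and returns a list of trade records;
    each record is assumed to have an "id" field.
    """
    last_id = None
    while True:
        args = {"contract": contract, "limit": page_size}
        if last_id is not None:
            # Resume from the final record of the previous page.
            args["last_id"] = last_id
        page = call_tool(args)
        if not page:
            return
        yield from page
        if len(page) < page_size:
            return  # short page means we reached the end
        last_id = page[-1]["id"]
```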
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (read-only, open-world, idempotent, non-destructive), so the description's burden is lower. It adds context by specifying 'public trades' and 'recent', which clarifies data scope, but does not detail rate limits, authentication needs, or pagination behavior. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words, front-loading the core purpose efficiently. It avoids redundancy and is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety and idempotency, and the schema fully documents parameters, the description is adequate but basic. It lacks output details (no schema provided) and does not explain trade-offs or error handling, leaving gaps in completeness for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing full parameter documentation. The description mentions 'recent' which aligns with 'from' and 'to' parameters but does not add meaningful semantics beyond the schema. With high coverage, the baseline score of 3 is appropriate as the description offers minimal extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('recent public trades for a futures contract'), making the purpose evident. However, it does not explicitly differentiate from sibling tools like 'cex_fx_list_fx_contracts' or 'cex_fx_get_fx_tickers', which might handle related data, so it lacks sibling distinction for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other 'cex_fx_' tools for different data types or 'cex_dc_list_dc_trades' for different contract types. It mentions 'recent' but does not specify timeframes or exclusions, leaving usage context implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_get_index_constituents (B)
Read-only, Idempotent

Get constituent assets and weights for a futures index

Parameters (JSON Schema)
- index (required): Index name
- settle (optional): Settlement currency
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, read-only, idempotent operation with open-world data. The description adds no behavioral traits beyond this, such as rate limits, authentication needs, or data freshness, but doesn't contradict the annotations either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety and idempotency, and the schema fully documents parameters, the description is adequate for a read-only tool. However, with no output schema, it doesn't explain return values (e.g., format of assets/weights), leaving a gap in completeness for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('index' and 'settle') clearly documented in the schema. The description doesn't add any meaning beyond what the schema provides, such as examples or constraints, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('constituent assets and weights for a futures index'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_fx_get_fx_contract' or 'cex_fx_get_fx_tickers', which also retrieve FX-related data but for different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or specific contexts for usage, leaving the agent to infer based on the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_list_batch_fx_funding_rates (grade B)
Read-only · Idempotent

Get current funding rates for multiple contracts in one request

Parameters (JSON Schema):
- settle (optional): Settlement currency
- contracts (required): Comma-separated list of contract names, e.g. BTC_USDT,ETH_USDT
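Because `contracts` takes a comma-separated string rather than an array, a client has to flatten its list before calling. A small sketch of that flattening; `build_contracts_param` is a hypothetical helper, and the upper-casing is an assumption based on the BTC_USDT example in the schema:

```python
def build_contracts_param(contracts: list[str]) -> str:
    """Join contract names into the comma-separated form the schema describes."""
    cleaned = [c.strip().upper() for c in contracts if c.strip()]
    if not cleaned:
        raise ValueError("at least one contract name is required")
    return ",".join(cleaned)

print(build_contracts_param(["BTC_USDT", "eth_usdt"]))  # BTC_USDT,ETH_USDT
```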
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying 'in one request' (batch behavior), but doesn't disclose additional traits like rate limits, authentication needs, or response format. With annotations providing core behavioral info, a 3 is appropriate for the added context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words, front-loading the core action ('Get current funding rates'). It's appropriately sized for the tool's complexity, earning full marks for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is of moderate complexity (a batch read operation); rich annotations cover safety and idempotency, and 100% schema coverage documents the parameters. However, no output schema exists, and the description doesn't explain return values or error handling. It's adequate but has gaps in output context, scoring a minimum viable 3.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the schema (e.g., 'Comma-separated list of contract names'). The description adds no extra parameter semantics beyond implying batch processing via 'multiple contracts', which aligns with the schema. Baseline 3 is correct when the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('current funding rates for multiple contracts'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_fx_get_fx_funding_rate' (singular vs. batch), which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as the sibling 'cex_fx_get_fx_funding_rate' for single contracts. It lacks explicit context, prerequisites, or exclusions, offering minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_list_contract_stats (grade B)
Read-only · Idempotent

Get contract statistics (open interest, long/short ratio, etc.)

Parameters (JSON Schema):
- from (optional): Start timestamp in seconds
- limit (optional): Maximum number of data points to return
- settle (optional): Settlement currency
- contract (required): Futures contract name
- interval (optional): Time interval for data points
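The `from` parameter is a Unix timestamp in seconds, so a client typically derives it from the current time. A sketch of assembling the arguments for a recent window; `contract_stats_args` and its default interval and limit values are illustrative, not part of the API:

```python
import time

def contract_stats_args(contract: str, hours_back: int = 24,
                        interval: str = "1h", limit: int = 100) -> dict:
    """Build an argument dict for a contract-stats query.

    'from' is a Unix timestamp in seconds, per the schema; the 24-hour
    window, interval, and limit defaults are assumptions for illustration.
    """
    now = int(time.time())
    return {
        "contract": contract,              # required
        "from": now - hours_back * 3600,   # start of the window, in seconds
        "interval": interval,
        "limit": limit,
    }

args = contract_stats_args("BTC_USDT")
print(sorted(args))  # ['contract', 'from', 'interval', 'limit']
```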
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds some context by specifying the type of statistics returned, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or response format details that would be helpful given the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose with no wasted words. It's appropriately sized and front-loaded, making it easy to understand at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the annotations cover safety and idempotency and the schema fully documents the parameters, the description is adequate for a read-only tool. However, with no output schema and many similar sibling tools, it lacks completeness in explaining the return format and differentiation from alternatives, which could hinder agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the input schema. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain relationships between parameters like 'from' and 'interval'). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('contract statistics'), and provides examples of what statistics are included ('open interest, long/short ratio, etc.'). However, it doesn't explicitly differentiate from sibling tools like 'cex_fx_get_fx_contract' or 'cex_fx_get_fx_tickers', which might also retrieve contract-related data but for different aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools in the 'cex_fx' category (e.g., 'cex_fx_get_fx_contract', 'cex_fx_get_fx_tickers'), there's no indication of what makes this tool distinct or when it should be preferred over others for retrieving contract data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_list_fx_contracts (grade B)
Read-only · Idempotent

List all perpetual futures contracts

Parameters (JSON Schema):
- limit (optional): Maximum number of records to return
- offset (optional): List offset, starting from 0
- settle (optional): Settlement currency
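The `limit`/`offset` pair implies the usual paging loop: request a page, advance the offset, stop on a short page. A sketch of that loop against a stubbed fetch; `fetch_page` stands in for the real MCP call, and the local data is purely illustrative:

```python
# Stand-in data so the paging logic is runnable without a live server.
DATA = [f"CONTRACT_{i}" for i in range(25)]

def fetch_page(limit: int, offset: int) -> list[str]:
    """Stub for the real tool call; pages over a local list."""
    return DATA[offset:offset + limit]

def fetch_all(limit: int = 10) -> list[str]:
    results, offset = [], 0          # offset starts from 0, per the schema
    while True:
        page = fetch_page(limit, offset)
        results.extend(page)
        if len(page) < limit:        # a short page means we've reached the end
            return results
        offset += limit

print(len(fetch_all()))  # 25
```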
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds minimal context by specifying 'perpetual futures contracts' as the resource type, but doesn't disclose additional behaviors like pagination handling, rate limits, or authentication needs. With annotations providing safety profile, a 3 is appropriate for limited added value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it easy to parse. Every word earns its place, achieving optimal conciseness for a list operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (list operation), rich annotations (covering safety and idempotency), and full schema coverage, the description is mostly complete. However, without an output schema, it doesn't hint at return format (e.g., array of contracts with fields), leaving a minor gap. For a read-only list tool with good annotations, this is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for 'limit', 'offset', and 'settle' parameters. The description doesn't add any semantic details beyond what the schema provides (e.g., default values, usage examples, or constraints). Baseline 3 is correct when schema fully documents parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('perpetual futures contracts') with the scope 'all'. However, it doesn't explicitly differentiate from sibling tools like 'cex_fx_get_fx_contract' (singular retrieval) or 'cex_fx_list_contract_stats' (statistics listing), which would require a 5. The purpose is specific but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare to siblings like 'cex_fx_list_contract_stats' for statistical data or 'cex_fx_get_fx_contract' for single contract details. Usage is implied by the name but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_list_fx_insurance_ledger (grade B)
Read-only · Idempotent

Get futures insurance fund history

Parameters (JSON Schema):
- limit (optional): Maximum number of records to return
- settle (optional): Settlement currency
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering key behavioral traits. The description adds minimal context by specifying 'history', implying temporal data retrieval, but doesn't elaborate on rate limits, authentication needs, or response format. With annotations doing heavy lifting, this is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly. Every word earns its place, achieving optimal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters, no output schema) and rich annotations, the description is minimally adequate. It states what the tool does but lacks details on output format, error handling, or sibling differentiation. With annotations covering safety and idempotency, it meets basic needs but could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter descriptions ('limit' and 'settle'). The description adds no additional parameter semantics beyond what the schema provides, such as default values or usage examples. Given high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get futures insurance fund history' clearly states the verb ('Get') and resource ('futures insurance fund history'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_dc_list_dc_insurance_ledger' (which appears to be a similar insurance ledger tool for a different product type), so it misses the highest clarity level.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools (e.g., 'cex_dc_list_dc_insurance_ledger' for a different ledger type), there's no indication of context, prerequisites, or exclusions, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_fx_list_fx_liq_orders (grade B)
Read-only · Idempotent

Get personal futures liquidation history.

Parameters (JSON Schema):
- to (optional): End timestamp in seconds
- from (optional): Start timestamp in seconds
- limit (optional): Maximum number of records to return
- settle (optional): Settlement currency
- contract (optional): Futures contract name. Only returns data for this contract if specified
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds context by specifying 'personal' history, which implies user-specific data rather than general market data, but doesn't mention rate limits, authentication needs, or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded with the core purpose ('Get personal futures liquidation history') and doesn't include unnecessary details, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations (read-only, open-world, etc.) and full schema coverage, the description is reasonably complete for a read operation. However, without an output schema, it doesn't explain the return format (e.g., list structure, fields), which could be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any parameter-specific information beyond implying time-based filtering through 'history', which aligns with the 'from' and 'to' parameters but doesn't provide extra semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'personal futures liquidation history', making the purpose understandable. However, it doesn't differentiate from sibling tools like 'cex_fx_list_fx_contracts' or 'cex_fx_list_contract_stats', which also list futures-related data but for different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for futures data (e.g., cex_fx_list_fx_contracts, cex_fx_list_contract_stats), there's no indication of when liquidation history is needed over other futures information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_launch_get_candy_drop_activity_list_v4 (grade A)
Read-only · Idempotent

List CandyDrop activities with optional filters (public, no auth)

Parameters (JSON Schema):
- limit (optional): Max rows to return, default 10, max 30
- offset (optional): Offset for pagination, default 0
- status (optional): ongoing | upcoming | ended; omit for all
- currency (optional): Filter by currency name
- rule_name (optional): Task type filter: spot, futures, deposit, invite, trading_bot, simple_earn, first_deposit, alpha, flash_swap, tradfi, etf
- register_status (optional): registered | unregistered; omit for all
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive traits, but the description adds valuable context: it specifies that filters are optional, mentions 'public' access (implying no authentication needed), and hints at the tool's scope. This goes beyond annotations by clarifying accessibility and filter behavior, though it could detail more about response format or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('List CandyDrop activities') and includes key details ('optional filters (public, no auth)') without unnecessary words. Every part earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 optional parameters, no output schema), the description is reasonably complete: it states the purpose, filter options, and authentication context. Annotations provide safety and behavioral hints, but the lack of an output schema means the description could better explain return values or pagination, though it's adequate for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all 6 parameters. The description adds minimal value by mentioning 'optional filters' but doesn't provide additional semantics or examples beyond what's in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('CandyDrop activities') with scope ('with optional filters'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_launch_get_candy_drop_activity_rules_v4' or 'cex_launch_get_hodler_airdrop_project_list', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'optional filters (public, no auth)', suggesting when filters might be applied and that authentication isn't required. However, it lacks explicit guidance on when to use this tool versus alternatives like other listing tools in the sibling set, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_launch_get_candy_drop_activity_rules_v4 (grade A)
Read-only · Idempotent

Get CandyDrop activity rules including prize pools and tasks (public). Pass activity_id OR currency (at least one required by API)

Parameters (JSON Schema):
- currency (optional): Project/currency name; use with or instead of activity_id
- activity_id (optional): Activity ID; use with or instead of currency
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context by specifying the tool retrieves 'public' data, which suggests no authentication is needed and implies accessibility constraints. It doesn't contradict annotations, and the added public data context enhances transparency beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, scope, and usage requirements without redundancy. Every part earns its place: it specifies what is retrieved, the data included, the public nature, and the parameter logic, making it front-loaded and highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no output schema), rich annotations (covering safety and behavior), and high schema coverage, the description is mostly complete. It adds useful public data context and parameter requirements, but lacks details on response format or error handling, which could be beneficial despite the annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the input schema. The description adds minimal semantics by reiterating that parameters are interchangeable ('activity_id OR currency') and that at least one is required, but doesn't provide additional details like format examples or usage nuances. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and the resource ('CandyDrop activity rules including prize pools and tasks'), specifying it's for public data. It distinguishes from siblings like 'cex_launch_get_candy_drop_activity_list_v4' by focusing on rules rather than listing activities, making the purpose specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance: it states that at least one parameter (activity_id OR currency) is required by the API, and clarifies the 'public' nature. This directly informs when to use the tool and what inputs are necessary, with clear parameter requirements that help avoid errors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_launch_get_hodler_airdrop_project_list (grade A)
Read-only · Idempotent

List HODLer Airdrop campaigns with optional filters (public; logged-in users may see extra participation info)

Parameters (JSON Schema):
- join (optional): Participation filter: 0=all (default), 1=joined only. Omit parameter to use API default (all)
- page (optional): Page number, starting from 1
- size (optional): Items per page, default 10
- status (optional): Filter: ACTIVE, UNDERWAY, PREHEAT, FINISH; omit for all
- keyword (optional): Fuzzy match on currency or project name
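Unlike the offset-based futures endpoints, this schema pages with a 1-based `page` plus `size`. A sketch of converting that pair into a start/end index, which is handy when a client mixes the two conventions; `page_to_slice` is illustrative:

```python
def page_to_slice(page: int = 1, size: int = 10) -> tuple[int, int]:
    """Translate 1-based page/size paging (as this schema uses, with page
    numbering starting from 1) into a start/end index pair."""
    if page < 1:
        raise ValueError("page starts from 1")
    start = (page - 1) * size
    return start, start + size

print(page_to_slice(3, 10))  # (20, 30)
```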
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context about authentication effects ('logged-in users may see extra participation info') and the public nature of data, which goes beyond annotations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes essential context about authentication effects. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only list tool with comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint) and full schema coverage, the description provides adequate context about authentication visibility and public data. The main gap is lack of output schema, but the description doesn't need to explain return values extensively for this type of tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description mentions 'optional filters' generically but doesn't add specific meaning about any parameter beyond what the schema provides. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'HODLer Airdrop campaigns' with scope 'with optional filters'. It distinguishes from some siblings like 'cex_launch_get_candy_drop_activity_list_v4' by specifying the campaign type, but doesn't explicitly differentiate from all list tools. The purpose is specific but sibling differentiation is incomplete.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about visibility differences ('public; logged-in users may see extra participation info'), which helps determine when to use it. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling list tools, missing full comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_launch_list_launch_pool_projects (Grade: B)
Read-only, Idempotent

List launch pool projects with optional filters

Parameters (JSON Schema)

Name | Required | Description
page | No | Page number, starting from 1
status | No | Project status filter: 0=All, 1=In progress, 2=Warming up, 3=Ended, 4=In progress + Warming up
page_size | No | Items per page, default 10, max 30
sort_type | No | Sort type: 1=Max APR descending, 2=Max APR ascending
limit_rule | No | Limit rule: 0=Normal pool, 1=Newbie pool
search_coin | No | Reward currency & name fuzzy match
mortgage_coin | No | Staking currency exact match
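To make the schema concrete, here is a hedged sketch of an argument payload an agent might send to this tool. The parameter names and coded values come from the table above; the payload structure and the specific values chosen are illustrative, not taken from Gate's actual client.

```python
# Hypothetical arguments for cex_launch_list_launch_pool_projects, built
# from the documented parameter table. The coded values (status, sort_type,
# limit_rule) follow the documented enumerations; the choices are examples.
args = {
    "page": 1,              # page numbering starts from 1
    "page_size": 30,        # default 10, max 30
    "status": 1,            # 1 = In progress
    "sort_type": 1,         # 1 = Max APR descending
    "limit_rule": 0,        # 0 = Normal pool
    "mortgage_coin": "BTC", # exact match on the staking currency (example value)
}

# Client-side checks mirroring the documented constraints.
assert args["page"] >= 1
assert 1 <= args["page_size"] <= 30
assert args["status"] in (0, 1, 2, 3, 4)
assert args["sort_type"] in (1, 2)
assert args["limit_rule"] in (0, 1)
```

Validating coded enum values client-side, as sketched here, is exactly the kind of knowledge an agent must extract from the schema since the description itself only says "optional filters".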
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, covering key behavioral traits. The description adds minimal context with 'optional filters,' but doesn't elaborate on pagination behavior, rate limits, authentication needs, or what 'launch pool projects' entail. It doesn't contradict annotations, but provides little extra value beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
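The annotations this dimension keeps referring to are MCP tool hints. As a minimal sketch (field names are from the MCP specification; the values match what the review reports for this tool, and the dict is an illustrative stand-in for the server's actual tool metadata), an agent might read them like this:

```python
# MCP-style tool annotations as reported for this tool. Field names come
# from the MCP specification; this dict is illustrative, not a dump of
# the server's real metadata.
annotations = {
    "readOnlyHint": True,      # the tool does not modify state
    "destructiveHint": False,  # no destructive updates
    "idempotentHint": True,    # repeated calls have the same effect
    "openWorldHint": True,     # it talks to an external system (the exchange)
}

# A cautious agent might treat a tool as freely callable only when the
# hints say it cannot change or destroy anything.
safe_to_call_freely = annotations["readOnlyHint"] and not annotations["destructiveHint"]
```

Because these hints already cover safety and idempotency, the review scores descriptions on what they add *beyond* the annotations.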

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('List launch pool projects') and briefly notes the key feature ('with optional filters'). There is no wasted text or unnecessary elaboration, making it appropriately concise for a tool with comprehensive schema documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations (readOnlyHint, openWorldHint, etc.) and full schema coverage, the description is minimally adequate. However, without an output schema, it doesn't explain what 'launch pool projects' are or what the return values look like (e.g., project details, structure). For a tool with 7 parameters and no output schema, more context on the resource itself would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all 7 parameters well-documented in the schema itself (e.g., page numbering, status codes, sort options). The description only vaguely mentions 'optional filters' without adding any meaning beyond what the schema provides. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('launch pool projects'), providing a specific purpose. However, it doesn't differentiate this tool from its many sibling list tools (like 'cex_launch_get_candy_drop_activity_list_v4' or 'cex_earn_list_dual_investment_plans'), which all follow similar 'list [resource]' patterns. The 'optional filters' addition is helpful but generic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With 50+ sibling tools on the server, many of which are also list operations for different resources (e.g., 'cex_spot_list_currencies', 'cex_options_list_options_contracts'), the description offers no context about launch pools specifically or when this tool is appropriate compared to other list tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_margin_get_market_margin_tier (Grade: B)
Read-only, Idempotent

Get margin leverage tiers for a currency pair

Parameters (JSON Schema)

Name | Required | Description
currency_pair | Yes | Currency pair, e.g. BTC_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate the tool is read-only, non-destructive, idempotent, and open-world, covering key safety and behavior traits. The description adds no additional behavioral context (e.g., rate limits, authentication needs, or what 'leverage tiers' entail), but it does not contradict the annotations, so it meets the lower bar with annotations present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is front-loaded and appropriately sized for its simple function, earning full marks for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details on return values (e.g., what 'leverage tiers' include) and does not fully compensate for the absence of an output schema, leaving some contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'currency_pair' documented as a string example. The description adds no extra meaning beyond the schema, such as format details or constraints, so it defaults to the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('margin leverage tiers for a currency pair'), making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'cex_dc_list_dc_risk_limit_tiers' or 'cex_fx_get_fx_risk_limit_table', which might serve similar risk-related functions in different contexts, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other margin or risk-related tools in the sibling list. It lacks explicit context, exclusions, or prerequisites, leaving usage unclear beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_margin_get_uni_currency_pair (Grade: B)
Read-only, Idempotent

Get details of a specific unified margin currency pair

Parameters (JSON Schema)

Name | Required | Description
currency_pair | Yes | Currency pair, e.g. ADA_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds value by specifying 'details' of a currency pair, which hints at richer information beyond a simple identifier. However, it doesn't disclose rate limits, error conditions, or response format, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse. Every part of the sentence earns its place by conveying essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 required parameter) and rich annotations (covering safety and behavior), the description is adequate but minimal. It lacks output schema, so it doesn't explain return values, and it misses usage context. For a read-only tool with good annotations, it's complete enough to be functional but could be more informative about when and how to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'currency_pair' clearly documented as a string like 'ADA_USDT'. The description doesn't add any extra parameter details beyond what the schema provides, such as format constraints or examples. Given the high schema coverage, a baseline score of 3 is appropriate, as the description doesn't compensate but doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get details') and resource ('specific unified margin currency pair'), making the purpose understandable. However, it doesn't differentiate from its sibling 'cex_margin_list_uni_currency_pairs' which likely lists multiple pairs, while this tool gets details for a specific one. This distinction is implied but not explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling tool 'cex_margin_list_uni_currency_pairs' for listing pairs, nor does it specify prerequisites like authentication or context (e.g., only for margin trading). Without this, the agent lacks clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_margin_list_uni_currency_pairs (Grade: A)
Read-only, Idempotent

List all currency pairs supported for unified margin lending

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying the 'unified margin lending' scope, which isn't captured in annotations, giving useful context about the data domain without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately understandable. Every word earns its place by specifying the exact scope of the listing operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only listing tool with no parameters, no output schema, and comprehensive annotations, the description is nearly complete. It specifies the exact resource scope ('unified margin lending'), which is the main missing piece from structured data. A slight gap exists in not mentioning the return format or pagination, but annotations cover the essential behavioral traits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description doesn't need to explain parameters, and it correctly indicates no filtering or input requirements, aligning with the empty schema. No additional parameter semantics are needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and the specific resource ('all currency pairs supported for unified margin lending'), making the purpose explicit. It distinguishes from siblings like 'cex_spot_list_currency_pairs' by specifying the 'unified margin lending' context, which is crucial for correct tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing currency pairs for unified margin lending, but doesn't explicitly state when to use this tool versus alternatives like 'cex_spot_list_currency_pairs' or 'cex_alpha_list_alpha_currencies'. It provides basic context but lacks explicit comparison or exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_mcl_get_multi_collateral_current_rate (Grade: A)
Read-only, Idempotent

Get current interest rates for specified currencies in multi-collateral loans (public)

Parameters (JSON Schema)

Name | Required | Description
vip_level | No | VIP level, defaults to 0 if not specified
currencies | Yes | Currency names separated by commas, e.g. BTC,ETH,USDT. Maximum 100 items
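Because `currencies` is a comma-separated string rather than an array, a caller has to join its list before sending it. A hedged sketch, with the constraints taken from the schema above and the currency choices purely illustrative:

```python
# Illustrative arguments for cex_mcl_get_multi_collateral_current_rate.
# `currencies` must be a comma-separated string with at most 100 items;
# `vip_level` defaults to 0 when omitted.
requested = ["BTC", "ETH", "USDT"]
assert len(requested) <= 100  # documented maximum

args = {
    "currencies": ",".join(requested),  # -> "BTC,ETH,USDT"
    "vip_level": 0,                     # matches the documented default
}
```

Schema-level details like this string format are exactly what the review credits to the 100% schema coverage rather than to the one-line description.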
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context by specifying 'public' (indicating no authentication needed) and implying real-time data ('current'), which enhances behavioral understanding beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Get current interest rates', 'specified currencies', 'multi-collateral loans', 'public') contributes directly to understanding the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), the description is complete enough when combined with annotations. It covers the purpose, scope, and public nature, but lacks details on return format (e.g., rate structure or units), which would be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (currencies as comma-separated list with max 100 items, vip_level with default). The description doesn't add any parameter-specific details beyond what the schema provides, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get current interest rates'), target resource ('for specified currencies in multi-collateral loans'), and scope ('public'), distinguishing it from sibling tools like 'cex_mcl_get_multi_collateral_fix_rate' (which presumably gets fixed rates) and 'cex_mcl_list_multi_collateral_currencies' (which lists currencies).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for current interest rates in multi-collateral loans, but doesn't explicitly state when to use this tool versus alternatives like the 'fix_rate' sibling or other loan-related tools. It provides basic context but lacks explicit exclusions or comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_mcl_get_multi_collateral_fix_rate (Grade: B)
Read-only, Idempotent

Get available fixed interest rates for multi-collateral loans (public)

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds minimal context with '(public)', confirming no authentication needed, but doesn't elaborate on rate limits, response format, or data freshness. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It front-loads the core purpose ('Get available fixed interest rates') and includes essential qualifiers without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no parameters and good annotations, the description is adequate but minimal. It lacks details on output format, data scope (e.g., all currencies/timeframes), or error handling, which could be useful despite no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description doesn't need to explain parameters, and it correctly implies no inputs are required for this public query, aligning with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get available fixed interest rates') and resource ('for multi-collateral loans'), with the qualifier '(public)' indicating no authentication needed. It distinguishes from sibling 'cex_mcl_get_multi_collateral_current_rate' by specifying 'fixed' vs 'current' rates, though it doesn't explicitly mention this distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'cex_mcl_get_multi_collateral_current_rate' for variable rates or other loan-related tools. The description implies public data access but offers no explicit usage context or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_mcl_get_multi_collateral_ltv (Grade: B)
Read-only, Idempotent

Get LTV (Loan-to-Value) ratios for multi-collateral loans (public)

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, open-world, idempotent, and non-destructive, covering the safety profile. The description adds minimal context with '(public)' suggesting no authentication needed, but doesn't elaborate on rate limits, response format, or data freshness. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get LTV ratios') without unnecessary elaboration. Every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no parameters and good annotations, the description is adequate but lacks output details (no output schema) and doesn't clarify scope (e.g., all loans vs. filtered). It's minimally viable but leaves gaps in understanding the return data and how it fits with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to compensate, and the baseline for zero parameters is 4, as it appropriately doesn't discuss parameters that don't exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get LTV ratios') and the target resource ('for multi-collateral loans'), with the qualifier '(public)' indicating it's publicly accessible data. However, it doesn't explicitly differentiate from sibling tools like 'cex_mcl_get_multi_collateral_current_rate' or 'cex_mcl_get_multi_collateral_fix_rate', which appear related but serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools in the 'cex_mcl_' category (current_rate, fix_rate, currencies), there's no indication of when LTV ratios are needed versus other multi-collateral loan metrics, nor any mention of prerequisites or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_mcl_list_multi_collateral_currencies (Grade: A)
Read-only, Idempotent

List all supported currencies for multi-collateral loans (public)

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a safe, read-only, idempotent operation (readOnlyHint: true, destructiveHint: false, idempotentHint: true). The description adds value by specifying the scope ('public') and that it lists 'all' currencies, which provides context beyond annotations. It doesn't disclose rate limits or response format, but with good annotation coverage, this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('List all supported currencies') and includes essential context ('for multi-collateral loans (public)'). There is no wasted wording, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, rich annotations (readOnlyHint, openWorldHint, etc.), and no output schema, the description is adequate but minimal. It covers the basic purpose and scope but lacks details on output format or any behavioral nuances, which could be helpful for an agent despite the annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, so the input schema's 100% description coverage is satisfied vacuously. The description doesn't need to add parameter details, so it meets the baseline. It implicitly confirms no parameters are needed by not mentioning any, which is sufficient for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('all supported currencies for multi-collateral loans'), making it easy to understand what it does. However, it doesn't explicitly differentiate from sibling tools like 'cex_alpha_list_alpha_currencies' or 'cex_earn_list_uni_currencies', which also list currencies in different contexts, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other currency-listing tools in the sibling list. It includes '(public)' which hints at accessibility but doesn't specify context or exclusions, leaving the agent without clear usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_get_options_contract (Grade: B)
Read-only, Idempotent

Get details of a single options contract

Parameters (JSON Schema)

Name | Required | Description
contract | Yes | Options contract name
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds minimal behavioral context beyond this, as it doesn't specify details like rate limits, authentication needs, or what 'details' include. It doesn't contradict annotations, so it earns a baseline score for adding some value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one required parameter) and rich annotations, the description is minimally adequate. However, without an output schema, it doesn't explain what 'details' are returned, leaving a gap. It relies heavily on structured data, making it complete enough but not fully informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'contract' parameter clearly documented. The description implies the tool retrieves details based on a contract name but doesn't add extra semantic context (e.g., format examples or constraints). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('details of a single options contract'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_options_list_options_contracts' (which likely lists multiple contracts), leaving room for improvement in sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as 'cex_options_list_options_contracts' for listing multiple contracts or other options-related tools. It lacks explicit context or exclusions, leaving the agent to infer usage based on the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_get_options_settlement (Grade: A)
Read-only, Idempotent

Get a single options settlement record

Parameters (JSON Schema)
at (required): Settlement timestamp (Unix timestamp in seconds)
contract (required): Options contract name
underlying (required): Underlying asset name, e.g. BTC_USDT
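As a concrete illustration, an MCP client would invoke this tool with a JSON-RPC `tools/call` request carrying all three required arguments. The sketch below uses an illustrative contract name and settlement timestamp, not real Gate data:

```python
import json

# Hypothetical example values; real contract names and settlement
# timestamps must come from the Gate options listings themselves.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cex_options_get_options_settlement",
        "arguments": {
            "underlying": "BTC_USDT",                 # underlying asset name
            "contract": "BTC_USDT-20240329-50000-C",  # placeholder contract name
            "at": 1711699200,                         # settlement time, Unix seconds
        },
    },
}

print(json.dumps(request, indent=2))
```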
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive traits, but the description adds value by specifying 'a single' record, implying it retrieves one item rather than a list. This clarifies the scope beyond what annotations provide, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and resource, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations cover safety traits and the schema fully describes parameters, the description is adequate for a simple read operation. However, without an output schema, it doesn't explain return values (e.g., what fields the settlement record includes), leaving a gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all three parameters. The description doesn't add any parameter-specific details beyond what's in the schema, so it meets the baseline of 3 without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('a single options settlement record'), making the purpose specific and understandable. However, it doesn't differentiate from sibling tools like 'cex_options_list_options_settlements' (which likely lists multiple settlements), missing an opportunity for explicit sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_candlesticks (Grade: B)
Read-only, Idempotent

Get options candlestick data

Parameters (JSON Schema)
to (optional): End timestamp in seconds
from (optional): Start timestamp in seconds
limit (optional): Maximum number of records to return
contract (required): Options contract name
interval (optional): Time interval between data points, e.g. 1m, 5m, 15m, 30m, 1h
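Since `from`, `to`, `limit`, and `interval` are all optional, a client only needs to send the filters it cares about. A minimal sketch of building the arguments for a one-hour window follows; the helper name and contract name are hypothetical, and the interval values are taken from the parameter docs above:

```python
import time

VALID_INTERVALS = {"1m", "5m", "15m", "30m", "1h"}  # from the parameter docs

def candlestick_args(contract, interval="5m", window_seconds=3600, limit=100):
    """Build an arguments dict for cex_options_list_options_candlesticks.

    Hypothetical helper: only 'contract' is required by the schema, so the
    optional time-window filters are anchored to the current time.
    """
    if interval not in VALID_INTERVALS:
        raise ValueError(f"unsupported interval: {interval}")
    now = int(time.time())
    return {
        "contract": contract,
        "interval": interval,
        "from": now - window_seconds,  # start of window, Unix seconds
        "to": now,                     # end of window, Unix seconds
        "limit": limit,
    }

args = candlestick_args("BTC_USDT-20240329-50000-C")
```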
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true) already indicate this is a safe, read-only, idempotent operation with open-world data. The description adds no behavioral details beyond this, such as rate limits, authentication needs, or data freshness. Since annotations cover key traits well, the description's minimal addition is acceptable, but it doesn't enhance transparency further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with a single sentence ('Get options candlestick data'), which is front-loaded and wastes no words. It efficiently conveys the core purpose without unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial data retrieval with 5 parameters), rich annotations (covering safety and idempotency), and no output schema, the description is minimally adequate. It states what the tool does but lacks details on output format, error handling, or usage context. With annotations handling behavioral aspects, the description meets a basic threshold but doesn't provide a complete picture for optimal agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting all parameters (e.g., contract, interval, from, to, limit). The description adds no additional meaning or context about parameters, such as typical usage patterns or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the schema carries the full burden of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get options candlestick data' clearly states the action ('Get') and resource ('options candlestick data'), making the purpose understandable. However, it lacks specificity about what 'options candlestick data' entails (e.g., financial chart data for options contracts) and doesn't distinguish it from similar sibling tools like 'cex_options_list_options_underlying_candlesticks' or 'cex_dc_list_dc_candlesticks', leaving room for ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools involving candlesticks (e.g., for options, underlying assets, or other financial instruments), there is no indication of context, prerequisites, or exclusions to help an agent choose appropriately, relying solely on the tool name for differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_contracts (Grade: B)
Read-only, Idempotent

List all options contracts for an underlying

Parameters (JSON Schema)
expiration (optional): Unix timestamp of expiration date
underlying (required): Underlying asset name, e.g. BTC_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide clear hints: readOnlyHint=true, destructiveHint=false, openWorldHint=true, idempotentHint=true, indicating a safe, non-destructive, repeatable read operation. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or return format details, but doesn't contradict the annotations either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words, front-loading the core action and resource. It's appropriately sized for a straightforward list operation, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (list operation), rich annotations covering safety and behavior, and full schema coverage, the description is minimally adequate. However, with no output schema and many sibling tools, it lacks context on return format, pagination, or differentiation from alternatives, leaving gaps for an agent to infer usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('underlying' and 'expiration'). The description mentions 'for an underlying' but adds no additional semantic context beyond what the schema provides, such as examples beyond 'BTC_USDT' or clarification on optional vs. required usage of 'expiration'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all options contracts for an underlying'), making the tool's purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_options_get_options_contract' (singular) or 'cex_options_list_options_expirations', which could cause confusion about when to use this specific list tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools in the options category (e.g., 'cex_options_get_options_contract', 'cex_options_list_options_expirations'), there's no indication of context, prerequisites, or comparative use cases, leaving the agent to guess based on naming alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_expirations (Grade: B)
Read-only, Idempotent

List option expiration dates for an underlying

Parameters (JSON Schema)
underlying (required): Underlying asset name, e.g. BTC_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (readOnlyHint: true, destructiveHint: false, openWorldHint: true, idempotentHint: true), so the description doesn't need to repeat these. It adds context by specifying the resource ('option expiration dates for an underlying'), but doesn't disclose additional traits like rate limits, auth needs, or output format. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is front-loaded and appropriately sized for its simple function, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one required parameter), rich annotations covering safety and behavior, and no output schema, the description is reasonably complete. It specifies the resource clearly, though it could benefit from mentioning output format or usage context relative to siblings. Overall, it provides enough information for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'underlying' parameter fully documented in the schema. The description implies the parameter's role ('for an underlying') but adds no extra meaning beyond what the schema provides, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('option expiration dates for an underlying'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'cex_options_list_options_contracts' or 'cex_options_list_options_settlements', which might also involve options data but for different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools in the options domain (e.g., 'cex_options_list_options_contracts'), there is no mention of when this specific tool is appropriate or what distinguishes it, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_order_book (Grade: C)
Read-only, Idempotent

Get options order book

Parameters (JSON Schema)
limit (optional): Number of depth levels
with_id (optional): Whether to return depth update ID
contract (required): Options contract name
interval (optional): Price precision for merged depth. 0 means no merging
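The `interval` parameter controls price merging of depth levels (0 disables merging), and `with_id` requests a depth update ID that can support incremental syncing. A hedged sketch of the arguments, with a placeholder contract name:

```python
# Hypothetical example values. The schema does not state whether
# 'interval' is sent as a string or a number, so "0" here follows the
# documented "0 means no merging" wording literally.
arguments = {
    "contract": "BTC_USDT-20240329-50000-C",  # required: options contract name
    "limit": 10,       # optional: number of depth levels per side
    "interval": "0",   # optional: price precision for merged depth
    "with_id": True,   # optional: include the depth update ID
}

print(arguments)
```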
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true) already indicate this is a safe, read-only, idempotent operation with open-world data. The description adds no behavioral details beyond this, such as rate limits, authentication needs, or what 'order book' specifically returns (e.g., bid/ask levels). Since annotations cover core traits, the bar is lower, but the description misses opportunities to add context like data format or update frequency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, a single phrase ('Get options order book') that is front-loaded and wastes no words. It directly states the tool's function without unnecessary elaboration, making it efficient for quick understanding. That conciseness comes at the cost of detail, but the description earns full marks for brevity and clarity within its minimal structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (options market data with 4 parameters) and rich annotations, the description is incomplete. It lacks output details (no output schema provided), does not explain the 'order book' structure or typical use cases, and misses contextual cues like sibling differentiation. While annotations cover safety, the description fails to provide enough context for an agent to fully understand when and how to use this tool effectively, especially compared to similar tools in the list.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter meanings (e.g., 'contract' as options contract name, 'limit' as depth levels). The description does not add any semantic information beyond the schema, such as examples for 'contract' format or typical 'interval' values. Given high schema coverage, the baseline is 3, as the schema adequately documents parameters without extra description value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get options order book' clearly states the action ('Get') and resource ('options order book'), providing a basic purpose. However, it lacks specificity about what an 'options order book' entails (e.g., market depth data for options contracts) and does not differentiate from sibling tools like 'cex_dc_list_dc_order_book' or 'cex_fx_get_fx_order_book', which are similar for different asset types. This makes it vague in distinguishing its exact scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools (e.g., 'cex_options_list_options_tickers' for price data or 'cex_options_list_options_trades' for trade history) or specify contexts like needing market depth for options. Without any usage context or exclusions, the agent must infer based on the name alone, which is insufficient for optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_settlements (Grade: B)
Read-only, Idempotent

List options settlement history for an underlying

Parameters (JSON Schema)
to (optional): End timestamp in seconds
from (optional): Start timestamp in seconds
limit (optional): Maximum number of records to return
offset (optional): List offset, starting from 0
underlying (required): Underlying asset name, e.g. BTC_USDT
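The `limit` and `offset` parameters imply classic offset pagination. A sketch of how a client might page through settlement history; the helper name, page size, and underlying are illustrative:

```python
def settlement_page_args(underlying, page, page_size=100):
    """Arguments for one page of cex_options_list_options_settlements.

    Hypothetical helper: offset starts at 0 per the parameter docs, so
    page N begins at offset N * page_size.
    """
    return {
        "underlying": underlying,    # required
        "limit": page_size,          # records per page
        "offset": page * page_size,  # list offset, starting from 0
    }

pages = [settlement_page_args("BTC_USDT", p) for p in range(3)]
```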
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. It adds context by specifying 'for an underlying,' which clarifies scope, but doesn't provide additional details like pagination behavior, rate limits, or response format, all of which would be helpful given the absence of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and appropriately sized for the tool's complexity, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, 1 required), rich annotations, and 100% schema coverage, the description is adequate but has gaps. It lacks output details (no schema), doesn't explain usage context relative to siblings, and misses behavioral nuances like pagination, leaving room for improvement despite good structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional parameter semantics beyond implying the 'underlying' parameter is required, which is already clear from the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('options settlement history for an underlying'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'cex_options_get_options_settlement' (singular vs. list) or other list tools in the options category, which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'cex_options_get_options_settlement' for single settlements or other filtering options, nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_tickers (Grade: B)
Read-only, Idempotent

Get options tickers for an underlying

Parameters (JSON Schema)
underlying (required): Underlying asset name, e.g. BTC_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits: readOnlyHint=true, destructiveHint=false, openWorldHint=true, idempotentHint=true. The description doesn't add behavioral context beyond this, such as rate limits, authentication needs, or output format. However, it doesn't contradict annotations, so it meets the lower bar with annotations present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded and directly states the tool's purpose, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 required parameter), rich annotations, and no output schema, the description is minimally adequate. It states what the tool does but lacks details on output format or usage context, which could help the agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'underlying' parameter fully documented as 'Underlying asset name, e.g. BTC_USDT'. The description adds no extra parameter details beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get options tickers for an underlying' clearly states the verb ('Get') and resource ('options tickers'), specifying that it operates on an underlying asset. It is distinguished from siblings like 'cex_options_list_options_contracts' or 'cex_options_list_options_underlyings' by its focus on tickers, but it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Siblings include related tools like 'cex_options_list_options_underlying_tickers' and 'cex_options_list_options_contracts', but the description doesn't mention these or specify use cases, leaving the agent to infer based on naming alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_trades (Grade: B)
Read-only, Idempotent

List public options trades

Parameters (JSON Schema)
to (optional): End timestamp in seconds
from (optional): Start timestamp in seconds
type (optional): Option type: C for call, P for put
limit (optional): Maximum number of records to return
offset (optional): List offset, starting from 0
contract (optional): Options contract name
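Every parameter here is optional, including `contract`, so a client can include only the filters it actually sets. A sketch follows; the helper name is hypothetical, and the `type` values come from the parameter docs (C for call, P for put):

```python
def trade_filter_args(option_type=None, contract=None, limit=50):
    """Build arguments for cex_options_list_options_trades.

    Hypothetical helper: all schema parameters are optional, so only the
    filters the caller sets are included in the request.
    """
    if option_type is not None and option_type not in ("C", "P"):
        raise ValueError("type must be 'C' (call) or 'P' (put)")
    args = {"limit": limit}
    if option_type is not None:
        args["type"] = option_type
    if contract is not None:
        args["contract"] = contract
    return args

print(trade_filter_args(option_type="C"))
```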
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide excellent safety information (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true). The description adds minimal behavioral context beyond 'public', which suggests these aren't private, user-specific trades. However, it doesn't mention pagination behavior (implied by limit/offset), rate limits, authentication requirements, or what 'public' specifically entails. With strong annotations, the bar is lower, but more operational context would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is maximally concise: a single four-word phrase that communicates the core purpose without any wasted words. It's perfectly front-loaded with the essential information. Every word earns its place, making this an excellent example of efficient documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, no output schema) and excellent annotation coverage, the description is minimally adequate. The annotations handle safety concerns well, but the description lacks context about what 'public options trades' means operationally, how results are structured, or any performance characteristics. For a listing tool with filtering parameters, more guidance about typical use cases would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all 6 parameters clearly documented in the schema itself. The description adds no parameter information beyond what's already in the structured schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description, which applies here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List public options trades' clearly states the action (list) and resource (public options trades), making the tool's purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'cex_options_list_options_contracts' or 'cex_options_list_options_settlements': while 'trades' is specific, the description could better distinguish what makes options trades unique compared to other options data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance about when to use this tool versus alternatives. With multiple sibling tools for options data (contracts, settlements, candlesticks, tickers, etc.), there's no indication whether this is for historical trades, real-time data, or how it differs from other listing tools. The agent must infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_options_list_options_underlying_candlesticks (Grade: B)
Read-only, Idempotent

Get candlestick data for an options underlying index

ParametersJSON Schema
NameRequiredDescriptionDefault
toNoEnd timestamp in seconds
fromNoStart timestamp in seconds
limitNoMaximum number of records to return
intervalNoTime interval between data points, e.g. 1m, 5m, 15m, 30m, 1h
underlyingYesUnderlying asset name, e.g. BTC_USDT
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate the tool is read-only, non-destructive, idempotent, and open-world, covering the key behavioral traits. The description adds minimal context by specifying 'candlestick data', but does not disclose further details such as rate limits, authentication needs, or data format. With annotations providing a solid foundation, the description earns a baseline 3 for not contradicting them and adding slight value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core purpose, making it quick to understand. This brevity and clarity warrant a 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has rich annotations (read-only, idempotent, etc.) and a fully described input schema, but no output schema. The description is minimal and does not explain return values, data format, or error handling. For a data-retrieval tool with multiple parameters, the description is adequate but lacks depth, resulting in a 3.

Parameters: 3/5

The input schema has 100% description coverage, clearly documenting all five parameters (e.g., 'underlying', 'interval'). The description adds no semantic detail on top of that, such as additional examples for 'underlying'. Given the high schema coverage, the baseline score of 3 applies: the description neither compensates nor detracts.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get candlestick data for an options underlying index.' It specifies the verb ('Get'), resource ('candlestick data'), and target ('options underlying index'), making the intent unambiguous. However, it does not explicitly differentiate from sibling tools like 'cex_options_list_options_candlesticks' or 'cex_spot_get_spot_candlesticks', which limits the score to 4.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It lacks any mention of prerequisites, context, or exclusions, such as how it differs from the other candlestick tools in the sibling list. This absence of usage context results in a 2.

cex_options_list_options_underlyings (Grade B)
Read-only, Idempotent

List all options underlying assets

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Annotations provide strong hints (readOnlyHint: true, destructiveHint: false, openWorldHint: true, idempotentHint: true), covering safety and idempotency. The description adds no behavioral details beyond this, such as rate limits, authentication needs, or data format, but it doesn't contradict the annotations, so it meets the lower bar that applies when annotations are present.

Conciseness: 5/5

The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and resource, making it highly efficient and easy for an agent to parse.

Completeness: 3/5

Given the tool's simplicity (no parameters, no output schema) and rich annotations, the description is minimally adequate. It lacks details on output format and data scope, which would be helpful even though the annotations cover the safety aspects. It is complete enough for a basic list tool but leaves room for more context.

Parameters: 4/5

The tool takes no parameters, so there is nothing for the schema or the description to document. The description appropriately adds no parameter details, warranting a score above the baseline 3.

Purpose: 4/5

The description clearly states the verb ('List') and resource ('all options underlying assets'), making the purpose specific and understandable. However, it doesn't distinguish this tool from siblings like 'cex_options_list_options_contracts' or 'cex_options_list_options_expirations', which also list options-related data but cover different aspects.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It mentions no context, prerequisites, or exclusions, and does not refer to sibling tools that serve similar or complementary purposes, leaving the agent without usage direction.

cex_options_list_options_underlying_tickers (Grade B)
Read-only, Idempotent

Get ticker data for all contracts under an underlying

Parameters (JSON Schema)

Name        Required  Description
underlying  Yes       Underlying asset name, e.g. BTC_USDT
Behavior: 3/5

Annotations already cover the key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds minimal context by implying data retrieval for contracts under a specific underlying, but doesn't disclose traits like rate limits, authentication needs, or response format. It does not contradict the annotations.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part earns its place by directly stating what the tool does.

Completeness: 3/5

Given the tool's low complexity (one parameter, no output schema) and rich annotations, the description is minimally adequate. It covers the basic purpose but lacks detail on usage context, output format, and behavioral nuances beyond the annotations, leaving the agent to infer correct invocation.

Parameters: 3/5

Schema description coverage is 100%, with the single parameter 'underlying' fully documented in the schema. The description adds no extra meaning (e.g., format examples or constraints), so it meets the baseline of 3 where the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the action ('Get ticker data') and target ('all contracts under an underlying'), providing a specific verb-plus-resource combination. However, it doesn't explicitly differentiate from siblings like 'cex_options_list_options_tickers' or 'cex_options_list_options_contracts', which may offer overlapping functionality.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., 'cex_options_list_options_tickers' for general ticker data or 'cex_options_list_options_contracts' for contract details), prerequisites, or the contexts in which this tool is preferred.

cex_spot_get_currency (Grade B)
Read-only, Idempotent

Get details of a single currency

Parameters (JSON Schema)

Name      Required  Description
currency  Yes       Currency name
Behavior: 3/5

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, so the description doesn't need to repeat safety aspects. However, it adds no behavioral context beyond the basic operation: no details on rate limits, authentication needs, error handling, or what the 'details' include, leaving gaps despite the good annotation coverage.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

Completeness: 3/5

Given the simple single-parameter schema and comprehensive annotations, the description is minimally adequate. With no output schema, however, it doesn't explain what 'details' are returned, and it offers no differentiation from sibling tools, leaving room for improvement.

Parameters: 3/5

The input schema has 100% description coverage, with the 'currency' parameter clearly documented. The description adds nothing beyond this, such as examples or format specifics, so it meets the baseline without providing extra value over the schema.

Purpose: 4/5

The description clearly states the action ('Get details') and resource ('a single currency'), making the purpose understandable. However, it doesn't differentiate from siblings like 'cex_spot_list_currencies', which retrieves multiple currencies, leaving some ambiguity about when to use each.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'cex_spot_list_currencies' and 'cex_alpha_list_alpha_currencies' available, there is no indication of context, prerequisites, or exclusions for this specific retrieval tool.

cex_spot_get_currency_pair (Grade A)
Read-only, Idempotent

Get details of a single currency pair

Parameters (JSON Schema)

Name           Required  Description
currency_pair  Yes       Currency pair name, e.g. BTC_USDT
Behavior: 4/5

Annotations already declare this as read-only, open-world, idempotent, and non-destructive, so the agent knows it's a safe lookup operation. The description adds the specific scope ('single currency pair'), useful behavioral context beyond the annotations. However, it doesn't mention potential rate limits, authentication requirements, or error conditions.

Conciseness: 5/5

The description is a single, focused sentence that efficiently communicates the core purpose. There's no wasted verbiage, repetition, or unnecessary elaboration; every word earns its place.

Completeness: 3/5

For a simple lookup tool with comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint) and full schema coverage, the description provides adequate context. Without an output schema, however, it doesn't indicate what the 'details' include (e.g., price, volume, status), leaving some ambiguity about the return format.

Parameters: 3/5

Schema description coverage is 100%, with the single parameter 'currency_pair' fully documented. The description adds no parameter semantics beyond what the schema provides (e.g., format constraints, validation rules, or examples beyond 'BTC_USDT'). This meets the baseline expectation when schema coverage is complete.

Purpose: 4/5

The description clearly states the verb 'Get' and resource 'details of a single currency pair', making the purpose immediately understandable. However, it doesn't differentiate from siblings like 'cex_spot_list_currency_pairs' (which likely lists multiple pairs) or 'cex_spot_get_currency' (which returns details of a single currency rather than a pair).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, of when this tool is appropriate versus the listing tools, or of what constitutes a valid currency pair beyond the schema's example, leaving the agent to guess the proper usage context.

cex_spot_get_spot_candlesticks (Grade A)
Read-only, Idempotent

Get candlestick/OHLCV data for a currency pair

Parameters (JSON Schema)

Name           Required  Description
to             No        End time of K-line in Unix timestamp (seconds). Defaults to current time if not specified
from           No        Start time of K-line in Unix timestamp (seconds). Defaults to to - 100 * interval if not specified
limit          No        Maximum number of data points to return. Mutually exclusive with from/to parameters
interval       No        Time interval of data points. Note: 30d represents a natural month, not 30 days
currency_pair  Yes       Currency pair
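The parameter interactions documented above (limit is mutually exclusive with from/to, from defaults to to - 100 * interval, and to defaults to the current time) can be sketched as a small client-side helper. This is a hypothetical illustration, not part of the server; the interval-to-seconds mapping is an assumption and covers only a few of the supported intervals.

```python
# Hypothetical client-side validation for the candlestick query parameters.
# Only the rules stated in the parameter table are encoded here.
INTERVAL_SECONDS = {"1m": 60, "5m": 300, "15m": 900, "30m": 1800, "1h": 3600}

def build_candlestick_query(currency_pair, interval="1h", limit=None,
                            from_ts=None, to_ts=None, now=None):
    """Return a query dict, enforcing the documented parameter rules."""
    if limit is not None and (from_ts is not None or to_ts is not None):
        raise ValueError("limit is mutually exclusive with from/to")
    query = {"currency_pair": currency_pair, "interval": interval}
    if limit is not None:
        query["limit"] = limit
        return query
    if to_ts is None:
        to_ts = now  # the API itself defaults `to` to the current time
    if from_ts is None and to_ts is not None:
        # Documented default window: from = to - 100 * interval
        from_ts = to_ts - 100 * INTERVAL_SECONDS[interval]
    if from_ts is not None:
        query["from"] = from_ts
    if to_ts is not None:
        query["to"] = to_ts
    return query
```

Encoding the mutual exclusion client-side surfaces a clear error before the tool call instead of an opaque server response.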
Behavior: 4/5

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, covering the key behavioral traits. The description adds no further behavioral context (e.g., rate limits, authentication needs, or data freshness), but it doesn't contradict the annotations either. With comprehensive annotations, the bar is lower, and the description's simplicity is acceptable.

Conciseness: 5/5

The description is a single, focused sentence that states exactly what the tool does without any fluff. It's front-loaded with the core purpose and wastes no words.

Completeness: 3/5

Given the rich annotations (covering safety and behavior) and full schema coverage, the description is adequate for a read-only data-fetching tool. Without an output schema, however, it doesn't hint at the return format (e.g., an array of candlesticks with OHLCV fields), leaving the agent a gap when interpreting results.

Parameters: 3/5

Schema description coverage is 100%, with all parameters well documented in the input schema. The description mentions 'currency pair' but adds no semantic context beyond the schema (e.g., format examples or typical values). This meets the baseline score when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the action ('Get') and resource ('candlestick/OHLCV data for a currency pair'), making the purpose immediately understandable. However, it doesn't differentiate from siblings like 'cex_fx_get_fx_candlesticks' or 'cex_options_list_options_candlesticks' by specifying that it covers spot markets, which would have earned a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With multiple candlestick tools in the sibling list (e.g., for FX, options, and DC markets), there's no indication that this one is specifically for spot trading pairs, nor any mention of prerequisites or typical use cases.

cex_spot_get_spot_order_book (Grade B)
Read-only, Idempotent

Get the order book for a currency pair

Parameters (JSON Schema)

Name           Required  Description
limit          No        Maximum number of order depth levels to return, default 10, max 100
with_id        No        Whether to return the order book update ID
interval       No        Order depth aggregation precision. 0 means no aggregation
currency_pair  Yes       Currency pair
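The defaults in the table (a depth limit of 10, a maximum of 100, and interval 0 meaning no aggregation) can be applied client-side before the call. A minimal sketch, assuming a hypothetical helper name; only the default and cap come from the table above.

```python
def build_order_book_query(currency_pair, limit=None, interval=None, with_id=False):
    """Assemble order-book query params, applying the documented defaults."""
    if limit is None:
        limit = 10            # documented default number of depth levels
    limit = min(limit, 100)   # documented maximum depth
    query = {"currency_pair": currency_pair, "limit": limit}
    if interval is not None:
        query["interval"] = interval  # 0 requests unaggregated depth
    if with_id:
        query["with_id"] = True       # include the order book update ID
    return query
```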
Behavior: 3/5

Annotations already cover the key behavioral traits: read-only, open-world, idempotent, and non-destructive. The description adds minimal context beyond this; 'Get' aligns with read-only, but it doesn't disclose behaviors like rate limits, error handling, or response format. With annotations supplying the safety profile, a baseline 3 is appropriate for the limited added value.

Conciseness: 5/5

The description is a single, efficient sentence with no wasted words, directly stating the tool's purpose. It's appropriately sized for a simple read operation and front-loaded with the core action.

Completeness: 3/5

Given the tool's low complexity (a read-only query), rich annotations covering safety and idempotency, and full schema coverage, the description is minimally adequate. However, there is no output schema, and the description doesn't hint at return values or error cases, leaving gaps in edge scenarios.

Parameters: 3/5

Schema description coverage is 100%, with clear documentation for all parameters (currency_pair, limit, with_id, interval). The description mentions 'currency pair' but adds no further semantic detail, such as format examples or default behaviors. A baseline 3 is correct when the schema handles parameter documentation effectively.

Purpose: 4/5

The description 'Get the order book for a currency pair' clearly states the verb ('Get') and resource ('order book'), with the spot context implied by 'spot' in the tool name. However, it doesn't explicitly differentiate from siblings like 'cex_dc_list_dc_order_book' or 'cex_fx_get_fx_order_book', which serve the same purpose for different markets (DC and FX).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the spot-market context, prerequisites such as authentication, or how it compares to the other order-book tools in the sibling list (e.g., for DC or FX markets), leaving the agent to infer usage from the tool name alone.

cex_spot_get_spot_tickers (Grade A)
Read-only, Idempotent

Get ticker information for one or all currency pairs

Parameters (JSON Schema)

Name           Required  Description
timezone       No        Timezone, e.g. Asia/Shanghai. Affects the time range of statistics
currency_pair  No        Currency pair name, e.g. BTC_USDT. Returns all pairs if not specified
Behavior: 3/5

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or return-format details. Since the annotations handle the core traits, a baseline 3 is appropriate, but no extra value is added.

Conciseness: 5/5

The description is a single, efficient sentence that states the tool's function without redundancy. It is front-loaded with the core action and scope, with no wasted words or unnecessary elaboration.

Completeness: 4/5

Given the tool's low complexity (two optional parameters, no output schema) and rich annotations covering read-only, open-world, and idempotent hints, the description is reasonably complete. It specifies the resource and scope, though it could mention the return type (e.g., the ticker data fields) since no output schema exists. Overall, it provides adequate context for basic use.

Parameters: 3/5

Schema description coverage is 100%, with clear descriptions for both parameters (timezone and currency_pair). The description's 'for one or all currency pairs' aligns with the currency_pair parameter's behavior but adds no semantic detail beyond the schema. With high schema coverage, the baseline 3 is met without additional parameter insight.

Purpose: 4/5

The description clearly states the verb 'Get' and resource 'ticker information' with the scope 'for one or all currency pairs', making the purpose unambiguous. However, it doesn't explicitly differentiate from siblings like 'cex_spot_get_spot_candlesticks' or 'cex_spot_get_spot_trades', which also retrieve spot market data but for different resources.

Usage Guidelines: 3/5

The description implies usage by mentioning 'for one or all currency pairs', suggesting it suits both specific and general queries. However, it lacks explicit guidance on when to choose this tool over alternatives (e.g., versus 'cex_spot_get_spot_candlesticks' for historical data) and states no prerequisites, leaving the usage context partly inferred.

cex_spot_get_spot_trades (Grade B)
Read-only, Idempotent

Get recent trades for a currency pair

Parameters (JSON Schema)

Name           Required  Description
to             No        End timestamp in seconds, defaults to current time if not specified
from           No        Start timestamp in seconds
page           No        Page number
limit          No        Maximum number of records to return, default 100
last_id        No        Use the last record ID from the previous list as the starting point for the next list
reverse        No        Whether to retrieve records less than last_id. By default, returns records greater than last_id
currency_pair  Yes       Currency pair
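The last_id/reverse pair in the table implies cursor-style pagination rather than page numbers alone. A sketch of forward paging, assuming a hypothetical fetch callable that performs one tool call and returns a list of trade dicts, each with an "id" field:

```python
def paginate_trades(fetch, currency_pair, limit=100):
    """Yield trades page by page using the last_id cursor.

    `fetch` is a hypothetical callable standing in for one tool call;
    it takes a query dict and returns a list of trade dicts.
    """
    last_id = None
    while True:
        params = {"currency_pair": currency_pair, "limit": limit}
        if last_id is not None:
            params["last_id"] = last_id  # continue after the previous page
        page = fetch(params)
        if not page:
            return
        yield from page
        if len(page) < limit:
            return  # a short page means the cursor is exhausted
        last_id = page[-1]["id"]
```

With the default of returning records greater than last_id, this walks forward through the trade history; passing reverse would invert the direction.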
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, so the agent knows it's a safe, repeatable query. The description adds minimal behavioral context beyond this - it implies 'recent' trades but doesn't specify default time ranges, rate limits, authentication needs, or pagination behavior. With annotations covering safety, a 3 is appropriate as the description adds some value but not rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a straightforward query tool and gets straight to the point without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (7 parameters, no output schema) and rich annotations, the description is minimally adequate. It states what the tool does but lacks context about market type (spot vs derivatives), response format, or differentiation from similar tools. The annotations provide safety information, but the description doesn't compensate for the missing output schema or sibling differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all 7 parameters are well-documented in the schema itself. The description mentions 'recent trades' which hints at time-based filtering (aligning with 'from' and 'to' parameters) but doesn't add meaningful semantic context beyond what the schema already provides. Baseline 3 is correct when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('recent trades for a currency pair'), making the purpose specific and understandable. However, it doesn't distinguish this tool from sibling tools like 'cex_dc_list_dc_trades' or 'cex_fx_get_fx_trades', which appear to serve similar functions in different contexts (DC/FX vs spot markets).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple trade-related tools in the sibling list (e.g., cex_dc_list_dc_trades, cex_fx_get_fx_trades, cex_options_list_options_trades), there's no indication of which market context this applies to (spot vs derivatives) or any prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cex_spot_list_currencies (Grade: A)
Read-only, Idempotent

List all currencies supported

Parameters (JSON Schema)

No parameters
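Because the tool takes no arguments, invoking it over MCP reduces to a bare tools/call request. A sketch of the JSON-RPC 2.0 envelope an MCP client would send (the request id is arbitrary):

```python
import json

def make_tools_call(name, arguments=None, request_id=1):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# A no-argument call such as cex_spot_list_currencies:
payload = json.dumps(make_tools_call("cex_spot_list_currencies"))
```

Over Streamable HTTP this payload is POSTed to the server's MCP endpoint; parameterized tools differ only in the `arguments` object.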

Behavior: 4/5

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, covering key behavioral traits. The description adds no additional context about rate limits, authentication needs, or response format. However, since annotations provide comprehensive safety information, the description meets the lower bar without contradicting them, earning a baseline score for not undermining structured data.

Conciseness: 5/5

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately scannable and easy for an agent to parse without unnecessary elaboration.

Completeness: 3/5

Given the tool's simplicity (0 parameters, annotations covering safety), the description is adequate but minimal. It lacks output details (no schema provided) and doesn't clarify scope relative to siblings, leaving gaps in full contextual understanding. For a list operation with rich annotations, it meets the minimum viable threshold but could be more informative.

Parameters: 4/5

With 0 parameters and 100% schema description coverage, the input schema fully documents the lack of inputs. The description doesn't need to compensate, as there are no parameters to explain. It appropriately doesn't mention parameters, aligning with the schema's completeness for this simple case.

Purpose: 4/5

The description clearly states the action ('List') and resource ('currencies supported'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'cex_spot_list_currency_pairs' or 'cex_alpha_list_alpha_currencies', which would require specifying what makes this currency list distinct (e.g., spot trading vs. alpha currencies).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With multiple currency-related tools in the sibling list (e.g., 'cex_spot_list_currency_pairs', 'cex_alpha_list_alpha_currencies'), there's no indication of context, prerequisites, or differentiation, leaving the agent to guess based on naming patterns alone.

cex_spot_list_currency_pairs (Grade: B)
Read-only, Idempotent

List all available spot trading pairs

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds minimal context by implying it returns all pairs without filtering, but doesn't detail format, pagination, or rate limits. It doesn't contradict annotations, so a baseline 3 is appropriate.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core action ('List all available spot trading pairs') with zero wasted words. It's appropriately sized for a simple list tool with no parameters.

Completeness: 3/5

Given the tool's low complexity (0 params, no output schema) and rich annotations, the description is minimally adequate. However, it lacks context on output format or how to interpret results, which would be helpful despite annotations covering safety. It's complete enough for basic use but could be more informative.

Parameters: 4/5

With 0 parameters and 100% schema description coverage, the input schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it correctly implies no filtering parameters by stating 'all available', aligning with the schema. Baseline for 0 params is 4.

Purpose: 4/5

The description clearly states the verb ('List') and resource ('all available spot trading pairs'), making the purpose unambiguous. However, it doesn't differentiate from sibling tools like 'cex_spot_list_currencies' or 'cex_spot_get_currency_pair', which also list spot-related data, so it misses full sibling distinction.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools listing various spot, options, and other trading data, there's no indication of context, prerequisites, or exclusions for selecting this specific list tool.

cex_square_list_live_replay (Grade: B)
Read-only, Idempotent

List live stream replays from Gate Square

Parameters (JSON Schema)

- `tag` (optional): Business type filter: Market Analysis, Hot Topics, Blockchain, Others
- `coin` (optional): Currency name filter (e.g. BTC, ETH)
- `sort` (optional): Sort order: hot = hottest (default), new = newest
- `limit` (optional): Number of results, 1~10, default 3
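The constraints in the table (tag drawn from a fixed set, limit between 1 and 10 with a default of 3, sort defaulting to hot) are easy to enforce client-side before calling the tool. The validator below is a hypothetical sketch of that check, not part of the server:

```python
VALID_TAGS = {"Market Analysis", "Hot Topics", "Blockchain", "Others"}

def build_replay_args(tag=None, coin=None, sort="hot", limit=3):
    """Validate arguments for cex_square_list_live_replay per its documented constraints."""
    if tag is not None and tag not in VALID_TAGS:
        raise ValueError(f"unknown tag: {tag!r}")
    if sort not in ("hot", "new"):
        raise ValueError("sort must be 'hot' or 'new'")
    if not 1 <= limit <= 10:
        raise ValueError("limit must be between 1 and 10")
    args = {"sort": sort, "limit": limit}
    if tag is not None:
        args["tag"] = tag
    if coin is not None:
        args["coin"] = coin.upper()  # currency names like BTC, ETH
    return args
```

Validating before the call turns a likely server-side rejection into an immediate, explainable client error.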
Behavior: 3/5

The annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive), so the description doesn't need to repeat these. The description adds minimal context by specifying the source ('from Gate Square'), but doesn't elaborate on rate limits, authentication needs, or what 'live stream replays' actually means in this context.

Conciseness: 5/5

The description is a single, efficient sentence that states exactly what the tool does without any wasted words. It's perfectly front-loaded with the core functionality, making it immediately understandable to an agent scanning multiple tool descriptions.

Completeness: 3/5

Given the rich annotations (covering safety and behavior) and comprehensive parameter documentation in the schema, the description provides adequate context for a read-only list operation. However, without an output schema, the description could ideally mention what format the replays are returned in or what fields to expect, though this isn't strictly required.

Parameters: 3/5

With 100% schema description coverage, all four parameters are well-documented in the input schema itself. The description adds no additional parameter information beyond what's already in the schema, so it meets the baseline expectation but doesn't provide extra value.

Purpose: 4/5

The description clearly states the action ('List') and resource ('live stream replays from Gate Square'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its many sibling list tools (like cex_square_list_square_ai_search), which all follow similar naming patterns but target different data types.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With 50+ sibling tools, many of which are also list operations for different data types, the agent receives no help in selecting this specific tool for 'live stream replays' over other list tools for different resources.
