
Gate DEX MCP

Server Details

Gate DEX MCP for wallet auth, transfers, swaps, token info, market data, and RPC access.

Status: Healthy
Transport: Streamable HTTP
Repository: gate/gate-mcp
GitHub Stars: 27

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 4/5 across 47 of 47 tools scored. Lowest: 3.1/5.

Server Coherence: Grade B
Disambiguation: 2/5

Multiple tools have overlapping purposes, causing significant ambiguity. For example, dex_auth_gate_login_poll, dex_auth_login_gate_wallet, and dex_auth_gate_login_start all handle Gate OAuth with unclear boundaries, and dex_tx_list, dex_tx_swap_history_list, and dex_tx_detail overlap in transaction history retrieval. The descriptions provide some guidance, but the high tool count and similar naming make it easy for an agent to select the wrong tool.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern with a clear prefix structure (e.g., dex_auth_, dex_market_, dex_token_, dex_tx_, dex_wallet_), which aids readability. There are minor deviations, such as dex_tx_get_sol_unsigned using 'get' inconsistently compared to others like dex_tx_transfer_preview, but overall the naming is predictable and well-organized.

Tool Count: 2/5

With 47 tools, the count is excessive for a DEX server, leading to a heavy and overwhelming interface. This many tools suggests poor scoping, as many functions could be consolidated (e.g., multiple auth and transaction tools). A more focused set of 10-20 tools would better serve the domain without sacrificing functionality.

Completeness: 5/5

The tool set provides comprehensive coverage for a DEX domain, including authentication, market data, token management, transaction handling (transfers, swaps, approvals), wallet operations, and cross-chain functionality. There are no obvious gaps; it supports full CRUD/lifecycle operations from login to trading and portfolio management, ensuring agents can handle end-to-end workflows without dead ends.

Available Tools

47 tools
dex_agentic_report: Grade B
Destructive

Register agentic wallet addresses with the wallet service.

Parameters (JSON Schema)
- wallets (required): Array of wallet info objects. Each wallet contains walletID and an accounts array. Each account contains accountID and a chainAddressList array. Each chainAddress contains networkKey, chains, chainAddress, and optional accountKey/accountFormat.
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it results in a 403 error.
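The nested shape of the `wallets` argument described by the schema can be sketched as follows; the field names come from the parameter description, while every concrete value is an invented placeholder:

```python
# Hypothetical payload for dex_agentic_report. Field names follow the
# schema's parameter description; all concrete values are made up.
wallets = [
    {
        "walletID": "wallet-1",
        "accounts": [
            {
                "accountID": "account-1",
                "chainAddressList": [
                    {
                        "networkKey": "eth",        # placeholder network key
                        "chains": ["ethereum"],     # placeholder chain list
                        "chainAddress": "0x0000000000000000000000000000000000000001",
                        "accountKey": "key-1",      # optional
                        "accountFormat": "hex",     # optional
                    }
                ],
            }
        ],
    }
]

# Per the description, mcp_token must accompany every post-login call,
# or the server responds with 403.
arguments = {"wallets": wallets, "mcp_token": "<mcp_token from login poll>"}
```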
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent, non-read-only, open-world operation. The description adds value by specifying it's a registration action, implying creation/mutation, which aligns with annotations. However, it doesn't elaborate on what 'destructive' entails (e.g., overwriting existing data) or mention rate limits, error conditions, or side effects beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero wasted words. It's front-loaded with the core action and resource, making it easy to scan and understand quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has no output schema, that annotations cover the key behavioral traits, and that the schema fully describes the parameters, the description is minimally adequate. However, it lacks context on typical workflows (e.g., use after authentication), expected outcomes, or error handling, leaving gaps for an agent trying to operate effectively in this complex wallet service environment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no additional meaning about parameters beyond implying 'wallets' are for registration. This meets the baseline of 3, as the schema carries the full burden of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Register') and resource ('agentic wallet addresses with the wallet service'), providing a specific purpose. However, it doesn't explicitly differentiate from sibling tools like 'dex_wallet_get_addresses' or 'dex_wallet_bind_exchange_uid', which also deal with wallet addresses, leaving some ambiguity about when to choose this tool over others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., requiring authentication via sibling tools like dex_auth_google_login_poll), exclusions, or typical use cases. The agent must infer usage from the tool name and schema alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_gate_login_poll: Grade A
Destructive

Poll Gate OAuth; returns pending/ok/error. [write] On ok: replace mcp_token unless linked_to_google_session is true. Pass auto_replace_binding=true to rebind exchange UID on new Gate session.

Parameters (JSON Schema)
- flow_id (required): Flow ID returned by dex_auth_gate_login_start.
- auto_replace_binding (optional): If true and status is ok: when the new session's walletType is not gate_mcp/gate_quick, calls BW POST /v1/wallet/inner/rebind (same as dex_wallet_replace_binding). Gate-native sessions are blocked without calling BW.
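The poll step is naturally wrapped in a loop. A minimal sketch, assuming a generic `call_tool(name, arguments)` helper standing in for whatever MCP client is in use; the pending/ok/error status values come from the tool description:

```python
import time

def poll_gate_login(call_tool, flow_id, auto_replace_binding=False,
                    interval=2.0, timeout=120.0):
    """Poll dex_auth_gate_login_poll until the flow leaves 'pending'.

    call_tool is a hypothetical stand-in for an MCP client's tool-call
    method. On 'ok', the caller should replace the stored mcp_token
    unless linked_to_google_session is true, per the tool description.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("dex_auth_gate_login_poll", {
            "flow_id": flow_id,
            "auto_replace_binding": auto_replace_binding,
        })
        if result["status"] == "ok":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Gate login failed: {result}")
        time.sleep(interval)  # still pending; wait before the next poll
    raise TimeoutError("Gate OAuth flow was not approved before the timeout")
```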
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. Annotations indicate it's destructive (destructiveHint: true) and not read-only/idempotent, but the description clarifies what gets destroyed/replaced ('replace mcp_token') and under what conditions ('unless linked_to_google_session is true'). It also explains the effect of 'auto_replace_binding=true' and mentions blocking behavior for Gate-native sessions. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences. The first sentence states the core purpose and return states. The second sentence adds important behavioral details and parameter guidance. While dense, every part earns its place by providing essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (OAuth polling with token replacement logic), the description provides good context. It explains the polling mechanism, return states, token replacement conditions, and parameter effects. While there's no output schema, the description mentions possible return values (pending/ok/error). The main gap is lack of explicit error handling details, but overall it's reasonably complete for this authentication tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing good documentation for both parameters. The description adds some context for 'auto_replace_binding' by explaining its effect ('rebind exchange UID on new Gate session') and referencing another tool ('dex_wallet_replace_binding'), but doesn't add significant meaning beyond the schema's detailed descriptions. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Poll Gate OAuth; returns pending/ok/error.' It specifies the action (polling) and the resource (Gate OAuth), and indicates the possible return states. However, it doesn't explicitly differentiate from its sibling 'dex_auth_google_login_poll' beyond the 'Gate' vs 'Google' naming, which is somewhat implied but not explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: it's for polling OAuth status after starting a login with 'dex_auth_gate_login_start' (implied by the flow_id parameter). It mentions an alternative action ('replace mcp_token') and a specific parameter condition ('auto_replace_binding=true'). However, it doesn't explicitly state when NOT to use it or compare it to other auth tools like 'dex_auth_google_login_poll'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_gate_login_start: Grade A
Destructive

Start Gate device OAuth; returns verification_url. Pass link_mcp_token on a Google session to bind Gate UID without replacing mcp_token.

Parameters (JSON Schema)
- link_mcp_token (optional): Current Google wallet mcp_token for bind-only Gate OAuth; omit for standalone Gate login.
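End to end, the device flow pairs this tool with the poll tool: start the flow, show the user the verification URL, then poll with the returned flow ID. A minimal sketch, again assuming a hypothetical generic `call_tool` helper; the flow_id and verification_url field names are taken from the tool and parameter descriptions:

```python
def start_gate_device_login(call_tool, link_mcp_token=None):
    """Kick off Gate device OAuth and hand the verification URL to the user.

    Pass link_mcp_token only on an existing Google session to bind the
    Gate UID without replacing the stored mcp_token; omit it for a
    standalone Gate login (per the tool description). call_tool is a
    hypothetical MCP client helper.
    """
    args = {}
    if link_mcp_token is not None:
        args["link_mcp_token"] = link_mcp_token
    result = call_tool("dex_auth_gate_login_start", args)
    print(f"Open this URL to approve the login: {result['verification_url']}")
    return result["flow_id"]  # feed this to dex_auth_gate_login_poll
```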
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds some behavioral context beyond annotations: it mentions that the tool returns a verification_url and explains the binding logic for link_mcp_token. However, annotations already provide critical information (destructiveHint: true, readOnlyHint: false, etc.), so the bar is lower. The description doesn't elaborate on what 'destructive' means in this context (e.g., does it invalidate existing sessions?) or discuss rate limits/errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: the first clause states the core purpose, and the second explains parameter usage. Every sentence earns its place with no wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (OAuth flow with binding logic), lack of output schema, and rich annotations, the description is somewhat complete but has gaps. It explains the return value (verification_url) and parameter usage, but doesn't describe error cases, what happens after verification, or how this integrates with the poll sibling. For a destructive authentication tool, more behavioral context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the single parameter link_mcp_token. The description adds marginal value by explaining when to use this parameter (for binding vs. standalone login), but doesn't provide additional syntax or format details beyond what's in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Start Gate device OAuth; returns verification_url.' It specifies the action (start OAuth), the resource (Gate device), and the output (verification_url). However, it doesn't explicitly distinguish this tool from its sibling 'dex_auth_gate_login_poll' or 'dex_auth_google_login_start', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use the optional parameter: 'Pass link_mcp_token on a Google session to bind Gate UID without replacing mcp_token; omit for standalone Gate login.' This gives explicit guidance on parameter usage scenarios. However, it doesn't mention when to choose this tool over alternatives like 'dex_auth_login_gate_wallet' or 'dex_wallet_google_gate_bind_start', which are related sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_google_login_poll: Grade A
Destructive

Poll Google OAuth flow; returns pending/ok/error. [write] On ok: immediately replace stored mcp_token with login_result.mcp_token — new token is tied to a different account.

Parameters (JSON Schema)
- flow_id (required): Flow ID returned by dex_auth_google_login_start.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it discloses the return states (pending/ok/error), the destructive action of replacing stored tokens on success, and that the new token is tied to a different account. This complements annotations like destructiveHint=true by specifying what gets destroyed and under what conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by critical behavioral details in a compact two-sentence format. Every sentence adds essential information without redundancy, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (authentication flow with destructive actions) and lack of output schema, the description provides strong context on behavior and outcomes. However, it could be more complete by detailing error states or prerequisites more explicitly, though annotations cover some aspects like destructiveHint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the flow_id parameter. The description does not add further meaning or details about this parameter beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Poll') and resource ('Google OAuth flow'), and distinguishes it from sibling tools like dex_auth_google_login_start by focusing on the polling phase rather than initiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by referencing 'flow_id returned by dex_auth_google_login_start', indicating when to use this tool (after starting the flow). However, it does not explicitly state when not to use it or name alternatives for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_google_login_start: Grade B
Destructive

Returns browser verification URL for Google device OAuth.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a non-read-only operation with potential side effects. The description adds value by specifying it returns a URL for OAuth, which implies user interaction and browser redirection, but does not detail rate limits, auth needs, or what gets destroyed (e.g., session state). It does not contradict annotations, so it earns a baseline score for adding some context beyond the structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the key action ('Returns'), making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (authentication flow with potential side effects), annotations cover safety aspects, but there is no output schema. The description mentions the return type (URL) but lacks details on response format, error handling, or next steps (e.g., use with polling tools). It is minimally adequate but has clear gaps for a tool in a security-sensitive context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, earning a high score as it avoids redundancy. A perfect 5 is reserved for cases where the description adds extra semantic clarity, which is not applicable here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Returns') and the resource ('browser verification URL for Google device OAuth'), making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'dex_auth_google_login_poll' or 'dex_auth_gate_login_start', which might handle similar authentication flows, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other login-related tools in the sibling list (e.g., 'dex_auth_google_login_poll' for polling status or 'dex_auth_gate_login_start' for different authentication methods). It lacks explicit context or exclusions, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_login_gate_wallet: Grade A
Destructive

Exchange Gate OAuth code for MCP token. [write] On success, immediately replace any stored mcp_token — new token is tied to a different account.

Parameters (JSON Schema)
- code (required): Gate OAuth authorization code.
- redirect_url (required): Gate OAuth redirect URL used in the authorization request.
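The replace-don't-keep token semantics described here can be made explicit in client code. A minimal sketch, assuming a hypothetical generic `call_tool` helper and a simple dict-based token store:

```python
def exchange_gate_code(call_tool, code, redirect_url, token_store):
    """Exchange a Gate OAuth authorization code for an MCP token.

    Per the tool description, the new token is tied to a different
    account, so any previously stored mcp_token is overwritten rather
    than kept alongside it. call_tool and token_store are placeholders.
    """
    result = call_tool("dex_auth_login_gate_wallet", {
        "code": code,
        "redirect_url": redirect_url,
    })
    token_store["mcp_token"] = result["mcp_token"]  # replace, never append
    return result
```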
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that on success, it 'immediately replace[s] any stored mcp_token' and that the 'new token is tied to a different account.' This clarifies the destructive nature (matching destructiveHint=true) and adds important side effects. However, it doesn't mention error handling or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two sentences) and front-loaded with the core purpose. Every word earns its place: the first sentence states the action, and the second adds critical behavioral context. There's no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (authentication with side effects), annotations cover safety aspects (destructive, non-idempotent), and the description adds key behavioral details. However, without an output schema, it doesn't describe the return value format (e.g., token structure or error responses), leaving a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (code and redirect_url). The description doesn't add any parameter-specific details beyond what's in the schema, so it meets the baseline of 3 for adequate coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Exchange Gate OAuth code for MCP token') and distinguishes it from siblings like dex_auth_gate_login_start (which initiates OAuth) and dex_auth_logout (which ends sessions). It precisely identifies both the input (OAuth code) and output (MCP token).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: after obtaining an OAuth code from Gate, as part of an OAuth flow. It distinguishes from siblings by specifying it's for Gate (not Google) authentication and for exchanging codes (not starting or polling). The context implies it should not be used without a valid code.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_login_google_wallet: Grade A
Destructive

Exchange Google OAuth code for MCP token. [write] On success, immediately replace any stored mcp_token — new token is tied to a different account.

Parameters (JSON Schema)
- code (required): Google OAuth authorization code.
- redirect_url (required): Google OAuth redirect URL used in the authorization request.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent write operation. The description adds valuable context beyond annotations: it specifies that success 'immediately replace[s] any stored mcp_token' and clarifies the account-binding consequence ('new token is tied to a different account'). This provides important behavioral details about token replacement and account switching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly focused sentences with zero waste. The first states the core purpose, the second explains the critical behavioral consequence. Every word serves a clear function, and the most important information (token replacement) is front-loaded in the second sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an authentication tool with comprehensive annotations and full schema coverage, the description provides sufficient context about the token exchange process and its consequences. The main gap is the lack of output schema, so the agent doesn't know what the response format looks like, but the description covers the essential behavioral impact well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters (code and redirect_url). The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline expectation without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Exchange Google OAuth code for MCP token') and resource (authentication token). It distinguishes itself from siblings like dex_auth_logout (which ends sessions) and dex_auth_google_login_start/poll (which initiate OAuth flows) by focusing on the token exchange step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (after receiving an OAuth code from Google) but doesn't explicitly state when to use this tool versus alternatives like dex_auth_login_gate_wallet or dex_auth_gate_login_poll. It provides clear behavioral consequences ('immediately replace any stored mcp_token') which informs usage timing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_auth_logout: Grade A
Destructive

Revoke the MCP session identified by mcp_token.

Parameters (JSON Schema)
- mcp_token (required): MCP session token to revoke.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, readOnlyHint=false, and idempotentHint=false, which the description aligns with by implying a destructive action ('revoke'). It adds context by specifying the session token parameter, though it does not detail side effects like session termination consequences or error handling. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence that front-loads the key action ('revoke') and resource, with no unnecessary words. It efficiently conveys the tool's purpose without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple destructive operation), annotations cover safety and behavior well, and the schema fully documents the single parameter. The description provides enough context for basic use, though it lacks output details (no output schema) and could mention effects like session invalidation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'mcp_token' fully documented in the schema as 'MCP session token to revoke'. The description repeats this information without adding extra meaning (e.g., token format or source), so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('revoke') and the target resource ('MCP session identified by mcp_token'), distinguishing it from sibling tools that handle authentication (e.g., login, poll) or other operations. It uses precise language that directly communicates the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for ending an MCP session, but it does not explicitly state when to use this tool versus alternatives (e.g., other logout methods or session management tools) or provide context about prerequisites. It offers basic guidance but lacks detailed exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_chain_config (A)
Destructive

Returns chain config (networkKey, accountKey, chainID). Use the returned chain value in all subsequent transfer/sign/broadcast tools.

Parameters (JSON Schema)
- chain (required): Chain name or alias (matching is case-insensitive). Use the returned `chain` value for all subsequent tools on that network.
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403.
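The prerequisite workflow the description mandates can be sketched as MCP `tools/call` request envelopes. The JSON-RPC wrapper shape follows the MCP specification; the `call_tool` helper, the placeholder token, and the bare `chain`-only arguments for the follow-up call are illustrative assumptions, not part of this server's documented API:

```python
import json

def call_tool(name: str, arguments: dict, request_id: int) -> str:
    # Build an MCP tools/call envelope (JSON-RPC 2.0, per the MCP spec).
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# 1) Resolve the canonical chain key; matching is case-insensitive,
#    so an alias like "ETH" is acceptable input.
req = call_tool("dex_chain_config",
                {"chain": "ETH", "mcp_token": "<token-from-login>"}, 1)

# 2) The returned `chain` value (hypothetically "eth" here) is what
#    subsequent transfer/sign/broadcast tools expect.
canonical_chain = "eth"  # placeholder for the server's response
follow_up = call_tool("dex_tx_transfer_preview",
                      {"chain": canonical_chain}, 2)
```

The point of the two-step shape is that later calls reuse the server's canonical value, not the alias the caller happened to send.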
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description says 'Returns chain config', which implies a read operation. However, the description adds valuable workflow context ('Use the returned chain value in all subsequent tools') that annotations don't provide. The partial contradiction lowers the score from what could be a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first states the purpose, the second provides crucial usage guidance. Every word earns its place, and the most important information (what it returns and how to use it) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a configuration retrieval tool with good annotations and full schema coverage, the description provides excellent workflow context. The main gap is the lack of output schema, but the description specifies what values are returned (networkKey, accountKey, chainID), which partially compensates. The authentication requirement is clearly documented in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description doesn't add parameter details beyond what's in the schema, but it reinforces the importance of the 'chain' parameter by stating 'Use the returned chain value for all subsequent tools,' which provides workflow context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns chain config (networkKey, accountKey, chainID)' - a specific verb ('Returns') with precise resources listed. It distinguishes itself from sibling tools by focusing on configuration retrieval rather than authentication, trading, or wallet operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use the returned chain value in all subsequent transfer/sign/broadcast tools.' This tells the agent exactly when to use this tool (as a prerequisite for other operations) and how to use its output, creating clear workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_market_get_kline (A)
Destructive

OHLCV candles by period and window. Volume aggregates→get_tx_stats. Pool events→get_pair_liquidity.

Parameters (JSON Schema)
- chain (required): Chain name. Supported: bsc, base, eth, solana, arbitrum, polygon, avalanche, fantom, zksync, linea, optimism, sui, blast, tron, merlin, world, eni, bera, gatelayer, ton
- token_address (required): Token contract address
- period (required): K-line period: seconds (e.g. 300, 3600) or 1m, 5m, 1h, 4h, 1d
- limit (optional): Max number of candles (default 100, max 1000)
- start_time (optional): Start time (unix seconds); default: 100 periods before end_time
- end_time (optional): End time (unix seconds); default: now
- pair_address (optional): Optional pair address
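The window defaults interact: an omitted `start_time` is derived from `end_time` and the period length. A minimal sketch of an argument builder that mirrors those documented defaults (the helper itself and the token placeholder are hypothetical):

```python
import time

# Shorthand periods from the schema mapped to seconds; the schema also
# accepts raw second counts (e.g. 300, 3600) for `period`.
PERIOD_SECONDS = {"1m": 60, "5m": 300, "1h": 3600, "4h": 14400, "1d": 86400}

def kline_args(chain, token_address, period="1h", limit=100, end_time=None):
    # Mirror the documented defaults: end_time=now, start_time=100
    # periods before end_time, limit capped at 1000.
    seconds = PERIOD_SECONDS[period] if period in PERIOD_SECONDS else int(period)
    end_time = int(end_time) if end_time is not None else int(time.time())
    return {
        "chain": chain,
        "token_address": token_address,
        "period": period,
        "limit": min(limit, 1000),
        "end_time": end_time,
        "start_time": end_time - 100 * seconds,
    }

# 100 five-minute candles ending at a fixed timestamp.
args = kline_args("bsc", "0xTOKEN_PLACEHOLDER", period="5m",
                  end_time=1735689600)
```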
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it clarifies what data is returned (OHLCV candles) and distinguishes it from other data types. While annotations indicate destructiveHint=true (potentially confusing for a data retrieval tool), the description doesn't contradict them but provides useful behavioral context about data scope and alternatives.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: the first phrase states the core purpose, followed by two clear alternative recommendations. Every sentence earns its place with zero wasted words, making it easy for an agent to quickly understand the tool's scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema, destructiveHint annotation), the description provides good contextual completeness. It clearly defines the tool's scope and alternatives, though it doesn't explain the destructiveHint=true annotation or provide details about return format. For a data retrieval tool with good sibling differentiation, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, mentioning 'period and window' which aligns with the schema's period/start_time/end_time parameters. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs and resources: 'OHLCV candles by period and window' indicates it retrieves price/volume data for a specific timeframe. It distinguishes from siblings by explicitly mentioning alternatives: 'Volume aggregates→get_tx_stats. Pool events→get_pair_liquidity.' This prevents confusion with related market data tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'Volume aggregates→get_tx_stats. Pool events→get_pair_liquidity.' This clearly delineates the scope (OHLCV candles) from volume statistics and liquidity events, helping the agent select the correct tool for different data needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_market_get_pair_liquidity (B)
Destructive

AMM pool deposit and withdrawal events. Candlesticks→get_kline. Volume stats→get_tx_stats.

Parameters (JSON Schema)
- chain (required): Chain name. Supported: bsc, base, eth, solana, arbitrum, polygon, avalanche, fantom, zksync, linea, optimism, sui, blast, tron, merlin, world, eni, bera, gatelayer, ton
- token_address (required): Token contract address
- pair_address (optional): Optional pair address
- page_size (optional): Page size (default 15, max 15)
- page_index (optional): Page index (default 1)
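Because `page_size` is capped at 15, retrieving deeper event history means walking `page_index`. A sketch of that pagination (the generator and the mint placeholder are hypothetical, not part of the server):

```python
def liquidity_pages(chain, token_address, pages=3):
    # Yield argument dicts for successive dex_market_get_pair_liquidity
    # calls; the schema caps page_size at 15, so history is paged.
    for page_index in range(1, pages + 1):
        yield {
            "chain": chain,
            "token_address": token_address,
            "page_size": 15,
            "page_index": page_index,
        }

calls = list(liquidity_pages("solana", "MINT_PLACEHOLDER"))
```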
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a non-read-only, potentially mutating operation, but the description doesn't clarify what gets destroyed or modified. It adds some context by referencing events and alternatives, but doesn't disclose rate limits, auth needs, or specific behavioral traits beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with three short phrases, front-loaded with the main purpose. Every sentence earns its place by stating the tool's focus and directing to alternatives, with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, destructive hint, no output schema), the description is minimal. It covers basic purpose and alternatives but lacks details on return values, error handling, or behavioral implications. With annotations providing some safety context, it's adequate but has clear gaps for a tool with potential destructive behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional meaning about parameters beyond implying token_address is central for events. Baseline 3 is appropriate as the schema carries the burden, but the description doesn't compensate with extra insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'AMM pool deposit and withdrawal events' states a purpose but is vague—it doesn't specify if this tool retrieves, logs, or processes these events, and it doesn't clearly distinguish from siblings like dex_market_get_kline or dex_market_get_tx_stats. The mention of alternatives (Candlesticks→get_kline. Volume stats→get_tx_stats) hints at differentiation but doesn't explicitly define this tool's unique function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by naming alternatives (get_kline for candlesticks, get_tx_stats for volume stats), implying this tool is for deposit/withdrawal events. However, it lacks explicit when-not-to-use guidance or prerequisites, such as when token_address is required versus optional pair_address.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_market_get_tx_stats (A)
Destructive

Per-interval buy/sell volume and tx counts (5m–24h). Candlesticks→get_kline. Pool events→get_pair_liquidity.

Parameters (JSON Schema)
- chain (required): Chain name. Supported: bsc, base, eth, solana, arbitrum, polygon, avalanche, fantom, zksync, linea, optimism, sui, blast, tron, merlin, world, eni, bera, gatelayer, ton
- token_address (required): Token contract address
- pair_address (optional): Optional pair address
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying the time interval range (5m–24h) and clarifying what data is returned (buy/sell volume and transaction counts). While annotations provide safety information (destructiveHint: true, readOnlyHint: false), the description adds operational context about the tool's scope and output format, though it doesn't mention rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured with just two sentences. The first sentence states the core purpose, and the second provides clear differentiation from sibling tools. Every word serves a purpose with zero wasted information, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description provides good contextual completeness. It clearly states what the tool does, differentiates it from alternatives, and specifies the data returned. However, without an output schema, it could benefit from more detail about the structure of returned statistics, though the annotations provide important safety context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all parameters. The description doesn't add any parameter-specific information beyond what's in the schema. It doesn't explain relationships between parameters or provide usage examples, so it meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving 'Per-interval buy/sell volume and tx counts' with specific time ranges (5m–24h). It distinguishes from sibling tools by explicitly naming alternatives: 'Candlesticks→get_kline. Pool events→get_pair_liquidity.' The tool name supplies the verb (get) while the description supplies the resource (tx stats) and scope (time intervals), effectively differentiating it from related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines by stating when NOT to use this tool and naming specific alternatives: 'Candlesticks→get_kline. Pool events→get_pair_liquidity.' This gives clear guidance on tool selection, indicating this tool is for transaction statistics while other tools handle different types of market data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_rpc_call (B)
Destructive

Proxy a JSON-RPC call to a blockchain node.

Parameters (JSON Schema)
- chain (required): Chain name (e.g., ETH, BSC, SOL, ARB, POLYGON)
- method (required): RPC method name (e.g., eth_blockNumber, eth_getBalance, getLatestBlockhash)
- params (optional): RPC method parameters as JSON array
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403.
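Because the tool is a thin proxy, `params` carries whatever positional array the underlying node method expects, and the method namespace differs per chain family. A hedged sketch of argument construction (the helper and the zero address are illustrative, not part of this server's API):

```python
def rpc_call_args(chain, method, params=None, mcp_token=None):
    # Arguments for dex_rpc_call; `params` is forwarded verbatim to the
    # node, so its shape is method-specific.
    args = {"chain": chain, "method": method, "params": params or []}
    if mcp_token is not None:  # omitting the token after login yields a 403
        args["mcp_token"] = mcp_token
    return args

# EVM family: positional params, e.g. balance at the latest block.
evm = rpc_call_args("ETH", "eth_getBalance",
                    ["0x0000000000000000000000000000000000000000", "latest"])

# Solana family: different method names, often no params at all.
sol = rpc_call_args("SOL", "getLatestBlockhash")
```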
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds minimal behavioral context beyond annotations. Annotations already indicate this is a non-readOnly, openWorld, non-idempotent, destructive operation. The description only states it proxies calls, which aligns with annotations but doesn't elaborate on rate limits, error handling, or specific blockchain behaviors. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core functionality, making it easy to understand at a glance, and every word serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of proxying RPC calls with destructive potential and no output schema, the description is minimally adequate. It covers the basic purpose but lacks details on return formats, error cases, or chain-specific behaviors. With annotations providing safety hints and schema covering parameters, it's functional but could be more informative for such a powerful tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all four parameters (chain, method, params, mcp_token). The description adds no additional parameter semantics beyond what's in the schema, such as example chains or methods beyond those listed, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function as 'Proxy a JSON-RPC call to a blockchain node,' specifying both the action (proxy) and target resource (blockchain node). It distinguishes itself from sibling tools that handle authentication, market data, token operations, and transactions rather than direct RPC calls. However, it doesn't explicitly differentiate from all siblings in a single statement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it's clear this is for JSON-RPC calls, there's no mention of when to choose this over other blockchain interaction tools in the sibling list, nor any prerequisites beyond what's implied in the parameter descriptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_get_coin_info (A)
Destructive

Price, market snapshot, metadata, optional top holders. Security audit→get_risk_info.

Parameters (JSON Schema)
- chain (required): Blockchain network
- address (required): Token contract address
- include_holders (optional): Include top holders (true/false, default true)
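Since holders are included by default, callers that only need price or metadata can opt out, and audit questions belong to the sibling tool. A minimal sketch (the helper and the address placeholder are hypothetical):

```python
def coin_info_args(chain, address, include_holders=True):
    # Arguments for dex_token_get_coin_info; holders default to included.
    # For honeypot/tax/ownership checks use dex_token_get_risk_info.
    return {"chain": chain, "address": address,
            "include_holders": include_holders}

args = coin_info_args("base", "0xTOKEN_PLACEHOLDER", include_holders=False)
```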
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description implies a read-only data retrieval function, creating a contradiction. The description adds minimal behavioral context beyond annotations, such as optional top holders, but fails to clarify the destructive nature or other traits like idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, using a compact fragment to list the returned data points and point to a sibling tool. It is front-loaded with key information and has no wasted words, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity with destructive annotations and no output schema, the description is incomplete. It covers basic data retrieval but lacks details on behavioral implications, response format, or error handling, leaving gaps for an AI agent to understand full usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters like 'chain' and 'address'. The description mentions 'optional top holders', which aligns with the 'include_holders' parameter but doesn't add significant meaning beyond the schema, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves price, market snapshot, metadata, and optional top holders for a token, which is specific and actionable. However, it doesn't explicitly distinguish itself from sibling tools like 'dex_token_get_risk_info' beyond redirecting security audits to that tool, leaving some ambiguity in differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by specifying what data is retrieved and referencing 'get_risk_info' for security audits, which helps guide when to use this tool. However, it lacks explicit exclusions or alternatives among siblings like 'dex_token_get_coins_range_by_created_at', reducing the score slightly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_get_coins_range_by_created_at (B)
Destructive

Tokens launched within a given time window; chain optional for all networks.

Parameters (JSON Schema)
- start (required): Start of creation time range (RFC3339, e.g. 2025-01-01T00:00:00Z)
- end (required): End of creation time range (RFC3339, e.g. 2025-01-02T00:00:00Z)
- chain (optional): Optional chain filter; omit for all chains
- limit (optional): Max number of tokens (1-100, default 20)
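The RFC3339 requirement and the 1-100 limit range are easy to get wrong; a sketch of a conforming argument builder (the helper is hypothetical, and the timestamp format follows the schema's examples):

```python
from datetime import datetime, timedelta, timezone

RFC3339 = "%Y-%m-%dT%H:%M:%SZ"  # UTC, matching the schema examples

def created_at_range_args(hours_back=24, chain=None, limit=20):
    # Arguments for dex_token_get_coins_range_by_created_at; start/end
    # must be RFC3339, and omitting `chain` queries all networks.
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    args = {"start": start.strftime(RFC3339),
            "end": end.strftime(RFC3339),
            "limit": min(max(limit, 1), 100)}  # clamp to schema range 1-100
    if chain is not None:
        args["chain"] = chain
    return args

args = created_at_range_args(hours_back=6, chain="solana")
```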
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent, open-world operation that is not read-only. The description doesn't contradict these annotations and adds context by specifying the time-based filtering and optional chain parameter. However, it doesn't elaborate on what 'destructive' means in this context or potential side effects beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that clearly states the tool's function and key optional parameter. It's front-loaded with the main purpose and includes essential qualification without unnecessary elaboration. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, 100% schema coverage, and comprehensive annotations, the description is adequate but minimal. It covers the basic purpose but lacks output format explanation (no output schema provided), doesn't address error conditions, and doesn't provide usage examples. Given the complexity and missing output schema, more context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal value beyond the schema by mentioning the time window and optional chain filter, but doesn't provide additional semantic context about parameter interactions or usage patterns. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool retrieves tokens launched within a time window with optional chain filtering, which is a clear purpose. However, it doesn't differentiate from sibling tools like dex_token_get_coin_info or dex_token_ranking, which also retrieve token information. The description is specific about the filtering criteria but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, compare to sibling tools, or specify scenarios where this tool is preferred over others like dex_token_list_swap_tokens or dex_token_get_risk_info. Usage context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_get_risk_info (B)
Destructive

Contract security audit (honeypot, tax, ownership flags). Price and market stats→get_coin_info.

Parameters (JSON Schema)
- chain (required): Chain name. Supported: bsc, polygon, avalanche, arbitrum, optimism, base, fantom, linea, zksync, world, gatelayer, merlin, blast, eni, bera, solana, ton, sui, tron
- address (required): Token contract address
- lan (optional): Language code (e.g. en, zh)
- ignore (optional): Set to true to hide empty risk items
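A minimal sketch showing the `ignore` flag's role in trimming audit output (the helper and the address placeholder are hypothetical):

```python
def risk_info_args(chain, address, lan="en", hide_empty=True):
    # Arguments for dex_token_get_risk_info; ignore=True hides risk
    # items with no findings, keeping the audit response compact.
    return {"chain": chain, "address": address,
            "lan": lan, "ignore": hide_empty}

args = risk_info_args("bsc", "0xTOKEN_PLACEHOLDER")
```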
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent, open-world operation with read/write capabilities. The description adds valuable context about what the tool actually does (security audit with specific risk flags) that isn't covered by annotations. It doesn't contradict annotations, and while it doesn't detail rate limits or auth needs, it provides meaningful behavioral insight beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and to the point, consisting of two concise phrases. The first phrase states the core function, and the second provides a minimal usage note. There's no wasted verbiage, though the structure could be slightly more polished with clearer separation between purpose and guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a security audit tool with destructive annotations and no output schema, the description is adequate but incomplete. It explains what the tool does but doesn't cover response format, error conditions, or detailed behavioral implications. The annotations help, but for a tool with destructiveHint: true, more context about what 'destructive' means in this context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 4 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3. It doesn't compensate for gaps because there are none, but also doesn't enhance understanding of parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool performs a 'contract security audit' with specific risk flags (honeypot, tax, ownership), which is a clear purpose. However, it doesn't distinguish this from sibling tools like 'dex_token_get_coin_info' beyond mentioning that tool handles 'price and market stats' - this differentiation is present but not explicit about when to choose one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by mentioning 'price and market stats→get_coin_info,' which implies an alternative tool for different data. However, it lacks explicit when-to-use criteria, prerequisites, or context for choosing this security audit tool over other token-related tools in the extensive sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_list_cross_chain_bridge_tokens (B)
Destructive

Receivable assets on target chain for a source token. Same-chain trading→list_swap_tokens.

Parameters (JSON Schema)
chain (required): Target chain identifier, e.g. bsc, arbitrum, polygon
search (optional): Search keyword (token symbol or contract address) to filter results
wallet (optional): User wallet address(es), comma-separated. Used to show balances and favorites.
account_id (optional): User account ID. Only used when wallet is empty, to look up wallet addresses.
search_auth (optional): When searching, only return certified tokens. Only effective when search is non-empty.
source_chain (required): Source chain identifier, e.g. eth, bsc, solana
source_address (required): Source token contract address on the source chain
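Three of the seven fields are required; the rest pass through only when provided. A hedged sketch of client-side assembly (the helper and its strict allow-list are assumptions, not server behavior):

```python
def build_bridge_token_query(source_chain, source_address, target_chain, **optional):
    """Assemble arguments for listing receivable assets on the target chain.

    Required fields per the schema above; optional keys are forwarded
    only when supplied, and unknown keys are rejected early."""
    for name, value in {"source_chain": source_chain,
                        "source_address": source_address,
                        "chain": target_chain}.items():
        if not value:
            raise ValueError(f"{name} is required")
    allowed = {"search", "wallet", "account_id", "search_auth"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {"source_chain": source_chain,
            "source_address": source_address,
            "chain": target_chain,
            **optional}
```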
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide key hints: readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these but adds minimal behavioral context beyond the purpose—it doesn't explain what 'destructive' entails (e.g., data mutation or side effects), rate limits, or auth requirements. With annotations covering safety and idempotency, the bar is lower, but the description could add more operational insights. Thus, a 3—it doesn't detract but adds limited value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—just two short phrases that are front-loaded with the core purpose and usage hint. Every word earns its place without redundancy or fluff, making it easy to parse quickly. This is a model of efficiency, warranting a 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema, and annotations like destructiveHint=true), the description is minimal. It covers the purpose and a usage hint but lacks details on output format, error handling, or behavioral nuances implied by the annotations. With no output schema, more context on return values would be helpful. It's adequate for basic understanding but incomplete for full operational clarity, scoring a 3.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with all parameters well-documented in the input schema (e.g., source_chain, chain, search, wallet). The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining interactions between parameters. Given the high coverage, the baseline is 3, as the schema does the heavy lifting and the description doesn't compensate with extra semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the purpose as 'Receivable assets on target chain for a source token,' which clarifies it lists tokens available for bridging from a source token. However, it's somewhat vague about the exact operation (e.g., whether it's a query or fetch), and while it mentions a sibling tool ('Same-chain trading→list_swap_tokens'), it doesn't explicitly differentiate this tool's unique scope beyond implying cross-chain vs. same-chain. This lands it at a 3—clear but not fully specific or distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by stating 'Same-chain trading→list_swap_tokens,' which implicitly guides usage: use this tool for cross-chain bridging scenarios and the sibling for same-chain trading. However, it lacks explicit when-not-to-use criteria or alternative tools beyond this one comparison, so it doesn't reach a full 5. This is a solid 4—good contextual guidance but not exhaustive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_list_swap_tokens (B)
Destructive

Tradable assets on a chain; tag selects favorites or recommendations. Bridge targets→list_cross_chain_bridge_tokens.

Parameters (JSON Schema)
tag (optional): List type: empty for the normal token list; 'favorite' for user favorites (requires wallet/account_id); 'recommend' for system recommended tokens.
chain (optional): Chain identifier, e.g. eth, bsc, solana, arbitrum. Omit to query all chains.
search (optional): Search keyword (token symbol or contract address)
wallet (optional): User wallet address(es), comma-separated. Used for favorites and balance display.
web3_key (optional): Chain web3_key identifier. Only used when chain is empty, auto-maps to chain.
account_id (optional): User account ID. Only used when wallet is empty.
search_auth (optional): When searching, only return certified tokens. Only effective when search is non-empty.
ignore_bridge (optional): Ignore cross-chain bridge restrictions, only filter by chain. Default false.
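The tag parameter interacts with identity fields: 'favorite' needs a wallet or account_id, and account_id is only consulted when wallet is empty. A minimal sketch of that interaction, with the helper name assumed:

```python
def build_swap_token_query(tag="", chain=None, wallet=None, account_id=None):
    """Encode the tag/identity rules from the schema above."""
    if tag not in ("", "favorite", "recommend"):
        raise ValueError("tag must be empty, 'favorite', or 'recommend'")
    if tag == "favorite" and not (wallet or account_id):
        raise ValueError("tag='favorite' requires wallet or account_id")
    args = {}
    if tag:
        args["tag"] = tag
    if chain:  # omit to query all chains
        args["chain"] = chain
    if wallet:
        args["wallet"] = wallet
    elif account_id:  # account_id is only used when wallet is empty
        args["account_id"] = account_id
    return args
```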
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description doesn't explain what gets destroyed or modified, leaving a gap. It adds some context about tag usage and bridge alternatives, but doesn't fully disclose behavioral traits like rate limits or auth needs beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, with two sentences that efficiently convey the core purpose and an alternative tool. There's no wasted text, but it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, destructive hint, no output schema), the description is moderately complete. It covers basic usage and alternatives but lacks details on return values, error handling, or specific behavioral impacts, leaving gaps for an AI agent to fully understand the tool's operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 8 parameters. The description adds minimal value by implying tag usage for favorites/recommendations and mentioning bridge targets, but doesn't provide additional syntax or format details beyond the schema. Baseline 3 is appropriate as the schema handles most of the parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool lists 'tradable assets on a chain' with tag filtering, which clarifies the verb (list) and resource (tokens). However, it doesn't distinguish from siblings like 'dex_token_list_cross_chain_bridge_tokens' beyond a brief mention, making the purpose somewhat vague in context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying that 'tag selects favorites or recommendations' and mentions an alternative ('Bridge targets→list_cross_chain_bridge_tokens'), giving explicit guidance on when to use this tool versus a sibling. It lacks explicit exclusions or detailed prerequisites, but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_token_ranking (A)
Destructive

Top movers by 24h price change; direction selects gainers vs losers. Single token→get_coin_info.

Parameters (JSON Schema)
chain (optional): Chain filter
top_n (optional): Number of tokens (1-100, default 10)
direction (required): 'desc' for top gainers, 'asc' for top losers
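The documented constraints (direction in {'desc', 'asc'}, top_n within 1-100) are cheap to enforce before the call. A minimal sketch, helper name assumed:

```python
def build_ranking_args(direction, top_n=10, chain=None):
    """Validate ranking arguments against the schema's stated bounds."""
    if direction not in ("desc", "asc"):
        raise ValueError("direction must be 'desc' (top gainers) or 'asc' (top losers)")
    if not 1 <= top_n <= 100:
        raise ValueError("top_n must be between 1 and 100")
    args = {"direction": direction, "top_n": top_n}
    if chain:  # the chain filter is optional
        args["chain"] = chain
    return args
```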
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide key behavioral hints: readOnlyHint=false, destructiveHint=true, openWorldHint=true, idempotentHint=false. The description doesn't contradict these but adds minimal context beyond them—it mentions ranking functionality but doesn't explain what 'destructive' means here (e.g., data mutation or side effects) or address rate limits/auth needs. With annotations covering safety, it adds some value but lacks rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: two sentences with zero waste. The first sentence states the core purpose and key parameter usage; the second provides a clear alternative. Every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (ranking with filtering), rich annotations, and 100% schema coverage, the description is adequate but has gaps. It lacks output details (no output schema), doesn't explain destructive behavior hinted in annotations, and doesn't fully contextualize among siblings. It meets minimum viability but could be more complete for a tool with destructive potential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all three parameters (chain, top_n, direction). The description adds marginal value by implying the tool's purpose relates to these parameters (e.g., 'direction selects gainers vs losers' aligns with schema), but doesn't provide additional syntax, format, or usage details beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Top movers by 24h price change; direction selects gainers vs losers.' It specifies the verb ('Top movers') and resource (tokens by price change), but doesn't explicitly differentiate from sibling tools like 'dex_token_get_coin_info' beyond mentioning it as an alternative for single tokens.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for ranking tokens by 24-hour price change, with direction to select gainers or losers. It explicitly mentions an alternative ('Single token→get_coin_info') for single-token queries, but doesn't specify when not to use it or compare with other siblings like 'dex_token_list_swap_tokens'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_approve_preview (A)
Destructive

Build ERC20/SPL approve or revoke tx (does not broadcast). Sign with dex_wallet_sign_transaction after user confirmation.

Parameters (JSON Schema)
chain (optional): Chain name (default ETH). Based on backend chain configuration.
nonce (optional): EVM: transaction nonce. Omit to auto-fetch current nonce.
owner (required): Owner address (your wallet address)
action (optional): Action type: 'approve' (default) or 'revoke' (revoke all approvals, Solana only)
amount (required): Approval amount (human-readable). Set to '0' to revoke approval (EVM), set to 'unlimited' or 'max' for maximum allowance
spender (required): Spender address (EVM: spender contract address; Solana: delegate account address)
mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
token_mint (optional): Solana: SPL Token mint address (required for Solana)
token_contract (optional): EVM: ERC20 contract address (required for EVM)
token_decimals (required): Token decimals (e.g. 6, 18, 9)
max_fee_per_gas (optional): EVM: override max fee per gas (wei)
max_priority_fee_per_gas (optional): EVM: override max priority fee per gas (wei)
priority_fee_micro_lamports (optional): Solana: priority fee (micro-lamports per compute unit)
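Because amount is human-readable, a client reasoning about the resulting allowance must combine it with token_decimals and handle the special values. A sketch of that conversion, assuming the conventional 2^256-1 "unlimited" allowance on EVM chains (the server's actual encoding of 'max' is not documented here):

```python
from decimal import Decimal

# Assumption: "unlimited"/"max" maps to the customary max uint256 allowance.
MAX_UINT256 = 2**256 - 1

def approval_amount_base_units(amount, token_decimals):
    """Convert the human-readable amount field into token base units.

    '0' revokes an EVM approval; 'unlimited'/'max' request the maximum
    allowance."""
    if amount.lower() in ("unlimited", "max"):
        return MAX_UINT256
    units = Decimal(amount) * (Decimal(10) ** token_decimals)
    if units != units.to_integral_value():
        raise ValueError("amount has more precision than token_decimals allows")
    return int(units)
```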
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by implying a transaction-building operation that could lead to financial changes. The description adds valuable context beyond annotations: it specifies the transaction is not broadcast (clarifying it's a preview step) and mentions the need for user confirmation before signing, which helps the agent understand the workflow and safety considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first sentence states the core purpose and key constraint (no broadcast), and the second provides essential next-step guidance. It is front-loaded with critical information and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (13 parameters, destructive operation) and lack of output schema, the description is mostly complete: it explains the purpose, usage flow, and key behavioral traits. However, it could improve by briefly mentioning the output (e.g., an unsigned transaction object) or error handling, which would help the agent anticipate results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 13 parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining interactions between parameters like 'action' and 'amount'. However, it meets the baseline of 3 since the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Build ERC20/SPL approve or revoke tx') and resource (transaction), distinguishing it from siblings like dex_tx_swap_preview or dex_tx_transfer_preview by focusing on token approval operations. It explicitly mentions the transaction is not broadcast, which further clarifies its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('does not broadcast') and what to do after ('Sign with dex_wallet_sign_transaction after user confirmation'), naming the alternative tool. It also implies usage context for building approval transactions before signing, with no misleading elements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_detail (A)
Destructive

Fetch transaction detail by on-chain hash. → swap_detail for swap order detail by order ID.

Parameters (JSON Schema)
hash_id (required): Transaction hash ID
mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
from_wallet (optional): Sender wallet address filter
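Only hash_id is required, but the session token must ride along after login or the server answers 403. A trivial sketch (helper name assumed):

```python
def build_tx_detail_args(hash_id, mcp_token=None, from_wallet=None):
    """Assemble arguments for a transaction-detail lookup; mcp_token
    should accompany every call once the user is logged in."""
    if not hash_id:
        raise ValueError("hash_id is required")
    args = {"hash_id": hash_id}
    if mcp_token:
        args["mcp_token"] = mcp_token
    if from_wallet:
        args["from_wallet"] = from_wallet
    return args
```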
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent, non-read-only operation with open-world data. The description adds no behavioral context beyond what annotations provide (no rate limits, auth requirements beyond schema, or what 'destructive' means here). However, it doesn't contradict annotations, so it meets the baseline.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two brief sentences) with zero wasted words. It's front-loaded with the core purpose and immediately provides the sibling differentiation. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, destructive operation) and lack of output schema, the description is reasonably complete for its purpose. It clearly states what the tool does and when to use it versus alternatives, though it could benefit from more behavioral context given the destructive annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no additional parameter semantics beyond what's already in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch transaction detail') and resource ('by on-chain hash'), and explicitly distinguishes it from its sibling 'swap_detail' which handles swap orders by order ID. This provides precise differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides an alternative tool ('swap_detail for swap order detail by order ID'), giving clear guidance on when to use this tool versus its sibling. This directly addresses the need for tool selection in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_gas (A)
Destructive

Estimate on-chain gas price and gas limit for a given tx.

Parameters (JSON Schema)
to (required): Recipient address
data (optional): Transaction calldata (hex)
from (required): Sender address
chain (required): Chain name or alias; resolved via dex_chain_config before calling upstream.
value (optional): Transfer value in wei
network (optional): Network name
mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
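Note that value is expressed in wei, not in the chain's display unit, so a client usually converts before calling. A small sketch (whether the server expects value as a string or an integer is an assumption here, as is the helper itself):

```python
from decimal import Decimal

WEI_PER_ETH = 10**18  # 1 ETH = 10^18 wei

def build_gas_estimate_args(chain, from_addr, to_addr, value_eth=None, data=None):
    """Assemble a gas-estimate request, converting a display-unit
    value into wei and sanity-checking the calldata encoding."""
    args = {"chain": chain, "from": from_addr, "to": to_addr}
    if value_eth is not None:
        args["value"] = str(int(Decimal(str(value_eth)) * WEI_PER_ETH))
    if data is not None:
        if not data.startswith("0x"):
            raise ValueError("calldata must be 0x-prefixed hex")
        args["data"] = data
    return args
```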
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these but adds valuable context: it clarifies that this is an 'estimate' operation, implying it doesn't execute transactions, which aligns with the destructive hint (likely referring to resource usage or side effects). However, it misses details like rate limits, authentication requirements (implied by mcp_token in schema but not stated in description), or how estimates are computed (e.g., based on current network conditions).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence: 'Estimate on-chain gas price and gas limit for a given tx.' It's highly concise with zero wasted words, directly stating the tool's function. Every part of the sentence earns its place by specifying the action, target, and context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, destructive hint, no output schema), the description is minimally adequate. It states the purpose but lacks context on usage, behavioral nuances (e.g., authentication needs, error handling), and output details. With annotations covering safety and idempotency, and schema covering parameters, the description provides a basic overview but doesn't fully compensate for the absence of an output schema or detailed guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 7 parameters (e.g., 'chain' as 'Chain name or alias; resolved via dex_chain_config before calling upstream'). The description adds no additional parameter semantics beyond implying the tool uses these inputs for gas estimation. This meets the baseline of 3, as the schema handles the heavy lifting without description enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Estimate on-chain gas price and gas limit for a given tx.' It specifies the verb ('estimate') and resource ('gas price and gas limit'), and distinguishes it from sibling tools like dex_tx_send_raw_transaction or dex_tx_detail, which perform different operations. However, it doesn't explicitly differentiate from all siblings, such as dex_tx_approve_preview, which might also involve gas estimation in some contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing authentication via mcp_token), compare it to similar tools like dex_tx_swap_preview (which might include gas estimates), or specify scenarios where estimation is required (e.g., before submitting a transaction). This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_get_sol_unsigned (A)
Destructive

Solana only: build fresh native SOL unsigned tx with latest blockhash; avoids Blockhash not found. Sign immediately. → transfer_preview for full summary first.

Parameters (JSON Schema)
to (required): Recipient address (base58)
from (required): Sender address (base58)
amount (required): Amount in SOL, e.g. 0.0001
mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
priority_fee_micro_lamports (optional): Priority fee (micro-lamports per compute unit). Try 1000-10000 if broadcast returns 'replacement transaction underpriced'.
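The priority_fee_micro_lamports hint suggests retrying within 1000-10000 when broadcast fails with 'replacement transaction underpriced'. One possible escalation policy, entirely illustrative (the doubling factor and give-up behavior are assumptions):

```python
def escalate_priority_fee(current=None, floor=1000, ceiling=10000, factor=2):
    """Pick the next priority fee (micro-lamports per compute unit)
    to retry with, staying inside the suggested 1000-10000 range."""
    if current is None:
        return floor  # first retry starts at the suggested floor
    nxt = min(current * factor, ceiling)
    if nxt <= current:
        raise RuntimeError("priority fee already at ceiling; give up or wait")
    return nxt
```

Because the unsigned transaction embeds the latest blockhash, each retry should rebuild the transaction via this tool rather than re-sign a stale one.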
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent write operation (readOnlyHint: false, destructiveHint: true). The description adds valuable context beyond annotations: it specifies 'Solana only,' mentions 'avoids Blockhash not found' (a technical constraint), and emphasizes 'Sign immediately' (a critical behavioral requirement). However, it doesn't detail rate limits or authentication specifics beyond what's in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: it packs essential information into two sentences with zero wasted words. Every phrase ('Solana only,' 'build fresh...', 'avoids Blockhash not found,' 'Sign immediately,' '→ transfer_preview...') serves a clear purpose, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive transaction-building tool with no output schema, the description provides strong context: it clarifies the tool's scope (Solana-only, unsigned tx), warns about immediate signing, and references an alternative tool. However, it doesn't explain what the output looks like (e.g., raw transaction bytes) or error conditions, leaving some gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't explain 'from'/'to' address formats or 'amount' precision). The baseline of 3 is appropriate since the schema carries the parameter documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('build fresh native SOL unsigned tx with latest blockhash'), the resource (SOL transaction), and distinguishes it from siblings by specifying 'Solana only' and contrasting with 'transfer_preview for full summary first.' This provides precise verb+resource differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool ('Sign immediately') and when to use an alternative ('→ transfer_preview for full summary first'). It also implies usage context by mentioning 'avoids Blockhash not found' and the need for immediate signing, offering clear guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_list (A)
Destructive

List wallet transaction history (transfers). → swap_history_list for swap/bridge orders. → tx_detail for single tx by hash.

Parameters (JSON Schema)

- cursor (optional): Pagination cursor
- address (optional): Wallet address filter
- end_time (optional): End time (Unix seconds or ISO8601)
- page_num (optional): Page number (1-based)
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
- page_size (optional): Page size (default 20)
- account_id (required): Account ID
- start_time (optional): Start time (Unix seconds or ISO8601)
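The start_time/end_time filters accept either Unix seconds or an ISO8601 string. A minimal client-side sketch of normalizing both forms before calling the tool (normalize_time is a hypothetical helper, not part of the server):

```python
from datetime import datetime

def normalize_time(value):
    """Normalize a start_time/end_time filter to Unix seconds.

    Accepts either Unix seconds (int) or an ISO8601 string, the two
    formats the dex_tx_list schema documents for its time filters.
    """
    if isinstance(value, int):
        return value
    # fromisoformat() does not accept a trailing 'Z' before Python 3.11,
    # so rewrite it as an explicit UTC offset first.
    dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    return int(dt.timestamp())
```

Passing already-normalized Unix seconds is a no-op, so a caller can apply this unconditionally.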
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description adds no behavioral context beyond what annotations already declare (e.g., no mention of authentication requirements, rate limits, or destructive consequences). However, it doesn't contradict the annotations, so it meets the lower bar that applies when annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence with two clear alternatives) and front-loaded with the core purpose. Every element earns its place by providing essential differentiation without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, destructive hint) and lack of output schema, the description is reasonably complete for its purpose. It clearly defines scope and alternatives, though it could benefit from mentioning authentication requirements (implied by mcp_token parameter) or typical response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 8 parameters. The description adds no parameter-specific information beyond what's in the schema, which is acceptable given the high coverage. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('wallet transaction history (transfers)'), and explicitly distinguishes it from sibling tools swap_history_list and tx_detail. This provides precise differentiation and avoids ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: use dex_tx_list for transfers, swap_history_list for swap/bridge orders, and tx_detail for single transactions by hash. This gives clear context for tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_send_raw_transaction (A)
Destructive

Broadcast a signed tx to the chain. [write] Requires user confirmation after transfer preview and prior signing via dex_wallet_sign_transaction.

Parameters (JSON Schema)

- memo (optional): Memo (history_data.memo)
- chain (required): Chain name: must match dex_chain_config response `chain` (canonical). Aliases are resolved via chain config before calling upstream.
- nonce (optional): Nonce (history_data.nonce)
- address (optional): Sender wallet address (history_data.address)
- network (optional): Network name
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
- signed_tx (required): Signed transaction data. For EVM chains: hex string (0x prefix added automatically). For other chains (e.g. Solana): use chain-specific format
- account_id (optional): Account ID (for history_data)
- token_addr (optional): Token contract address (history_data.token_addr)
- token_name (optional): Token name (history_data.token_name)
- trans_time (optional): Transaction time ISO8601 (default: now)
- trans_type (optional): Transaction type (history_data.trans_type)
- action_type (optional): Action type (history_data.action_type)
- trans_balance (optional): Display amount (history_data.trans_balance)
- trans_gas_fee (optional): Gas fee display (history_data.trans_gas_fee)
- token_short_name (optional): Token symbol, e.g. USDT (history_data.token_short_name)
- trans_balance_usd (optional): USD value (history_data.trans_balance_usd)
- trans_oppo_address (optional): Counterparty address, e.g. recipient (history_data.trans_oppo_address)
- trans_min_unit_amount (optional): Amount in smallest unit (history_data.trans_min_unit_amount)
- trans_min_unit_gas_fee (optional): Gas fee in smallest unit (history_data.trans_min_unit_gas_fee)
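The signed_tx rule above (EVM chains take hex, with the 0x prefix added automatically server-side) can be mirrored client-side. A sketch; the chain names in EVM_CHAINS are assumptions, since the canonical names actually come from dex_chain_config:

```python
def normalize_signed_tx(chain: str, signed_tx: str) -> str:
    """Illustrative normalization of the signed_tx argument.

    EVM chains expect a hex string (the server adds the 0x prefix
    automatically; this applies the same rule eagerly). Non-EVM chains
    (e.g. Solana) pass through in their chain-specific format.
    """
    # Assumed chain names; the server resolves aliases via dex_chain_config.
    EVM_CHAINS = {"eth", "bsc", "polygon", "arbitrum", "base", "avalanche"}
    if chain.lower() in EVM_CHAINS and not signed_tx.startswith("0x"):
        return "0x" + signed_tx
    return signed_tx
```

Applying the prefix locally is harmless given the documented behavior, since an already-prefixed string passes through unchanged.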
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it mentions the need for user confirmation and prior signing, which are behavioral constraints not covered by the annotations (which only indicate destructive, non-idempotent, etc.). However, it doesn't detail potential side effects like network delays or confirmation times, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two short sentences) and front-loaded with the core purpose, followed by essential prerequisites. Every word earns its place, with no redundant information, making it easy for an agent to parse quickly and accurately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive, 20 parameters, no output schema), the description is reasonably complete: it covers purpose, prerequisites, and key behavioral steps. However, it lacks details on error handling, response format, or confirmation outcomes, which could be helpful for a high-stakes transaction tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 20 parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema, such as explaining the relationship between chain and signed_tx formats. It meets the baseline for high schema coverage but doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Broadcast a signed tx') and resource ('to the chain'), distinguishing it from siblings like dex_tx_swap_submit or dex_tx_transfer_preview which handle different transaction types. It precisely defines the tool's core function without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: after signing via dex_wallet_sign_transaction and user confirmation post-transfer preview. It also implies when not to use it (e.g., for unsigned transactions or other tx types handled by siblings), providing clear prerequisites and sequencing guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_checkin_preview (A)
Destructive

Returns check-in fields for a swap stage (approve or swap). Call after prepare and before terminal tx-checkin script.

Parameters (JSON Schema)

- stage (required): Preview stage: 'approve' or 'swap'
- mcp_token (required): MCP session token (required in the tool schema). After Google or Gate wallet login, read it from login_result when dex_auth_google_login_poll or dex_auth_gate_login_poll returns success. The server checks this argument first, then the HTTP Authorization header (Bearer mcp_pat_...) — often the same string configured as headers.Authorization on your MCP server in local MCP client config. If there is no token yet, run auth start + poll first. If expired, try dex_auth_refresh_token; if that fails, log in again.
- swap_session_id (required): Swap session ID returned by dex_tx_swap_prepare
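The mcp_token lookup order described above (explicit argument first, then the HTTP Authorization header) is simple to model. A sketch with hypothetical helper and dict shapes, not the server's actual code:

```python
def resolve_mcp_token(args: dict, headers: dict):
    """Mirror the documented lookup order for mcp_token:
    1. the explicit tool argument wins;
    2. otherwise fall back to an Authorization header of the form
       'Bearer mcp_pat_...'.
    Returns None when neither source has a token (caller should then
    run the auth start + poll flow first)."""
    token = args.get("mcp_token")
    if token:
        return token
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):]
    return None
```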
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds some behavioral context beyond annotations: it mentions the workflow sequence ('after prepare and before terminal tx-checkin script'), which is useful. However, annotations already provide key hints (destructiveHint: true, readOnlyHint: false, etc.), so the bar is lower. The description doesn't contradict annotations, but it could elaborate more on the destructive nature or other behavioral traits implied by the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured in two sentences. The first sentence states the purpose, and the second provides usage guidelines. Every sentence earns its place with no wasted words, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 required parameters, destructive operation) and lack of output schema, the description is moderately complete. It covers purpose and usage but doesn't explain what 'check-in fields' are or what the return values look like, which could be helpful for an agent. With annotations providing some context, it's adequate but has gaps in fully guiding the agent on what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents all parameters thoroughly. The description doesn't add any additional meaning or clarification about the parameters beyond what the schema provides. According to the rules, with high schema coverage, the baseline is 3 even with no param info in the description, which applies here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns check-in fields for a swap stage (approve or swap).' It specifies the verb ('returns'), resource ('check-in fields'), and scope ('swap stage'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'dex_tx_approve_preview' or 'dex_tx_x402_checkin_preview', which appear to serve similar preview functions, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call after prepare and before terminal tx-checkin script.' This clearly indicates when to use the tool in the workflow sequence. It also implies alternatives by specifying the stages ('approve' or 'swap'), helping distinguish use cases. No misleading information is present.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_detail (A)
Destructive

Fetch swap order detail by order ID from swap_submit. → tx_detail for detail by on-chain hash.

Parameters (JSON Schema)

- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
- tx_order_id (required): Transaction order ID from dex_tx_swap_submit
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description ('fetch') implies read-only behavior, creating a potential contradiction. However, the description adds value by specifying the data source ('from swap_submit') and contrasting with on-chain lookups, which annotations don't cover. No rate limits or auth details are mentioned, but annotations provide some behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two brief sentences with zero wasted words. It front-loads the core purpose and efficiently contrasts with an alternative tool, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (fetching order details), annotations provide some behavioral context (e.g., destructive hint), but there's no output schema to clarify return values. The description covers the purpose and hints at alternatives but lacks details on authentication requirements (implied by mcp_token) or error handling. It's minimally adequate but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters (mcp_token and tx_order_id). The description mentions 'order ID' which aligns with tx_order_id but doesn't add meaning beyond the schema's details. The baseline score of 3 is appropriate since the schema fully documents parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch swap order detail by order ID from swap_submit.' It specifies the verb ('fetch'), resource ('swap order detail'), and key identifier ('order ID'). However, it doesn't explicitly distinguish this tool from its sibling 'dex_tx_detail', which handles on-chain hash-based lookups, though the arrow notation implies a relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by referencing 'swap_submit' as the source of order IDs and contrasting with 'tx_detail for detail by on-chain hash.' This suggests when to use this tool (for order ID lookups) versus an alternative (on-chain hash lookups), but it doesn't explicitly state when-not scenarios or prerequisites like authentication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_history_list (A)
Destructive

List swap/bridge order history with pagination; includes single-chain and cross-chain. → tx_list for transfer history.

Parameters (JSON Schema)

- end_time (optional): End time filter (ISO8601 format)
- page_num (optional): Page number (0-based, default 0)
- dst_chain (optional): Destination chain ID filter. 0 or omit for all chains
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
- page_size (optional): Page size (default 20)
- src_chain (optional): Source chain ID filter (e.g. 1=ETH, 56=BSC). 0 or omit for all chains
- account_id (required): Account ID (wallet address)
- start_time (optional): Start time filter (ISO8601 format, e.g. 2025-01-01T00:00:00Z)
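Two quirks in this schema are easy to trip over: page_num here is 0-based (dex_tx_list's is 1-based), and a chain filter of 0 means "all chains". A hypothetical request builder encoding both rules (build_swap_history_params is illustrative, not a server tool):

```python
def build_swap_history_params(account_id, src_chain=0, dst_chain=0,
                              page_num=0, page_size=20):
    """Build dex_tx_swap_history_list arguments per the documented
    semantics: 0-based pagination, and zero-valued chain filters are
    omitted since 0 already means 'all chains'."""
    params = {
        "account_id": account_id,
        "page_num": page_num,   # first page is 0, not 1
        "page_size": page_size,
    }
    if src_chain:
        params["src_chain"] = src_chain
    if dst_chain:
        params["dst_chain"] = dst_chain
    return params
```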
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting a write operation, but the description describes a listing/read operation. However, the description adds valuable context about pagination and scope (single-chain vs cross-chain) that annotations don't cover. There is no outright contradiction, since a 'list' call could still trigger destructive backend operations, but the mismatch between the read-style verb and the annotations is worth noting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence plus an arrow notation for sibling distinction) with zero wasted words. It's front-loaded with the core purpose and efficiently includes both scope and alternative guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters, destructive annotations, and no output schema, the description adequately covers purpose and usage but lacks details about authentication requirements (though mcp_token is in schema), rate limits, error conditions, or return format. The annotations provide safety context but the description could better address the destructive nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 8 parameters. The description mentions pagination (implied by page_num/page_size) and chain filtering (implied by src_chain/dst_chain), but adds no additional semantic context beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('swap/bridge order history') with specific scope ('with pagination; includes single-chain and cross-chain'). It distinguishes from sibling 'tx_list' by specifying it's for transfer history, not swap/bridge orders.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool ('List swap/bridge order history') and when to use an alternative ('→ tx_list for transfer history'), giving clear guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_prepare (A)
Destructive

Stage a swap session (server-side build only); returns swap_session_id. Does not sign or submit. Call after quote confirmed by user.

Parameters (JSON Schema)

- amount (required): Human-readable amount (e.g. '0.01')
- slippage (required): Slippage as decimal (0.01=1%, 0.1=10%)
- token_in (required): Source token address. Use '-' for native coin
- mcp_token (optional): MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
- native_in (required): ⚠️ CRITICAL — Must be set correctly, wrong value causes transaction failure or fund loss. Simply check: is token_in a native/gas coin? If yes, set 1; if it's a contract token, set 0. Native coins: ETH, BNB, MATIC, AVAX, SOL — these are the chain's gas coins, native_in=1. Contract tokens: USDT, USDC, WETH, WBNB, DAI, etc. — these are ERC20/BEP20/SPL tokens, native_in=0. NOTE: BNB = native coin (native=1), WBNB = wrapped contract token (native=0). User saying 'BNB' means the native coin unless they explicitly say 'WBNB'. Same for ETH vs WETH, SOL vs WSOL. When native_in=1, token_in is auto-normalized to '-'.
- to_wallet (optional): Destination chain receiving address. ONLY required for CROSS-CHAIN swaps where chain_id_in and chain_id_out belong to different address groups (e.g. EVM→Solana). Obtain from dex_wallet_get_addresses. Omit for single-chain swaps — defaults to user_wallet.
- token_out (required): Destination token address
- account_id (required): User account ID (UUID)
- native_out (required): ⚠️ CRITICAL — Must be set correctly, wrong value causes transaction failure or fund loss. Simply check: is token_out a native/gas coin? If yes, set 1; if it's a contract token, set 0. Native coins: ETH, BNB, MATIC, AVAX, SOL — these are the chain's gas coins, native_out=1. Contract tokens: USDT, USDC, WETH, WBNB, DAI, etc. — these are ERC20/BEP20/SPL tokens, native_out=0. NOTE: BNB = native coin (native=1), WBNB = wrapped contract token (native=0). User saying 'BNB' means the native coin unless they explicitly say 'WBNB'. Same for ETH vs WETH, SOL vs WSOL. When native_out=1, token_out is auto-normalized to '-'. CROSS-CHAIN EXAMPLE: USDT(BSC) → SOL(Solana): native_in=0 (USDT is contract token), native_out=1 (SOL is native coin). CROSS-CHAIN EXAMPLE: ETH(Ethereum) → USDT(BSC): native_in=1 (ETH is native coin), native_out=0 (USDT is contract token).
- chain_id_in (required): Source chain ID (e.g. 1=ETH, 56=BSC, 501=Solana)
- user_wallet (required): User's wallet address matching the SOURCE CHAIN. MUST be chain-specific: call dex_wallet_get_addresses(account_id) to get addresses, use addresses["EVM"] for EVM chains or addresses["SOL"] for Solana (chain_id=501). NEVER use an EVM address (0x...) for Solana or a Solana address for EVM chains. For cross-chain swaps this is the FROM (source) chain address.
- chain_id_out (required): Destination chain ID. Same as chain_id_in for single-chain swap
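The native_in/native_out rule quoted above is mechanical enough to express as code. A sketch under the assumption that the schema's gas-coin list is exhaustive; native_flag and normalize_token are illustrative helpers, not server tools:

```python
# Gas coins listed in the native_in/native_out parameter docs above.
NATIVE_COINS = {"ETH", "BNB", "MATIC", "AVAX", "SOL"}

def native_flag(symbol: str) -> int:
    """Return the native_in/native_out value for a token symbol:
    gas coins get 1; contract tokens, including wrapped forms like
    WBNB/WETH/WSOL, get 0 (they are not in the gas-coin set)."""
    return 1 if symbol.upper() in NATIVE_COINS else 0

def normalize_token(symbol: str, address: str) -> str:
    """When native=1 the schema auto-normalizes the token address to '-';
    this mirrors that rule client-side."""
    return "-" if native_flag(symbol) else address
```

Note how the WBNB/WETH cases fall out for free: the wrapped symbols are simply absent from the gas-coin set.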
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it clarifies that this is a staging/preparation step without signing or submission, which complements the destructiveHint=true annotation by explaining the limited scope of the operation. However, it doesn't detail potential side effects like rate limits or auth requirements beyond what's in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, using only two sentences that directly state the tool's purpose, output, and usage context without any wasted words. Every sentence earns its place by providing critical operational guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (12 parameters, destructive operation) and lack of output schema, the description is complete enough for an agent to understand the tool's role in the swap workflow. It covers the key behavioral aspects (staging, no sign/submit) and sequencing, though it could benefit from mentioning error handling or response format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds no additional parameter semantics, but the baseline is 3 as the schema provides comprehensive details, making the description's lack of param info acceptable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Stage a swap session'), resource ('server-side build only'), and output ('returns swap_session_id'), while distinguishing it from sibling tools like dex_tx_swap_sign_swap and dex_tx_swap_submit by explicitly noting it 'Does not sign or submit' and should be 'Call after quote confirmed by user'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call after quote confirmed by user') and when not to use it ('Does not sign or submit'), with clear alternatives implied (e.g., use dex_tx_swap_quote before, and dex_tx_swap_sign_swap or dex_tx_swap_submit after). This helps the agent sequence operations correctly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_quote (A)
Destructive

Returns swap route and pricing. Call before swap_prepare; show result to user before proceeding. → transfer_preview for direct token send (not exchange).

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesHuman-readable amount of source token to swap, e.g. "0.001" for 0.001 BNB, "1" for 1 USDT. Do NOT convert to smallest unit (wei/lamports). Obtained directly from user's chat message.
slippageYesSlippage tolerance as decimal ratio: 0.01 = 1%, 0.03 = 3%. Valid range: 0.001 (0.1%) to 0.499 (49.9%). If the user already specified slippage, use it directly. Only ask to confirm when not provided. Default: 0.03 (3%).
token_inYesSource token contract address. Use "-" for native coins (ETH, BNB, MATIC, etc.). For ERC20/BEP20 tokens, provide the full contract address (e.g. 0x55d398326f99059ff775485246999027b3197955 for BSC USDT). Obtain from user input, wallet token list (wallet_get_token_list), or known token mappings. If native_in=1, this field will be auto-set to "-".
mcp_tokenNoMCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
native_inYes⚠️ CRITICAL — Must be set correctly, wrong value causes transaction failure or fund loss. Simply check: is token_in a native/gas coin? If yes, set 1; if it's a contract token, set 0. Native coins: ETH, BNB, MATIC, AVAX, SOL — these are the chain's gas coins, native_in=1. Contract tokens: USDT, USDC, WETH, WBNB, DAI, etc. — these are ERC20/BEP20/SPL tokens, native_in=0. NOTE: BNB = native coin (native=1), WBNB = wrapped contract token (native=0). User saying 'BNB' means the native coin unless they explicitly say 'WBNB'. Same for ETH vs WETH, SOL vs WSOL. When native_in=1, token_in is auto-normalized to '-'.
to_wallet | No | Destination chain receiving address. ONLY required for CROSS-CHAIN swaps where chain_id_in and chain_id_out belong to different address groups (e.g. EVM→Solana or Solana→EVM). Obtain from dex_wallet_get_addresses: use addresses["SOL"] if dest is Solana, addresses["EVM"] if dest is EVM. Omit for single-chain swaps — defaults to user_wallet.
token_out | Yes | Destination token contract address. Use "-" for native coins. For ERC20/BEP20 tokens, provide the full contract address. Obtain from user input, wallet_get_token_list, or known token mappings. If native_out=1, this field will be auto-set to "-". For cross-chain swaps, use the token's contract address on the destination chain.
native_out | Yes | ⚠️ CRITICAL — Must be set correctly, wrong value causes transaction failure or fund loss. Simply check: is token_out a native/gas coin? If yes, set 1; if it's a contract token, set 0. Native coins: ETH, BNB, MATIC, AVAX, SOL — these are the chain's gas coins, native_out=1. Contract tokens: USDT, USDC, WETH, WBNB, DAI, etc. — these are ERC20/BEP20/SPL tokens, native_out=0. NOTE: BNB = native coin (native=1), WBNB = wrapped contract token (native=0). User saying 'BNB' means the native coin unless they explicitly say 'WBNB'. Same for ETH vs WETH, SOL vs WSOL. When native_out=1, token_out is auto-normalized to '-'. CROSS-CHAIN EXAMPLE: USDT(BSC) → SOL(Solana): native_in=0 (USDT is contract token), native_out=1 (SOL is native coin). CROSS-CHAIN EXAMPLE: ETH(Ethereum) → USDT(BSC): native_in=1 (ETH is native coin), native_out=0 (USDT is contract token).
chain_id_in | Yes | Source chain ID. Use well-known IDs: ETH=1, BSC=56, Polygon=137, Arbitrum=42161, Base=8453, Avalanche=43114, Solana=501. Only call dex_chain_config for uncommon/unknown chains. For single-chain swap, must equal chain_id_out.
user_wallet | Yes | User's wallet address matching the SOURCE CHAIN. CRITICAL: You MUST use the chain-specific address — call dex_wallet_get_addresses(account_id) first, then use addresses["EVM"] for EVM chains (ETH/BSC/Polygon/Arbitrum/Base/Avalanche) or addresses["SOL"] for Solana (chain_id=501). NEVER pass an EVM address (0x...) for Solana transactions or vice versa. For cross-chain swaps this is the FROM (source) chain address.
chain_id_out | Yes | Destination chain ID. Use same well-known IDs as chain_id_in. For single-chain swap, set equal to chain_id_in. For cross-chain swap, set to the target chain's ID.
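The native_in/native_out rule above is mechanical enough to sketch in code. The following Python helper is illustrative only — NATIVE_SYMBOLS and the function name are assumptions, not part of the Gate DEX MCP API. It applies the documented rule: native/gas coins get flag 1 and the normalized address "-"; contract tokens (including wrapped coins like WBNB or WETH) keep flag 0 and their contract address.

```python
# Hypothetical helper mirroring the native_in/native_out rule above.
# NATIVE_SYMBOLS and native_flag_and_address are illustrative assumptions.
NATIVE_SYMBOLS = {"ETH", "BNB", "MATIC", "AVAX", "SOL"}

def native_flag_and_address(symbol: str, contract_address: str) -> tuple:
    """Return (native flag, token address) for one leg of a swap.

    Native/gas coins -> flag 1, address normalized to "-".
    Contract tokens (ERC20/BEP20/SPL, incl. WBNB/WETH/WSOL) -> flag 0,
    keep the real contract address.
    """
    if symbol.upper() in NATIVE_SYMBOLS:
        return 1, "-"              # native coin: address auto-normalized
    return 0, contract_address     # contract token: keep the address

# Cross-chain example from the docs: USDT(BSC) -> SOL(Solana)
print(native_flag_and_address("USDT", "0x55d398326f99059fF775485246999027B3197955"))
print(native_flag_and_address("SOL", ""))  # → (1, '-')
```

Note that the wrapped variants fall through to the contract-token branch, matching the BNB-vs-WBNB distinction in the description.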
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. While annotations indicate destructiveHint=true (implying fund risk), the description emphasizes 'show result to user before proceeding', a critical safety guideline. It also mentions the tool is a prerequisite for swap_prepare, which is operational context not captured in annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured in just two sentences. The first sentence states the core purpose, the second provides critical usage guidance and alternative. Every word earns its place with zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 11-parameter tool with destructiveHint=true and no output schema, the description provides essential context about its role in the swap workflow and safety considerations. However, it doesn't explain what 'swap route and pricing' output looks like or mention the required mcp_token parameter. Given the annotations cover safety aspects, this is reasonably complete but could benefit from output format guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already provides comprehensive parameter documentation. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the heavy lifting for parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns swap route and pricing' (specific verb+resource). It distinguishes from sibling tools by explicitly mentioning 'swap_prepare' as the next step and 'transfer_preview' as an alternative for direct token sends, not exchanges.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call before swap_prepare; show result to user before proceeding' (when to use) and '→ transfer_preview for direct token send (not exchange)' (alternative tool). This gives clear context for when to use this tool versus other options.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_sign_approve (A)
Destructive

[write] Sign the ERC20 approve tx for a staged EVM swap. Requires need_approved=true in prepare result.

Parameters (JSON Schema)
Name | Required | Description | Default
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
checkin_token | Yes | Approve-stage checkin_token returned by the local skill check-in script after dex_tx_swap_checkin_preview(stage='approve')
swap_session_id | Yes | Swap session ID returned by dex_tx_swap_prepare
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent write operation. The description adds context by specifying it's for signing an approval transaction, which aligns with the destructive hint (approvals can change token allowances). However, it doesn't mention rate limits, auth requirements beyond the schema, or side effects like gas costs, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Sign the ERC20 approve tx') and includes essential context. There is no wasted verbiage, and every word contributes to understanding the tool's purpose and usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive, multi-step process) and lack of output schema, the description is mostly complete. It covers the purpose, prerequisites, and context, but could benefit from mentioning expected outcomes or error conditions. Annotations provide safety and idempotency info, compensating for some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with detailed parameter documentation. The description adds no additional parameter semantics beyond what's in the schema, such as explaining how 'swap_session_id' relates to 'need_approved=true'. Since the schema is comprehensive, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the action ('Sign the ERC20 approve tx') and the target resource ('for a staged EVM swap'), making the purpose specific and clear. It distinguishes from siblings like 'dex_tx_swap_sign_swap' by focusing on the approval step rather than the swap itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage conditions: it must be used when 'need_approved=true in prepare result' and references a prerequisite step ('staged EVM swap'). It also implies an alternative flow where approval might not be needed, guiding the agent on when to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_sign_swap (A)
Destructive

[write] Sign the main swap tx for a staged session. EVM: call sign_approve first if need_approved=true. Solana: generates a fresh signable payload.

Parameters (JSON Schema)
Name | Required | Description | Default
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
checkin_token | Yes | Swap-stage checkin_token returned by the local skill check-in script after dex_tx_swap_checkin_preview(stage='swap')
swap_session_id | Yes | Swap session ID returned by dex_tx_swap_prepare
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true and readOnlyHint=false, but the description adds valuable context: it specifies blockchain-specific behaviors (EVM vs Solana workflows) and prerequisite steps, which are not covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the main action, followed by blockchain-specific notes. Both sentences are necessary for clarity, though the structure could be slightly improved by explicitly stating the tool's role in the swap workflow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no output schema, the description provides adequate context on purpose and blockchain-specific behaviors, but it lacks details on error conditions, response format, or integration with other swap tools like dex_tx_swap_submit, leaving some gaps for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any parameter-specific details beyond what the schema provides, such as explaining relationships between swap_session_id and checkin_token, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Sign the main swap tx') and resource ('for a staged session'), distinguishing it from other swap-related tools like dex_tx_swap_sign_approve and dex_tx_swap_submit. However, it doesn't explicitly differentiate from all siblings beyond the blockchain-specific notes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'staged session' and blockchain-specific prerequisites (EVM: call sign_approve first if need_approved=true; Solana: generates a fresh signable payload), but it doesn't explicitly state when to use this tool versus alternatives like dex_tx_swap_submit or provide clear exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_swap_submit (A)
Destructive

[write] Broadcast a staged swap to the chain. Does not sign. Requires all signing steps complete (sign_approve if EVM+approval, then sign_swap).

Parameters (JSON Schema)
Name | Required | Description | Default
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
swap_session_id | Yes | Swap session ID returned by dex_tx_swap_prepare
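The ordering constraint spread across the sign_approve, sign_swap, and submit descriptions can be sketched as a small driver. Here `call` is a hypothetical MCP client function and `run_staged_swap` is an illustrative helper; only the tool names, the parameter names (mcp_token, swap_session_id, checkin_token, need_approved), and the ordering rule come from the documentation above.

```python
# Hypothetical driver showing the documented staged-swap ordering:
# sign_approve (only when need_approved on EVM) -> sign_swap -> submit.
# `call` stands in for an MCP client invocation; checkin tokens are
# placeholders obtained from the check-in flow in practice.
def run_staged_swap(call, mcp_token, swap_session_id, need_approved,
                    approve_checkin_token=None, swap_checkin_token=None):
    if need_approved:  # EVM swaps that first need an ERC20 allowance
        call("dex_tx_swap_sign_approve", mcp_token=mcp_token,
             swap_session_id=swap_session_id,
             checkin_token=approve_checkin_token)
    call("dex_tx_swap_sign_swap", mcp_token=mcp_token,
         swap_session_id=swap_session_id,
         checkin_token=swap_checkin_token)
    # submit only broadcasts; it never signs, so all signing must be done
    call("dex_tx_swap_submit", mcp_token=mcp_token,
         swap_session_id=swap_session_id)

calls = []
run_staged_swap(lambda tool, **kw: calls.append(tool),
                "tok", "sess-1", need_approved=True,
                approve_checkin_token="ct-a", swap_checkin_token="ct-s")
print(calls)
# → ['dex_tx_swap_sign_approve', 'dex_tx_swap_sign_swap', 'dex_tx_swap_submit']
```

When need_approved is false (Solana, or EVM swaps with sufficient allowance), the approve step is skipped and the sequence collapses to sign_swap followed by submit.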
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it clarifies that the tool 'Does not sign' and requires prior signing steps, which annotations don't cover. Annotations indicate destructiveHint=true and readOnlyHint=false, aligning with 'Broadcast' as a write operation. No contradiction exists, but more details on rate limits or error handling could enhance it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action in the first sentence, followed by critical constraints. Every sentence earns its place by clarifying scope and prerequisites without redundancy. It's efficiently structured in two concise sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive broadcast operation) and lack of output schema, the description is mostly complete: it explains the action, prerequisites, and what it doesn't do. However, it could benefit from mentioning the response format or potential errors, though annotations cover safety aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (mcp_token and swap_session_id). The description does not add any parameter-specific details beyond what the schema provides, such as format examples or dependencies. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Broadcast a staged swap to the chain') and the resource ('swap'), distinguishing it from siblings like dex_tx_swap_prepare (which prepares) or dex_tx_swap_sign_swap (which signs). It explicitly mentions what it does not do ('Does not sign'), which helps differentiate its role in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: after all signing steps are complete (sign_approve if EVM+approval, then sign_swap). It also implicitly suggests alternatives by referencing prerequisite tools, helping the agent understand the sequence in the swap process.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_transfer_preview (A)
Destructive

Build unsigned transfer tx; displays summary (sender, recipient, amount, gas). Returns unsigned_tx_hex and txBundle for check-in. Does not broadcast. Wait for user confirmation before signing. → swap_quote/swap_prepare for token exchange.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Recipient address
from | Yes | Sender (your wallet) address
chain | No | Chain name or alias (default ETH). Resolved to canonical name from chain config; downstream sign/broadcast should use the same logical chain.
nonce | No | Transaction nonce. Omit to use current chain nonce (eth_getTransactionCount), ensuring tx can be mined and found on explorer; wrong nonce causes tx to fail or not appear.
token | No | Token symbol for display (default USDT). Use ETH/NATIVE for native; USDT for USDT; or use token_contract/token_mint for any token.
amount | Yes | Amount in token units. Precision depends on token/decimals. Examples: '1', '0.5', '0.000001'
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
token_mint | No | Solana: SPL token mint address for SPL transfer. When set, token_decimals is required. EVM: unused.
token_contract | No | EVM: ERC20 contract address for arbitrary token. When set, token_decimals is used (default 18). Solana: unused.
token_decimals | No | Token decimals (e.g. 6, 18). Required when using token_contract (EVM) or token_mint (Solana); default 18 for ERC20, 9 for SOL.
max_fee_per_gas | No | EVM only: override max fee per gas (wei). Use when replacing a pending tx: set ≥1.1× the previous max_fee_per_gas to fix replacement transaction underpriced.
max_priority_fee_per_gas | No | EVM only: override max priority fee per gas (wei). When replacing, set ≥1.1× previous max_priority_fee_per_gas.
priority_fee_micro_lamports | No | Solana only: optional priority fee (micro-lamports per compute unit). Use 1000-10000 if broadcast returns replacement transaction underpriced.
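The "replacement transaction underpriced" guidance above (bump both EVM fee fields to at least 1.1× the previous wei values) can be computed with integer arithmetic so float rounding never drops the result below the threshold. `bump_replacement_fees` is a hypothetical helper, not part of the MCP API; the key names match the parameters in the table above.

```python
# Illustrative helper for the >= 1.1x fee-replacement rule described above.
# Integer ceiling division avoids float rounding on large wei amounts.
def bump_replacement_fees(prev_max_fee_wei: int, prev_priority_fee_wei: int) -> dict:
    bump = lambda wei: -(-wei * 11 // 10)  # ceil(wei * 1.1) in pure integers
    return {
        "max_fee_per_gas": bump(prev_max_fee_wei),
        "max_priority_fee_per_gas": bump(prev_priority_fee_wei),
    }

# Replacing a pending tx that used 30 gwei max fee / 1.5 gwei priority fee:
print(bump_replacement_fees(30_000_000_000, 1_500_000_000))
# → {'max_fee_per_gas': 33000000000, 'max_priority_fee_per_gas': 1650000000}
```

The ceiling means odd wei values round up rather than down, so the returned overrides always satisfy the ≥1.1× requirement.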
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it clarifies that the tool only builds/previews transactions without broadcasting, requires user confirmation before signing, and returns specific outputs (unsigned_tx_hex and txBundle). While annotations indicate destructiveHint=true and readOnlyHint=false (consistent with transaction building), the description provides practical workflow guidance that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. Each sentence serves a distinct purpose: stating the core function, specifying outputs, clarifying what it doesn't do, providing workflow guidance, and directing to alternatives. The information is front-loaded with the most critical details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex transaction-building tool with 13 parameters and destructiveHint=true, the description provides good contextual completeness. It explains the tool's role in the workflow (preview before signing), clarifies it doesn't broadcast, and directs to alternatives. The main gap is lack of output schema, but the description partially compensates by mentioning return values (unsigned_tx_hex and txBundle).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all 13 parameters. The description doesn't add parameter-specific information beyond what's in the schema, so it meets the baseline expectation. It does mention the mcp_token requirement in the schema description, but this is already covered in the structured field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Build unsigned transfer tx'), resource (transfer transaction), and scope ('displays summary... Returns unsigned_tx_hex and txBundle for check-in. Does not broadcast'). It distinguishes from sibling tools by explicitly mentioning '→ swap_quote/swap_prepare for token exchange' to differentiate from token exchange operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Wait for user confirmation before signing') and when not to use it ('Does not broadcast'). It also names alternative tools for different use cases ('→ swap_quote/swap_prepare for token exchange'), giving clear direction for when to choose alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_x402_checkin_preview (A)
Destructive

Prepare Gate Verify tx_checkin for x402 (EVM exact / EIP-3009 only): returns tx_checkin.message (64-hex digest) and x402_payment_required_b64 without signing. Flow: 1) this tool 2) Gate Verify tx_checkin with returned tx_checkin fields 3) dex_tx_x402_fetch with same url/body/headers, checkin_token, and the same x402_payment_required_b64. If x402_payment_required_b64 is omitted, performs one HTTP call and requires HTTP 402.

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Same URL as dex_tx_x402_fetch
body | No | Optional request body
method | No | HTTP method; default GET
headers | No | Optional JSON object of request headers
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
x402_preferred_network | No | Same as dex_tx_x402_fetch
x402_payment_required_b64 | No | Optional: PAYMENT-REQUIRED header base64; when omitted, one outbound request is made and must return 402
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false, which the description complements by explaining the multi-step flow and conditional behavior (omitting x402_payment_required_b64 triggers an HTTP call with 402 requirement). It adds context about the tool's role in a sequence and the consequences of parameter omission, though it doesn't detail rate limits or specific error handling beyond HTTP 402.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and returns, followed by a structured flow and conditional note. It avoids redundancy but could be slightly more concise by integrating the conditional behavior into the flow description. Every sentence adds value, such as distinguishing from signing and outlining the multi-step process.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, destructive operation, no output schema), the description is reasonably complete. It explains the tool's role in a multi-step process, return values, and conditional behavior, though it doesn't detail error responses or output format beyond the 64-hex digest. Annotations cover safety aspects, but more on behavioral outcomes could enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description adds minimal parameter semantics, mainly noting that url, body, headers, and x402_preferred_network should match dex_tx_x402_fetch, and explaining the effect of omitting x402_payment_required_b64. This meets the baseline for high schema coverage without significantly enhancing parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('prepare', 'returns') and resources ('Gate Verify tx_checkin for x402'), distinguishing it from sibling tools like dex_tx_x402_fetch by focusing on preparation rather than execution. It explicitly mentions the return values (tx_checkin.message and x402_payment_required_b64) and constraints (EVM exact / EIP-3009 only, no signing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with a numbered flow: (1) this tool, (2) Gate Verify tx_checkin, (3) dex_tx_x402_fetch. It also covers when behavior changes (if x402_payment_required_b64 is omitted, it performs one HTTP call requiring HTTP 402), clearly references the sibling tool dex_tx_x402_fetch, and specifies prerequisites like using the same url/body/headers and checkin_token.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_tx_x402_fetch (A)
Destructive

[write] Send an HTTP request to the given URL. If the server responds with 402 Payment Required, pay via x402 (Coinbase-style: USDC EIP-3009 / Permit2 on EVM, SPL on Solana), retry with PAYMENT-SIGNATURE. REQUIRED: Gate Verify tx_checkin before signing; pass checkin_token. For EVM EIP-3009, use dex_tx_x402_checkin_preview first so the agent does not build digest manually; then pass checkin_token and the same x402_payment_required_b64 from preview. CoinGecko Pro x402: do not send x-cg-pro-api-key; default network Base unless x402_preferred_network=solana.

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Request URL (CoinGecko x402: https://pro-api.coingecko.com/api/v3/x402/...)
body | No | Optional request body
method | No | HTTP method; default GET
headers | No | Optional JSON object of request headers
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
checkin_token | Yes | From Gate Verify MCP tool tx_checkin success (e.g. structuredContent.data.checkin_token or data.checkin_token). Required before every call so GV can verify signing.
x402_preferred_network | No | Optional: base, eth, solana, eip155:8453, ... — restrict which accept.network is used. CoinGecko Pro /x402/ defaults to base when omitted.
x402_payment_required_b64 | No | Optional: base64 PAYMENT-REQUIRED header payload from the same 402 as EIP-3009 tx-checkin. When set, skips the first HTTP call and pays using this snapshot (digest/checkin_token must match).
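The three-step x402 flow described above can be sketched as a client-side driver. `call`, `x402_paid_fetch`, and `mock_call` are hypothetical stand-ins for an MCP client; the tool names, the reuse of the same url/body/headers, and the checkin_token / x402_payment_required_b64 handoff follow the descriptions, while the return shapes are assumptions for illustration only.

```python
# Illustrative x402 flow: checkin_preview -> Gate Verify tx_checkin ->
# dex_tx_x402_fetch with the SAME request fields plus the handoff tokens.
def x402_paid_fetch(call, mcp_token, url, method="GET", body=None, headers=None):
    req = {"url": url, "method": method, "body": body, "headers": headers}
    preview = call("dex_tx_x402_checkin_preview", mcp_token=mcp_token, **req)
    verified = call("tx_checkin", **preview["tx_checkin"])  # Gate Verify step
    return call("dex_tx_x402_fetch", mcp_token=mcp_token,
                checkin_token=verified["checkin_token"],
                x402_payment_required_b64=preview["x402_payment_required_b64"],
                **req)  # same url/body/headers as the preview call

order = []
def mock_call(tool, **kwargs):  # assumed return shapes, for illustration
    order.append(tool)
    if tool == "dex_tx_x402_checkin_preview":
        return {"tx_checkin": {"message": "ab" * 32},  # 64-hex digest
                "x402_payment_required_b64": "cGF5bWVudA=="}
    if tool == "tx_checkin":
        return {"checkin_token": "ct-1"}
    return {"status": 200, "sent": kwargs}

result = x402_paid_fetch(mock_call, "tok", "https://example.com/paid")
print(order)
# → ['dex_tx_x402_checkin_preview', 'tx_checkin', 'dex_tx_x402_fetch']
```

Reusing the `req` dict in all three calls is the point of the sketch: the fetch must see exactly the url/body/headers that produced the preview's digest, or the checkin_token will not verify.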
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. While annotations indicate destructive (true), non-idempotent, and open-world operations, the description elaborates on specific behaviors: automatic retry with PAYMENT-SIGNATURE on 402, payment mechanisms (USDC EIP-3009/Permit2 on EVM, SPL on Solana), network defaults (Base unless specified), and authentication requirements (mcp_token needed to avoid 403). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense but somewhat verbose and could be more structured. It front-loads the core purpose but includes multiple conditional clauses and implementation details that might overwhelm. Every sentence adds value, but the presentation could be more streamlined for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (payment handling, authentication, multiple parameters) and lack of output schema, the description does a good job covering key aspects: purpose, prerequisites, payment flow, network defaults, and authentication. It references related tools for context. However, it doesn't describe the return format or error handling, leaving some gaps for a tool with destructive operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 8 parameters thoroughly. The description adds some semantic context by explaining relationships between parameters (e.g., checkin_token from tx_checkin, x402_payment_required_b64 from preview) and usage notes (e.g., default network Base). However, it doesn't provide significant additional meaning beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send an HTTP request to the given URL' with specific handling for 402 Payment Required responses via x402 payment. It distinguishes from siblings by mentioning specific related tools (tx_checkin, dex_tx_x402_checkin_preview) and explicitly differentiates from other tools in the server that handle different operations like authentication, market data, or swaps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for HTTP requests that may require x402 payments, with prerequisites (Gate Verify tx_checkin, checkin_token). It specifies alternatives (use dex_tx_x402_checkin_preview first for EVM EIP-3009) and exclusions (do not send x-cg-pro-api-key for CoinGecko Pro). It clearly distinguishes from sibling tools that handle different tasks like authentication or swaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_bind_exchange_uid (A)
Destructive

Bind Gate exchange UID to session. Standalone Gate session only; gateUid must equal login_result.gate_uid from Gate OAuth. → replace_binding to rebind.

Parameters (JSON Schema)
Name | Required | Description
gateUid | Yes | Must equal login_result.gate_uid from Gate OAuth that created this session
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
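The gateUid constraint above can be sketched as a tools/call payload. This is a minimal sketch assuming a JSON-RPC style MCP transport; the login_result values and request id are illustrative placeholders, not real credentials:

```python
import json

# Illustrative values; in practice these come from dex_auth_gate_login_poll.
login_result = {"gate_uid": "12345678", "mcp_token": "example-session-token"}

# gateUid must equal login_result.gate_uid from the Gate OAuth that
# created this session, and mcp_token must accompany every call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "dex_wallet_bind_exchange_uid",
        "arguments": {
            "gateUid": login_result["gate_uid"],
            "mcp_token": login_result["mcp_token"],
        },
    },
}
print(json.dumps(request, indent=2))
```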
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false. The description adds context by specifying it's for 'Standalone Gate session only' and mentions rebinding, which aligns with the destructive nature. However, it doesn't elaborate on potential side effects, error conditions, or session implications beyond what annotations imply, missing details like whether binding is reversible or affects other operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first phrase. It includes essential usage notes in a single sentence, with no redundant information. However, the arrow notation '→ replace_binding to rebind' is informal and could be integrated more smoothly, which slightly weakens the structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (binding operation with destructive hint and no output schema), the description is moderately complete. It covers the main purpose and constraints but lacks details on return values, error handling, or how binding integrates with other wallet tools. With annotations providing safety cues, it's adequate but could benefit from more behavioral context for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters (gateUid and mcp_token). The description adds minimal value beyond the schema, only reiterating that gateUid 'must equal login_result.gate_uid from Gate OAuth,' which is already in the schema description. No additional syntax or format details are provided, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Bind Gate exchange UID to session.' It specifies the resource (Gate exchange UID) and action (bind to session), and distinguishes it from sibling tools like dex_wallet_replace_binding by mentioning rebinding as a separate operation. However, it doesn't explicitly differentiate from all siblings, such as dex_wallet_google_gate_bind_start, which may have overlapping functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Standalone Gate session only; gateUid must equal login_result.gate_uid from Gate OAuth.' It also hints at an alternative for rebinding with '→ replace_binding to rebind.' However, it lacks explicit exclusions or comparisons with other binding-related tools like dex_wallet_google_gate_bind_start, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_get_addresses (A)
Destructive

Returns address map by chain type (EVM, SOL, etc.). Call before transfer/swap tools to get the sender address.

Parameters (JSON Schema)
Name | Required | Description
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
account_id | Yes | Account ID
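The documented "call before transfer/swap" workflow can be sketched as follows; the response shape and addresses are hypothetical, inferred from the description ('address map by chain type'):

```python
# Arguments for dex_wallet_get_addresses; values are illustrative.
arguments = {"account_id": "acct-1", "mcp_token": "example-session-token"}

# Hypothetical response: an address map keyed by chain type.
addresses = {
    "EVM": "0x1111111111111111111111111111111111111111",
    "SOL": "ExampleSolanaAddress1111111111111111111111",
}

# Pick the sender address for a subsequent EVM transfer/swap call.
sender = addresses["EVM"]
```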
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying the tool's role in workflows ('Call before transfer/swap tools'). While annotations provide structural hints (readOnlyHint=false, destructiveHint=true, etc.), the description clarifies practical usage without contradicting annotations. It doesn't fully explain the destructive nature hinted at by annotations, but adds operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality, and the second provides critical usage guidance. No wasted words, and the information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, 100% schema coverage, and comprehensive annotations, the description provides sufficient context. It explains the tool's purpose and when to use it, though it doesn't detail the return format (no output schema exists) or fully address the destructiveHint annotation. For a tool with good structured data, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description doesn't add parameter-specific information beyond what's already documented in the schema (mcp_token requirements, account_id purpose). It provides workflow context but no additional parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('address map by chain type'), and distinguishes it from sibling tools by specifying it should be called before transfer/swap tools to get sender addresses. This provides clear differentiation from other wallet-related tools like dex_wallet_get_token_list or dex_wallet_get_total_asset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance by stating 'Call before transfer/swap tools to get the sender address.' This gives clear context about when to use this tool versus alternatives, and the input schema further reinforces this by requiring account_id and noting the mcp_token is needed after login.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_get_bindings (A)
Destructive

Returns current Gate UID binding for the session wallet.

Parameters (JSON Schema)
Name | Required | Description
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these, and it adds context by specifying 'for the session wallet,' which clarifies scope. However, it lacks details on potential side effects (e.g., what 'destructive' entails) or rate limits, missing some behavioral nuance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the core purpose without unnecessary words. It's front-loaded and wastes no space, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a wallet binding tool with destructive hints), no output schema, and rich annotations, the description is minimally adequate. It states what the tool does but lacks details on return values, error conditions, or how it interacts with session state, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'mcp_token' fully documented in the schema. The description adds no additional parameter information beyond what's in the schema, so it meets the baseline score of 3 for high coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Returns') and resource ('current Gate UID binding for the session wallet'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'dex_wallet_get_addresses' or 'dex_wallet_get_total_asset', which also retrieve wallet-related information, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites (e.g., needing a logged-in session) or compare it to similar tools like 'dex_wallet_bind_exchange_uid' or 'dex_wallet_replace_binding', leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_get_token_list (A)
Destructive

Token balances and prices with pagination; filter by network_keys. → get_total_asset for portfolio summary only.

Parameters (JSON Schema)
Name | Required | Description
page | No | Page number, default 1
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
page_size | No | Page size, default 20
account_id | No | Account ID (optional, auto-detected from login session)
network_keys | No | Optional. Comma-separated network keys to filter (e.g. ETH,SOL,ARB). Omit or leave empty for all networks. Do not pass a separate chain field — use network_keys only.
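The pagination defaults and the comma-separated network_keys format lend themselves to a small argument builder. This is a sketch only; the helper name is ours, not part of the server:

```python
def build_token_list_args(networks=None, page=1, page_size=20, mcp_token=None):
    """Assemble arguments for dex_wallet_get_token_list.

    Filtering uses network_keys only (comma-separated, e.g. "ETH,SOL,ARB");
    omit it to query all networks. Never pass a separate chain field.
    """
    args = {"page": page, "page_size": page_size}
    if networks:
        args["network_keys"] = ",".join(networks)
    if mcp_token is not None:
        args["mcp_token"] = mcp_token
    return args
```

For example, `build_token_list_args(["ETH", "SOL"])` yields `{"page": 1, "page_size": 20, "network_keys": "ETH,SOL"}`.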
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description mentions 'pagination' and 'filter by network_keys', which adds useful behavioral context beyond what annotations provide. However, with annotations declaring destructiveHint=true and readOnlyHint=false, the description doesn't explain what gets destroyed or modified, nor does it address authentication requirements (though the schema covers mcp_token). The description adds some value but doesn't fully explain the destructive nature implied by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two clauses separated by a semicolon, both containing essential information. Every word earns its place: the first clause states the core functionality, the second provides crucial sibling differentiation. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, 100% schema coverage, and no output schema, the description provides good context by explaining the core functionality and sibling differentiation. However, it doesn't describe the return format or what 'token balances and prices' actually includes, which would be helpful given the lack of output schema. The destructiveHint=true annotation also warrants more explanation in the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 5 parameters thoroughly. The description mentions 'filter by network_keys' which aligns with the schema, but doesn't add significant semantic value beyond what's already in the parameter descriptions. The baseline of 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose and resource ('Token balances and prices with pagination'), and explicitly distinguishes it from its sibling tool 'get_total_asset' by specifying that sibling is for 'portfolio summary only'. This provides excellent differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives by stating '→ get_total_asset for portfolio summary only', clearly indicating the alternative tool for a different use case. This gives the agent clear direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_get_total_asset (A)
Destructive

Total portfolio value and 24h change. → get_token_list for per-token balance detail.

Parameters (JSON Schema)
Name | Required | Description
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
account_id | No | Account ID (optional, auto-detected from login session)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description doesn't contradict these annotations but adds minimal behavioral context beyond them. It doesn't explain what 'destructive' means in this context or address authentication requirements that are documented in the parameter schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two sentences that each serve a clear purpose: the first states the tool's function, the second provides sibling differentiation. There's zero wasted language and it's perfectly front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema but good annotations and 100% schema coverage, the description provides adequate context about what the tool returns (total portfolio value and 24h change) and when to use it versus alternatives. The main gap is not addressing the authentication requirement mentioned in the parameter schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with both parameters well-documented in the input schema. The description adds no additional parameter information beyond what's already in the schema, so it meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose ('Total portfolio value and 24h change') and distinguishes it from a sibling tool ('get_token_list for per-token balance detail'). It explicitly identifies the resource as portfolio assets and differentiates it from the detailed token-level sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives by stating 'get_token_list for per-token balance detail.' This clearly indicates this tool is for aggregate portfolio summary while the sibling is for detailed token-level information, giving perfect sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_get_wallet_type (A)
Destructive

Returns walletAddress, walletType, gateUid. With for_withdraw_to_exchange or for_gate_oauth_bind, also starts Gate OAuth when gateUid is missing.

Parameters (JSON Schema)
Name | Required | Description
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
account_id | No | Account ID; used with session to resolve EVM address if wallet_address omitted
wallet_address | No | EVM wallet address to classify
for_gate_oauth_bind | No | Bind/rebind Gate exchange: when walletType is NOT gate_mcp/gate_quick, start Gate OAuth (Google: link); when already Gate-native with gateUid, returns nextSteps only
for_withdraw_to_exchange | No | Withdraw-to-exchange: when gateUid is missing, start Gate device OAuth (Google: link binding)
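The conditional OAuth behavior tied to the two flags can be mirrored in a small predicate. This encodes our reading of the tool description ('With for_withdraw_to_exchange or for_gate_oauth_bind, also starts Gate OAuth when gateUid is missing') and is not server code:

```python
def starts_gate_oauth(result, for_withdraw_to_exchange=False,
                      for_gate_oauth_bind=False):
    """Return True when, per the documented behavior, the call would
    start Gate OAuth: either flag is set and gateUid is missing."""
    if not (for_withdraw_to_exchange or for_gate_oauth_bind):
        return False  # plain lookup call, no OAuth side effect
    return not result.get("gateUid")
```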
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, destructiveHint=true, openWorldHint=true, and idempotentHint=false. The description adds value by explaining the OAuth initiation behavior, which isn't covered by annotations. However, it doesn't fully disclose implications of destructiveHint=true (e.g., what gets modified or risks involved), leaving some behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences that efficiently convey the core functionality and conditional behavior. It's front-loaded with the primary purpose and avoids unnecessary details, though it could be slightly more structured for clarity, such as separating the OAuth conditions more explicitly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters, no output schema, and annotations with mixed hints (e.g., destructiveHint=true), the description is somewhat complete but lacks details on return values, error handling, or the full impact of destructive operations. It covers the basic purpose and OAuth triggers but doesn't fully address the complexity implied by the annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal semantic context by mentioning the conditions for starting OAuth related to for_withdraw_to_exchange and for_gate_oauth_bind, but doesn't provide significant additional meaning beyond what's in the schema, justifying the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns walletAddress, walletType, and gateUid, and mentions starting Gate OAuth under specific conditions. It specifies the verb 'returns' and resource 'wallet type info', but doesn't explicitly differentiate from siblings like dex_wallet_get_addresses or dex_wallet_get_bindings, which is why it's not a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: to get wallet info and optionally start Gate OAuth when gateUid is missing for specific purposes (for_withdraw_to_exchange or for_gate_oauth_bind). However, it doesn't explicitly state when NOT to use it or name alternatives among siblings, so it falls short of a perfect 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_google_gate_bind_start (A)
Destructive

Google sessions only: starts Gate OAuth to bind Gate exchange UID to this session. mcp_token stays unchanged after poll.

Parameters (JSON Schema)
Name | Required | Description
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent, open-world operation, which the description doesn't contradict. The description adds valuable context beyond annotations: it specifies the tool is for Google sessions only, involves OAuth flow, and that the mcp_token remains unchanged after polling. This clarifies the authentication scope and token behavior, enhancing transparency without repeating annotation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two sentences that directly convey key information: the tool's scope and action, plus a note on token behavior. There is no wasted language, and every sentence serves a clear purpose, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving OAuth binding for Google sessions), the description covers the purpose and some behavioral aspects, but lacks details on output (no output schema) and doesn't fully address when to use versus siblings. With annotations providing safety and idempotency info, and schema covering parameters, the description is adequate but leaves gaps in usage context and result expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'mcp_token' fully documented in the schema. The description mentions 'mcp_token stays unchanged after poll', which adds a behavioral note about the parameter but doesn't provide additional semantic meaning beyond what's in the schema. This meets the baseline of 3 since the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('starts Gate OAuth to bind Gate exchange UID to this session') and specifies the scope ('Google sessions only'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'dex_wallet_bind_exchange_uid' or 'dex_wallet_replace_binding', which appear related to binding operations, leaving some ambiguity about when to choose this specific tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Google sessions only') and mentions a prerequisite ('mcp_token stays unchanged after poll'), but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'dex_wallet_bind_exchange_uid'. There's no mention of when-not-to-use scenarios or clear alternatives, leaving the agent to infer usage from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_replace_binding (A)
Destructive

Rebind Gate exchange UID. [write] newUid must equal login_result.gate_uid from a fresh Gate OAuth. Blocked for gate_mcp/gate_quick sessions and Google sessions.

Parameters (JSON Schema)
Name | Required | Description
newUid | Yes | Must equal login_result.gate_uid from latest Gate OAuth (stamped on session)
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in 403
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that 'newUid must equal login_result.gate_uid from a fresh Gate OAuth' and notes blocking conditions for certain session types. Annotations indicate destructiveHint=true and readOnlyHint=false, which align with the 'rebind' action, and the description complements this with operational constraints without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with key information in two sentences. The first sentence states the purpose, and the second adds critical usage constraints. There's no wasted text, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive operation with specific constraints), the description is reasonably complete. It covers purpose, usage conditions, and behavioral notes. However, without an output schema, it doesn't describe return values or error cases, which is a minor gap for a tool with significant implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing full parameter documentation. The description mentions 'newUid' must come from 'login_result.gate_uid', adding slight context about its source, but doesn't elaborate on 'mcp_token' beyond what the schema states. This meets the baseline for high schema coverage without significant added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Rebind Gate exchange UID') and specifies the resource (Gate exchange UID binding). It distinguishes from sibling tools like 'dex_wallet_bind_exchange_uid' by focusing on replacement rather than initial binding, though it doesn't explicitly name alternatives. The purpose is specific but could be more distinctively articulated relative to siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it specifies when to use (with 'newUid' from fresh Gate OAuth) and when not to use (blocked for gate_mcp/gate_quick sessions and Google sessions). It implicitly suggests alternatives by referencing login tools for obtaining credentials, though it doesn't name specific sibling tools for those contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_sign_message (A)
Destructive

Sign a 32-byte hex message with wallet key. [write] Requires checkin_token from terminal tx-checkin. → sign_transaction for raw tx signing.

Parameters (JSON Schema)
Name | Required | Description
chain | Yes | Chain: dex_chain_config name/alias (e.g. ETH, BSC, SOL), or pass EVM / SOL directly for the signing family. Unknown names are rejected via chain config; web3_business_wallet sign-message receives chain "EVM" or lowercase "sol" for Solana.
message | Yes | Message to sign (hex)
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in a 403.
checkin_token | Yes | From a successful tx-checkin response: the checkin_token string (terminal CLI stdout JSON). Must match the signing intent in the same flow.
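The parameter constraints above can be made concrete with a request-builder sketch. The JSON-RPC "tools/call" envelope is the standard MCP shape and the field names come from the parameter table; the helper itself and its validation are an assumption, not server code.

```python
import json
import secrets

# Hypothetical request builder for dex_wallet_sign_message. Validates the
# 32-byte hex requirement from the tool description before constructing
# the MCP "tools/call" payload.
def build_sign_message_call(chain, message_hex, checkin_token, mcp_token=None):
    body = bytes.fromhex(message_hex.removeprefix("0x"))
    if len(body) != 32:
        raise ValueError("message must be exactly 32 bytes of hex")
    args = {
        "chain": chain,                  # "EVM", or lowercase "sol" for Solana
        "message": message_hex,
        "checkin_token": checkin_token,  # from terminal tx-checkin stdout JSON
    }
    if mcp_token is not None:
        args["mcp_token"] = mcp_token    # omitting it after login yields a 403
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "dex_wallet_sign_message", "arguments": args},
    }

digest = "0x" + secrets.token_bytes(32).hex()
print(json.dumps(build_sign_message_call("EVM", digest, "ck_example"), indent=2))
```

Validating the 32-byte rule client-side catches the most likely first-attempt failure before the destructive call is ever issued.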
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it specifies the 32-byte hex requirement for messages and the checkin_token prerequisite from a specific flow. While annotations already indicate this is destructive and non-idempotent, the description provides practical implementation details that help the agent understand the operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely efficient - just two sentences that pack essential information: the core functionality, key constraints, prerequisites, and differentiation from sibling tools. Every word serves a purpose with zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a signing operation with comprehensive annotations and full schema coverage, the description provides excellent context about prerequisites, constraints, and alternatives. The main gap is the lack of output schema, but the description compensates well by clearly defining the tool's role in the broader transaction flow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all parameters. The description doesn't add significant parameter semantics beyond what's in the schema, though it reinforces the checkin_token requirement. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Sign a 32-byte hex message with wallet key'), identifies the resource (message), and distinguishes it from the sibling tool 'sign_transaction' for raw tx signing. This provides excellent differentiation from related tools in the ecosystem.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('sign_transaction for raw tx signing' as an alternative) and provides crucial prerequisites ('Requires checkin_token from terminal tx-checkin'). This gives clear guidance on both when to use this tool versus alternatives and what's needed before invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_wallet_sign_transaction (A)
Destructive

Sign a raw unsigned transaction. [write] Requires user confirmation after transfer preview and checkin_token from terminal tx-checkin. → dex_tx_send_raw_transaction to broadcast.

Parameters (JSON Schema)
Name | Required | Description
chain | Yes | Chain name: must match dex_chain_config response `chain` (canonical). Aliases are resolved via chain config.
raw_tx | Yes | Raw transaction data
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in a 403.
checkin_token | Yes | From a successful tx-checkin response: the checkin_token string (terminal CLI stdout JSON). Must match the signing intent in the same flow.
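The sign-then-broadcast ordering the description implies (dex_wallet_sign_transaction first, dex_tx_send_raw_transaction second) can be sketched as follows. The client object, its call() method, and the "signed_tx" result field are assumptions; only the tool and argument names come from the tables above.

```python
# Illustrative sketch of the two-step flow. Destructive and non-idempotent:
# a real flow confirms with the user after dex_tx_transfer_preview before
# reaching the signing step.
def sign_and_broadcast(client, chain, raw_tx, checkin_token, mcp_token):
    common = {"chain": chain, "mcp_token": mcp_token}
    # Step 1: sign the raw transaction (requires the checkin_token).
    signed = client.call(
        "dex_wallet_sign_transaction",
        {**common, "raw_tx": raw_tx, "checkin_token": checkin_token},
    )
    # Step 2: broadcast the signed payload with the sibling tool.
    return client.call(
        "dex_tx_send_raw_transaction",
        {**common, "raw_tx": signed["signed_tx"]},
    )

# Minimal fake client to show the call order without a live server.
class FakeClient:
    def __init__(self):
        self.calls = []

    def call(self, name, args):
        self.calls.append(name)
        if name == "dex_wallet_sign_transaction":
            return {"signed_tx": "0xsigned"}
        return {"tx_hash": "0xabc"}

fake = FakeClient()
sign_and_broadcast(fake, "ETH", "0xraw", "ck_token", "mcp_token_value")
print(fake.calls)
```

Keeping signing and broadcasting as separate calls lets the agent surface the signed payload for inspection before anything reaches the chain.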
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a destructive, non-idempotent write operation (readOnlyHint: false, destructiveHint: true, idempotentHint: false). The description adds valuable context beyond annotations: it specifies that user confirmation is required, mentions the checkin_token requirement (which is also in the schema), and references the terminal CLI flow. However, it doesn't mention rate limits or specific error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two sentences) with zero wasted words. It's front-loaded with the core purpose, followed by prerequisites and next steps. Every sentence earns its place by providing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive transaction signing tool with no output schema, the description provides good context about prerequisites (user confirmation, checkin_token), workflow placement, and next steps. It could be more complete by mentioning what the tool returns (signed transaction data) or error cases, but given the annotations cover safety aspects and schema covers parameters well, it's mostly sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all parameters. The description mentions 'checkin_token from terminal tx-checkin' which adds some context about its source, but doesn't provide significant additional semantic meaning beyond what's in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Sign a raw unsigned transaction') and distinguishes it from sibling tools like 'dex_tx_send_raw_transaction' for broadcasting and 'dex_wallet_sign_message' for message signing. It precisely identifies the resource (transaction) and verb (sign).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Requires user confirmation after transfer preview and checkin_token from terminal tx-checkin') and provides a clear alternative ('→ dex_tx_send_raw_transaction to broadcast'). It also mentions prerequisites (checkin_token) and the next step in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dex_withdraw_deposit_address (A)
Destructive

Fetch Gate exchange deposit address for the session-bound Gate UID. [write] amount must be ≥ min_deposit_amount; do not pass uid — it is resolved from the session.

Parameters (JSON Schema)
Name | Required | Description
amount | Yes | Human-readable amount (same units as the on-chain transfer); must be ≥ the exchange min_deposit_amount
mcp_token | No | MCP session token obtained from login (dex_auth_google_login_poll or dex_auth_gate_login_poll). MUST be provided in every call to this tool after login; omitting it will result in a 403.
chain_name | Yes | Chain name, e.g. BSC, ETH
coin_symbol | Yes | Coin symbol, e.g. USDT
token_address | No | Optional. ERC20 contract for gate-deposit; empty for native. If omitted, USDT is resolved from chain_name; other symbols may require this field.
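The two rules the description states (amount ≥ min_deposit_amount, never pass uid) can be enforced in a small argument builder. The helper is an illustration; min_deposit_amount is assumed to have been fetched from exchange metadata by the caller, and only the argument names come from the table above.

```python
from decimal import Decimal

# Hypothetical argument builder for dex_withdraw_deposit_address.
def build_deposit_address_args(amount, min_deposit_amount, chain_name,
                               coin_symbol, mcp_token, token_address=None):
    # Enforce the exchange minimum before the destructive call.
    if Decimal(str(amount)) < Decimal(str(min_deposit_amount)):
        raise ValueError("amount below exchange min_deposit_amount")
    args = {
        "amount": str(amount),       # human-readable, same units as on-chain
        "chain_name": chain_name,    # e.g. "BSC", "ETH"
        "coin_symbol": coin_symbol,  # e.g. "USDT"
        "mcp_token": mcp_token,
        # uid is intentionally absent: the server resolves it from the session
    }
    if token_address:
        args["token_address"] = token_address  # omit for native / resolved USDT
    return args

print(build_deposit_address_args("25.5", "10", "BSC", "USDT", "tok_example"))
```

Using Decimal rather than float avoids binary-float rounding when comparing human-readable amounts against the exchange minimum.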
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a destructive, non-idempotent write operation (readOnlyHint: false, destructiveHint: true). The description adds valuable context beyond annotations: it specifies that 'amount must be ≥ min_deposit_amount' (a constraint), mentions session-based UID resolution, and implies authentication requirements via 'session-bound'. However, it doesn't detail rate limits, error conditions, or what 'destructive' entails (e.g., if it modifies account state).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that efficiently conveys purpose, key constraints, and parameter guidance. Every clause adds value: 'Fetch... address' (action), 'for session-bound Gate UID' (context), 'amount must be ≥ min_deposit_amount' (constraint), 'do not pass uid' (parameter exclusion). No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive write tool with no output schema, the description is moderately complete. It covers the core action, authentication context (session-bound), and a key constraint. However, it lacks details on return values (address format, additional metadata), error cases, or the implications of 'destructive' behavior. Given the annotations provide safety hints but not full behavioral context, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing detailed parameter documentation. The description adds minimal semantics: it reinforces the 'amount' constraint (≥ min_deposit_amount) and clarifies that 'uid' is resolved from session (not a parameter). It doesn't explain parameter interactions (e.g., how token_address relates to chain_name/coin_symbol) beyond what the schema states. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch Gate exchange deposit address') and the resource (deposit address for the session-bound Gate UID). It distinguishes from siblings by focusing on deposit address retrieval rather than authentication, trading, or wallet operations. However, it doesn't explicitly differentiate from all possible deposit-related tools (though none are listed in siblings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: it's for fetching deposit addresses when an amount meets a minimum threshold, and the UID is resolved from session. It mentions 'do not pass uid' as a specific exclusion. However, it doesn't explicitly state when NOT to use this tool versus alternatives (e.g., for withdrawals or other address types), and no direct alternatives are named among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
