binance-mcp
Server Details
Binance - 340 tools for market data, order books, and trading pairs
- Status: Unhealthy
- Transport: Streamable HTTP
- Repository: junct-bot/binance-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
340 tools

delete_api_v3_open_orders (Quality: A)
Cancel all Open Orders on a Symbol (TRADE) — Cancels all active orders on a symbol. This includes OCO orders. Weight(IP): 1
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
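The signature, timestamp, and recvWindow parameters recur across every signed tool in this listing. A minimal sketch of the request-signing flow, assuming Binance's documented HMAC-SHA256 scheme for signed endpoints; `API_SECRET` and `sign_params` are illustrative placeholders, not part of this server's definition:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

API_SECRET = "your-secret-key"  # hypothetical placeholder

def sign_params(params: dict) -> dict:
    """Attach timestamp, recvWindow, and signature to a request."""
    signed = dict(params)
    signed["timestamp"] = int(time.time() * 1000)  # UTC timestamp in ms
    signed["recvWindow"] = 5000                    # must not exceed 60000
    query = urlencode(signed)
    # The signature is an HMAC-SHA256 digest of the query string,
    # keyed with the account's API secret.
    signed["signature"] = hmac.new(
        API_SECRET.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return signed

# e.g. sign_params({"symbol": "BNBUSDT"}) for the cancel-all call above
```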
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by noting 'This includes OCO orders' and rate limiting ('Weight(IP): 1'), but it fails to disclose the return value structure or the error behavior when no orders exist, and it never explicitly flags this as a destructive, authenticated mutation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the action and scope in the first clause. The prerequisite is clearly separated on its own line. No redundant or filler text; every sentence delivers specific operational value (scope, OCO inclusion, weight, prerequisites).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately covers invocation requirements and scope (including OCO edge case). However, it should ideally describe the return payload (e.g., list of cancelled orders) since no output schema exists to document this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 4 parameters (symbol, signature, timestamp, recvWindow) adequately documented. The description mentions 'Symbol' in the title but does not augment parameter semantics beyond what the schema already provides, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Cancel all Open Orders on a Symbol' with the specific verb 'Cancel', resource 'Open Orders', and scope 'on a Symbol'. It clearly distinguishes itself from sibling tool 'delete_api_v3_order' (single order cancellation) by emphasizing 'all' and mentioning OCO order inclusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite guidance: 'First call exchange info or symbol listing to discover valid trading pairs'. However, it lacks explicit contrast with alternatives like 'delete_api_v3_order' for single-order cancellation scenarios, though the functional difference is inferable from the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_api_v3_order (Quality: A)
Cancel Order (TRADE) — Cancel an active order. Either orderId or origClientOrderId must be sent. Weight(IP): 1 Returns: { symbol: string, origClientOrderId: string, orderId: number, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| origClientOrderId | No | Order id from client | |
| cancelRestrictions | No | query parameter: cancelRestrictions ("ONLY_NEW" \| "ONLY_PARTIALLY_FILLED") | |
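The constraint 'Either orderId or origClientOrderId must be sent' is not expressible in the schema itself. A small sketch of how a client might enforce it before calling; `build_cancel_params` is a hypothetical helper, not part of this server:

```python
def build_cancel_params(symbol: str,
                        order_id: int | None = None,
                        orig_client_order_id: str | None = None) -> dict:
    """Build params for delete_api_v3_order, enforcing the rule
    that at least one order identifier is present."""
    if order_id is None and orig_client_order_id is None:
        raise ValueError("Either orderId or origClientOrderId must be sent")
    params: dict = {"symbol": symbol}
    if order_id is not None:
        params["orderId"] = order_id
    if orig_client_order_id is not None:
        params["origClientOrderId"] = orig_client_order_id
    return params

# build_cancel_params("BNBUSDT", order_id=12345)
# -> {"symbol": "BNBUSDT", "orderId": 12345}
```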
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(IP): 1') and return structure (symbol, orderId fields), but lacks explicit disclosure of mutation semantics, idempotency guarantees, or error behaviors (e.g., what happens if the order is already filled).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately front-loaded with the action and efficiently structured. The Returns information is slightly dense inline, and the prerequisite is clearly separated, though the formatting could be cleaner for optimal readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 parameters and no output schema, the description compensates by providing return fields and prerequisites. However, for a trading mutation operation, it lacks coverage of edge cases (e.g., partial fills, already-cancelled orders) and explicit safety warnings that would be expected without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage, the description adds crucial semantic constraint information: 'Either `orderId` or `origClientOrderId` must be sent.' This XOR relationship between parameters is not captured in the schema structure and represents valuable added context for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Cancel Order (TRADE)' followed by 'Cancel an active order', providing a specific verb and resource. The requirement that 'Either `orderId` or `origClientOrderId` must be sent' clearly distinguishes this single-order cancellation from sibling bulk deletion tools like delete_api_v3_open_orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit PREREQUISITE section stating to first call exchange info to discover valid trading pairs, which provides clear workflow context. However, it does not explicitly contrast when to use this tool versus alternatives like delete_api_v3_order_list or delete_api_v3_open_orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_api_v3_order_list (Quality: A)
[DISCOVERY] Cancel OCO (TRADE) — Cancel an entire Order List Canceling an individual leg will cancel the entire OCO Weight(IP): 1 Returns: { orderListId: number, contingencyType: string, listStatusType: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| orderListId | No | Order list id | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| listClientOrderId | No | A unique Id for the entire orderList |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 1'), side effects (leg cancellation triggers full cancellation), and return structure (orderListId, contingencyType fields). Missing only explicit auth prerequisites or error modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the action, but the text is a single run-on sentence with internal metadata leakage ('[DISCOVERY]'). While information-dense, the lack of punctuation and structure hinders readability. Every sentence (fragment) earns its place, but formatting is poor.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter trading operation with no annotations and no structured output schema, the description adequately compensates by documenting the return object structure, rate limit weight, and behavioral side effects. Only minor gaps remain regarding error handling or explicit permission requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing descriptions for all 7 parameters (symbol, signature, orderListId, etc.). The description adds no additional parameter-specific semantics beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Cancel OCO (TRADE)' and 'Cancel an entire Order List', providing a specific verb and resource. It clearly distinguishes from sibling tool delete_api_v3_order (single order cancellation) by specifying OCO/list scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides behavioral context that 'Canceling an individual leg will cancel the entire OCO', implying the tool operates on the whole list. However, it lacks explicit guidance on when to choose this over delete_api_v3_order or delete_sapi_v1_margin_order_list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_api_v3_user_data_stream (Quality: C)
Close a ListenKey (USER_STREAM) — Close out a user data stream. Weight: 2
| Name | Required | Description | Default |
|---|---|---|---|
| listenKey | No | User websocket listen key |
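For context on the listenKey parameter: a key is obtained from post_api_v3_user_data_stream and later closed with this tool. A sketch of the open/close lifecycle, assuming Binance's public REST conventions (base URL and X-MBX-APIKEY header); the API key value is a placeholder:

```python
import requests

BASE_URL = "https://api.binance.com"          # public Binance REST base
HEADERS = {"X-MBX-APIKEY": "your-api-key"}    # hypothetical placeholder

def open_stream() -> str:
    """Create a user data stream and return its listenKey."""
    resp = requests.post(f"{BASE_URL}/api/v3/userDataStream", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["listenKey"]

def close_stream(listen_key: str) -> None:
    """Close the stream; mirrors delete_api_v3_user_data_stream."""
    resp = requests.delete(f"{BASE_URL}/api/v3/userDataStream",
                           headers=HEADERS,
                           params={"listenKey": listen_key})
    resp.raise_for_status()
```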
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only adds the rate limit 'Weight: 2' as behavioral context. It fails to disclose that this is a destructive operation (though implied by 'delete' in the name), whether the operation is idempotent, what happens to active WebSocket connections, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief (two clauses) and front-loaded with the critical action and resource. Minor redundancy exists between 'Close a ListenKey' and 'Close out a user data stream'. The 'Weight: 2' appendage is slightly disjointed but provides useful implementation context without excessive bloat.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no nested objects, no output schema), the description provides the minimum viable context: the action, target resource, and rate limit cost. However, it lacks explanation of success indicators or side effects (e.g., 'returns empty object on success' or 'terminates all active connections using this key'), which would be expected for a complete destructive operation description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'listenKey' parameter fully documented as 'User websocket listen key' with examples. The description mentions 'ListenKey' in the text, reinforcing the parameter's purpose, but adds no additional semantic context (e.g., format constraints, how to obtain the key) beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Close') and resource ('ListenKey'/'USER_STREAM'), identifying this as a user data stream termination tool. It implicitly distinguishes from sibling margin/isolated variants (delete_sapi_v1_user_data_stream_isolated) by specifying USER_STREAM, though it could explicitly clarify this is for Spot API v3 versus other versions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sapi_v1 variants (delete_sapi_v1_user_data_stream) or prerequisites (e.g., that a stream must first be created via post_api_v3_user_data_stream). The description only states what the tool does, not when to select it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_algo_futures_order (Quality: A)
Cancel Algo Order(TRADE) — Cancel an active order. - You need to enable Futures Trading Permission for the api key which requests this endpoint. - Base URL: https://api.binance.com Weight(IP): 1 Returns: { algoId: number, success: boolean, code: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| algoId | Yes | Eg. 14511 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses: rate limits ('Weight(IP): 1'), base URL, permission requirements, and return structure ('Returns: { algoId: number, success: boolean... }'). It notes the operation only works on 'active' orders.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with no wasted words. The content is front-loaded with the action statement. Minor deduction for the slightly cluttered dash-bullet formatting within a single string, though all included details (URL, weight, returns) are valuable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and structured output schema, the description compensates well by including the return object structure, rate limiting info, and API permission requirements. It adequately covers the behavioral contract for a deletion operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds no additional parameter semantics beyond the schema (e.g., it doesn't explain signature generation or that algoId refers to the order being cancelled), but meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Cancel Algo Order(TRADE)' providing a specific verb (Cancel) and resource (Algo Order). The tool name includes 'futures' and the description mentions 'Futures Trading Permission', effectively distinguishing it from the sibling 'delete_sapi_v1_algo_spot_order'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states the prerequisite: 'You need to enable Futures Trading Permission for the api key'. However, it lacks explicit guidance on when to use this versus the spot algo cancellation tool or regular order cancellation endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_algo_spot_order (Quality: A)
Cancel Algo Order — Cancel an open TWAP order Weight(IP): 1 Returns: { algoId: number, success: boolean, code: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| algoId | Yes | query parameter: algoId (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully documents rate limiting ('Weight(IP): 1') and discloses the return structure ('Returns: { algoId: number, success: boolean, code: number, ... }') which compensates for the lack of output schema. However, it omits details on error conditions or side effects (e.g., handling of partially filled orders).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the action ('Cancel Algo Order'), packing rate limit and return value information into a single sentence efficiently. While dense, every clause provides distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by documenting return values and rate limits. It correctly identifies the specific order type (TWAP) being operated on. It could be improved by mentioning that the algoId must be sourced from open orders queries, but is otherwise complete for a cancellation endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already defines all 4 parameters (algoId, timestamp, signature, recvWindow) adequately. The description does not add additional semantic context for parameters (e.g., explaining that algoId comes from open orders), meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Cancel Algo Order' and specifies 'Cancel an open TWAP order,' providing a specific verb (Cancel), resource (TWAP order), and scope. It distinguishes from the sibling delete_sapi_v1_algo_futures_order through the 'spot' path segment and from regular order cancellation tools (delete_api_v3_order) by specifying 'TWAP' and 'Algo.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Cancel an open TWAP order' implies usage context (when you have an open TWAP algo order to cancel), but lacks explicit guidance on prerequisites (e.g., obtaining algoId from get_sapi_v1_algo_spot_open_orders) or explicit differentiation from delete_sapi_v1_algo_futures_order beyond the path naming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_margin_isolated_account (Quality: A)
Disable Isolated Margin Account (TRADE) — Disable isolated margin account for a specific symbol. Each trading pair can only be deactivated once every 24 hours. Weight(UID): 300 Returns: { success: boolean, symbol: string }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses rate limit 'Weight(UID): 300', return format '{ success: boolean, symbol: string }', and business constraint 'once every 24 hours'. Minor gap: does not explain consequences of disabling (e.g., impact on open positions or ability to re-enable).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent information density with zero waste. Structure flows logically: action/title → core description → rate limit → return type → prerequisite. Every sentence provides actionable information (TRADE classification, weight, return shape, prerequisite workflow).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the medium complexity (margin operations require understanding valid pairs), the description is nearly complete. It compensates for the missing output schema by documenting the return format inline, and it includes rate limiting and prerequisites. It would earn a 5 if it stated explicitly whether open positions must be closed before disabling, or whether the account can later be re-enabled.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 4 parameters have descriptions with examples where appropriate). The description mentions 'for a specific symbol' which aligns with the symbol parameter, but adds no additional semantic detail beyond the comprehensive schema. Baseline 3 is appropriate when schema does heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with clear verb 'Disable', resource 'Isolated Margin Account', and scope 'for a specific symbol'. The (TRADE) tag and 'Disable' action clearly distinguish this from siblings like post_sapi_v1_margin_isolated_account (enable/create) and get_sapi_v1_margin_isolated_account (query).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong prerequisite guidance explicitly stating to 'First call exchange info or symbol listing to discover valid trading pairs'. Includes important constraint 'deactivated once every 24 hours'. Lacks explicit naming of alternative tools (e.g., the POST version to enable), but the action verb provides clear implied contrast.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_margin_open_orders (Quality: A)
Margin Account Cancel all Open Orders on a Symbol (TRADE) — - Cancels all active orders on a symbol for margin account. - This includes OCO orders. Weight(IP): 1
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
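Note the isIsolated flag takes the literal strings TRUE/FALSE rather than a boolean, which is easy to get wrong. A brief sketch of building the cancel-all parameters; `margin_cancel_all_params` is a hypothetical helper, and signing (as in the earlier sketch) still applies:

```python
def margin_cancel_all_params(symbol: str, isolated: bool = False) -> dict:
    """Params for delete_sapi_v1_margin_open_orders (before signing)."""
    return {
        "symbol": symbol,
        # The schema expects the literal strings "TRUE"/"FALSE",
        # not a JSON boolean.
        "isIsolated": "TRUE" if isolated else "FALSE",
    }

# margin_cancel_all_params("BNBUSDT", isolated=True)
# -> {"symbol": "BNBUSDT", "isIsolated": "TRUE"}
```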
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It discloses rate limiting ('Weight(IP): 1') and scope details ('includes OCO orders'), but lacks critical safety information for a destructive operation: no mention of irreversibility, what happens when no orders exist, or the response format/confirmation structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately dense and front-loaded with the operation type and scope. The prerequisite is clearly separated. Minor formatting irregularities (em-dash followed by hyphens) slightly detract from optimal structure, but every sentence provides distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive financial operation with 5 parameters and no output schema or annotations, the description covers the core operation, prerequisites, and rate limits, but leaves significant gaps: no description of success/failure responses, no behavioral details for edge cases (invalid symbol, empty order book), and no authentication context despite the signature requirement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds context that this operates on a 'symbol' for a 'margin account,' which supports understanding the isIsolated parameter, but does not elaborate on parameter syntax, signature generation, or timestamp formatting beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Cancels all active orders on a symbol for margin account,' specifying the verb (cancel), resource (all active/OCO orders), and scope (margin account, specific symbol). It distinguishes from siblings like delete_api_v3_open_orders (spot) and delete_sapi_v1_margin_order (single order) by emphasizing 'Margin Account' and 'all.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite: 'First call exchange info or symbol listing to discover valid trading pairs, then query market data.' This gives clear sequencing guidance. However, it lacks explicit guidance on when to prefer this over single-order cancellation (delete_sapi_v1_margin_order) or spot cancellation (delete_api_v3_open_orders), though the scope is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_margin_order (Quality: A)
Margin Account Cancel Order (TRADE) — Cancel an active order for margin account. Either orderId or origClientOrderId must be sent. Weight(IP): 10 Returns: { symbol: string, orderId: number, origClientOrderId: string, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| origClientOrderId | No | Order id from client |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit 'Weight(IP): 10' and provides a partial output structure 'Returns: { symbol: string, ... }' despite no formal output schema existing. It notes the '(TRADE)' permission level. It could improve by mentioning error conditions or whether cancellation is immediate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear header, constraint statement, weight/return information, and a dedicated PREREQUISITE section. Information density is high with minimal waste. The return type clause is slightly compressed but still readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter mutation tool with no annotations and no output schema, the description provides adequate context: it specifies the return structure, rate limits, prerequisites, and scope (margin vs spot). It appropriately compensates for the missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage (baseline 3), the description adds crucial semantic information: the XOR constraint that 'Either orderId or origClientOrderId must be sent.' This is vital because neither parameter is marked as required in the schema individually, yet one is mandatory for the operation to succeed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Cancel an active order for margin account,' providing a specific verb and resource. It clearly distinguishes this from sibling spot order tools (e.g., delete_api_v3_order) by explicitly mentioning 'margin account' and from bulk operations like delete_sapi_v1_margin_open_orders by specifying 'active order' (singular).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section explicitly states to 'First call exchange info or symbol listing to discover valid trading pairs,' providing clear prerequisite guidance. It also notes the constraint that 'Either orderId or origClientOrderId must be sent.' However, it does not explicitly name sibling alternatives (e.g., when to use delete_api_v3_order for spot vs margin).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_margin_order_list (Quality: A)
[DISCOVERY] Margin Account Cancel OCO (TRADE) — Cancel an entire Order List for a margin account - Canceling an individual leg will cancel the entire OCO - Either orderListId or listClientOrderId must be provided Weight(UID): 1 Returns: { orderListId: number, contingencyType: string, listStatusType: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 | |
| orderListId | No | Order list id | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| listClientOrderId | No | A unique Id for the entire orderList |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses rate limiting ('Weight(UID): 1'), return structure preview ('Returns: { orderListId... }'), and side effects (cascading cancellation of OCO legs).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and front-loaded with the action, but structurally cluttered with metadata tags ('[DISCOVERY]', 'TRADE', 'Weight(UID)') and uses hyphen separators rather than clean sentences, reducing readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for an 8-parameter mutation tool with no output schema; compensates by documenting return values, rate limits, and domain-specific OCO behavior that would otherwise be unknown to the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While input schema has 100% coverage, the description adds essential semantic logic not captured in the schema structure: the conditional requirement that either orderListId or listClientOrderId must be provided, rather than both being optional.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Cancel an entire Order List for a margin account' with specific domain context 'OCO' (One-Cancels-Other), clearly distinguishing it from sibling tools like delete_sapi_v1_margin_order (single orders) and delete_api_v3_order_list (spot trading).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage constraints including the critical requirement that 'Either `orderListId` or `listClientOrderId` must be provided' and notes the cascading behavior that 'Canceling an individual leg will cancel the entire OCO'. Lacks explicit comparison to alternative deletion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_sub_account_sub_account_api_ip_restriction_ip (Quality: B)
Delete IP List for a Sub-account API Key (For Master Account) — Weight(UID): 3000 Returns: { ipRestrict: string, ipList: string[], updateTime: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| ipAddress | No | Can be added in batches, separated by commas | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| thirdPartyName | No | third party IP list name | |
| subAccountApiKey | Yes | query parameter: subAccountApiKey (string) |
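The ipAddress parameter accepts batches 'separated by commas'. A one-line sketch of the expected format; `ip_list_param` is a hypothetical helper:

```python
def ip_list_param(addresses: list[str]) -> str:
    """Join IPs into the comma-separated batch format the schema expects."""
    return ",".join(addresses)

# ip_list_param(["1.2.3.4", "5.6.7.8"]) -> "1.2.3.4,5.6.7.8"
```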
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses rate limit weight (3000) and return structure (ipRestrict, ipList, updateTime), but lacks safety context for the destructive operation (e.g., irreversibility, what happens if IP not found).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficient but information-dense; the weight and return value are appended with dashes, making it slightly cluttered. Every clause earns its place, though the structure could be clearer.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 7-parameter deletion tool. The description compensates for the missing output schema by documenting return fields, but it lacks explicit authentication/authorization details beyond 'For Master Account' and the 'signature' parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description adds no semantic detail beyond the schema (e.g., no format guidance for signature, no clarification of ipAddress 'add' phrasing).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Delete) and resource (IP List for Sub-account API Key) with master account scope. Distinguishes from sibling GET/POST IP restriction tools. However, ambiguous whether it removes specific IPs or clears the entire list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives, prerequisites, or preconditions (e.g., checking current IP list before deletion).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_user_data_stream (Quality: B)
Close a ListenKey (USER_STREAM) — Close out a user data stream. Weight: 1
| Name | Required | Description | Default |
|---|---|---|---|
| listenKey | No | User websocket listen key |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable rate-limiting context ('Weight: 1') but omits authentication requirements, error conditions (e.g., invalid/expired key), and whether the closure is permanent or reversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and brief. However, 'Close a ListenKey' and 'Close out a user data stream' are redundant phrases stating the same action. The 'Weight: 1' suffix is concise technical metadata.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter delete operation with no output schema, the description adequately covers the core action and rate limit cost. However, it lacks critical context regarding which trading context this applies to (spot vs. isolated margin) compared to similar sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'ListenKey' which aligns with the parameter name, but adds no additional semantic detail regarding format requirements or validation rules beyond the schema's example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Close') and resource ('ListenKey'/'user data stream'), and includes the 'USER_STREAM' identifier to distinguish from order deletion siblings. However, it fails to differentiate from similar siblings like 'delete_api_v3_user_data_stream' or 'delete_sapi_v1_user_data_stream_isolated'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the isolated margin variant or API v3 variant. No prerequisites (e.g., having obtained the listenKey from a POST request) are mentioned, and no alternatives are suggested.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sapi_v1_user_data_stream_isolated (Quality: B)
Close a ListenKey (USER_STREAM) — Close out a user data stream. Weight: 1
| Name | Required | Description | Default |
|---|---|---|---|
| listenKey | No | User websocket listen key |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It provides useful rate-limiting information ('Weight: 1') and clarifies this terminates a resource. However, it lacks details on side effects (e.g., whether existing websocket connections are immediately severed, if the operation is idempotent, or authentication requirements).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences. Slight redundancy between 'Close a ListenKey' and 'Close out a user data stream,' but the inclusion of technical metadata (Weight) is efficiently appended. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter deletion tool without output schema, the description covers the essential operation. However, given the 'isolated' context implied by the tool name (isolated margin), the description is incomplete by failing to clarify this specific domain, which is critical for correct tool selection among the 10+ user data stream tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing a baseline of 3. The description mentions 'ListenKey' which maps to the parameter name, but adds no additional semantic context (such as where to obtain the key or that it is mandatory) beyond what the schema already provides ('User websocket listen key').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Close', 'Close out') and resource ('ListenKey', 'user data stream'), including the specific stream type 'USER_STREAM' in parentheses. However, it fails to distinguish from the sibling tool 'delete_sapi_v1_user_data_stream' (non-isolated version), leaving ambiguity about which margin context this applies to despite the 'isolated' in the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., the non-isolated version), nor any prerequisites mentioned. The description does not indicate that this should be called when disconnecting from a websocket or cleaning up resources.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_account (Quality: B)
Account Information (USER_DATA) — Get current account information. Weight(IP): 20 Returns: { makerCommission: number, takerCommission: number, buyerCommission: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It provides the rate limit weight (20) and partial return structure, which is valuable. However, it lacks explicit disclosure of authentication requirements, read-only safety, or error conditions beyond the implicit USER_DATA classification.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with no wasted words. The structure 'Category — Action. Constraint. Returns: {schema}' is efficient, though the 'Returns' clause is slightly cramped and uses '...' truncation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description partially compensates by listing key return fields (commissions). However, it truncates with '...' leaving the full response shape ambiguous. For a critical account endpoint, this is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score applies. The description does not add semantic context beyond the schema (e.g., it doesn't explain that 'signature' requires HMAC-SHA256 or that 'recvWindow' prevents replay attacks), but the schema is self-sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'current account information' and specifies key return fields (makerCommission, takerCommission, buyerCommission), which helps distinguish it from sibling account endpoints like get_sapi_v1_account_status or get_sapi_v1_account_snapshot. The '(USER_DATA)' tag clarifies the permission scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the many sibling account information endpoints (e.g., get_sapi_v1_account_info, get_sapi_v1_account_snapshot). The agent cannot determine selection criteria or prerequisites from the description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_account_commission (Quality: A)
Query Commission Rates (USER_DATA) — Get current account commission rates. Weight: 20 Returns: { symbol: string, standardCommission: { maker: string, taker: string, buyer: string, seller: string }, taxCommission: { maker: string, taker: string, buyer: string, seller: string }, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms |
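Since the Returns clause is the only output documentation (and is truncated with '...'), a cautious reading sketch: commission rates arrive as strings, so Decimal avoids float rounding. Only the fields shown in the description are touched; `maker_taker` is a hypothetical helper:

```python
from decimal import Decimal

def maker_taker(payload: dict) -> tuple[Decimal, Decimal]:
    """Extract maker/taker rates from the standardCommission object."""
    std = payload["standardCommission"]
    # Rates are returned as strings; Decimal avoids float rounding.
    return Decimal(std["maker"]), Decimal(std["taker"])

# maker_taker({"symbol": "BNBUSDT",
#              "standardCommission": {"maker": "0.00100000",
#                                     "taker": "0.00100000",
#                                     "buyer": "0", "seller": "0"}})
# -> (Decimal('0.00100000'), Decimal('0.00100000'))
```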
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit ('Weight: 20') and the return structure via inline JSON, which adds value. However, it never explicitly states that the call requires authentication (despite the USER_DATA tag) or that it is read-only, and it does not mention error conditions or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense without redundancy. Front-loads purpose, includes rate limit weight, return structure, and prerequisites efficiently. The inline JSON return example is slightly dense but functionally valuable given lack of output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking a formal output schema, the description manually documents the return structure, including the nested commission objects (standardCommission, taxCommission). Prerequisites are documented. This is sufficient for a 3-parameter authenticated read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all 3 parameters (symbol, timestamp, signature) fully described. The description adds no additional parameter context beyond the schema, warranting the baseline score of 3 for complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Query' and resource 'Commission Rates', clearly identifying this as a commission retrieval tool. The '(USER_DATA)' tag and distinction from trading/order management siblings (delete_*, post_*order*) makes its scope unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit PREREQUISITE workflow: 'First call exchange info or symbol listing to discover valid trading pairs, then query market data.' This establishes clear sequencing. However, lacks explicit 'when not to use' guidance versus similar account info tools like get_api_v3_account.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_agg_tradesAInspect
Compressed/Aggregate Trades List — Get compressed, aggregate trades. Trades that fill at the time, from the same order, with the same price will have the quantity aggregated. - If fromId, startTime, and endTime are not sent, the most recent aggregate trades will be returned. - Note that a trade with the following values is a duplicate aggregate trade and marked as invalid: p = '0' (price), q = '0' (qty), f = -1 (first_trade_id), l = -1 (last_trade_id). Weight(IP): 2
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| fromId | No | Trade id to fetch from. Default gets most recent trades. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| startTime | No | UTC timestamp in ms |
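The invalid-trade markers called out above are easy to encode as a local filter. A minimal sketch; the field names mirror the sentinels in the description, and everything else about the response shape is an assumption:

```python
def is_invalid_agg_trade(trade: dict) -> bool:
    """True for duplicate aggregate trades marked invalid, per the
    documented sentinels: p='0', q='0', f=-1, l=-1."""
    return (trade.get("p") == "0" and trade.get("q") == "0"
            and trade.get("f") == -1 and trade.get("l") == -1)

def clean_agg_trades(trades: list[dict]) -> list[dict]:
    """Drop invalid duplicates before any downstream analysis."""
    return [t for t in trades if not is_invalid_agg_trade(t)]
```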
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description carries its substantial burden effectively. It discloses the aggregation algorithm, the default temporal behavior ('most recent aggregate trades'), rate limiting ('Weight(IP): 2'), and critical data-quality indicators (the invalid-trade markers p='0', q='0', f=-1, l=-1). Missing: explicit auth requirements and pagination behavior beyond the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense without waste. Uses structural markers (dashes for list items, 'PREREQUISITE:' for workflow) effectively. Technical details (invalid trade field values) are essential for correct interpretation. Slightly verbose format but every clause serves agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter market data endpoint with no output schema, the description adequately covers input behavior and partially documents output structure via the invalid-trade field explanations. It lacks explicit return type documentation, but the invalid-marker explanation (the p, q, f, l fields) provides partial output-schema context. Sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds valuable interaction logic: explaining that fromId, startTime, and endTime are optional and that omitting all three returns the most recent trades. This temporal filtering context exceeds raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get compressed, aggregate trades' clearly defines the action and resource. Explicitly distinguishes from regular trades by explaining the aggregation logic ('Trades that fill at the time, from the same order, with the same price will have the quantity aggregated'), making it clear this differs from sibling get_api_v3_trades which returns individual trades.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong prerequisite guidance ('First call exchange info or symbol listing to discover valid trading pairs') establishes necessary workflow. Explains default behavior when temporal parameters are omitted. Minor gap: doesn't explicitly contrast when to choose this over get_api_v3_trades or get_api_v3_historical_trades, though the aggregation focus implies use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_all_order_listBInspect
[DISCOVERY] Query all OCO (USER_DATA) — Retrieves all OCO based on provided optional parameters. Weight(IP): 20
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| fromId | No | Trade id to fetch from. Default gets most recent trades. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the rate limit weight (Weight(IP): 20) and security context (USER_DATA), but lacks details on pagination behavior, data retention limits, or what fields are returned in the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description mixes metadata tags ([DISCOVERY], USER_DATA, Weight(IP)) with the functional description in a single sentence. While brief, it is cluttered, and front-loading the '[DISCOVERY]' tag reduces immediate clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters and no output schema, the description adequately identifies the OCO-specific resource but lacks explanation of return structure, pagination, or the relationship between startTime/endTime parameters. It meets minimum viability but leaves gaps for a data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline score of 3. The description mentions filtering 'based on provided optional parameters' but adds no semantic meaning beyond what the schema already documents for the 7 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool retrieves OCO (One Cancels Other) orders specifically, distinguishing it from generic order queries. However, it does not differentiate from similar sibling endpoints like 'get_api_v3_order_list' or 'get_api_v3_open_order_list'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'USER_DATA' implying authentication is required, but does not state when to query 'all' OCOs versus 'open' OCOs (get_api_v3_open_order_list) or specific order lists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_all_ordersAInspect
All Orders (USER_DATA) — Get all account orders; active, canceled, or filled. - If orderId is set, it will get orders >= that orderId. Otherwise most recent orders are returned. - For some historical orders cummulativeQuoteQty will be < 0, meaning the data is not available at this time. - If startTime and/or endTime provided, orderId is not required. Weight(IP): 20
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
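The 'orders >= that orderId' semantics suggest a simple cursor loop for paging a full history. A sketch under two assumptions: call_tool stands in for however your MCP client invokes a tool (signing omitted), and each returned order carries an orderId field:

```python
def page_all_orders(call_tool, symbol: str, page_size: int = 1000):
    """Page through get_api_v3_all_orders with orderId cursors.

    Each page starts just past the last orderId seen; a short page
    signals the end of the history.
    """
    order_id = 0  # assumes order ids are non-negative
    while True:
        batch = call_tool("get_api_v3_all_orders", symbol=symbol,
                          orderId=order_id, limit=page_size)
        yield from batch
        if len(batch) < page_size:
            break
        order_id = batch[-1]["orderId"] + 1
```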
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and delivers: explains pagination behavior ('orders >= that orderId' vs 'most recent'), warns about data availability ('cummulativeQuoteQty will be < 0'), and notes the USER_DATA security context. Missing explicit read-only declaration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with no wasted sentences. Every line serves a purpose: endpoint identification, core function, pagination logic, data caveats, parameter constraints, rate limits, and prerequisites. Minor formatting quirks ('.. -' bullets) don't significantly impede readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter tool with no output schema and no annotations, the description covers input behavior and prerequisites well. However, it lacks any description of the return structure or response format, which would help an agent understand what data structure to expect back.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing a baseline of 3. The description adds value by explaining the logical relationship between orderId and time-based parameters, but doesn't need to compensate for missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get all account orders; active, canceled, or filled' with specific verb and resource. It distinguishes itself from sibling get_api_v3_open_orders by explicitly mentioning all order statuses (active, canceled, filled), making the scope unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite ('First call exchange info or symbol listing'), explains parameter dependencies ('If startTime and/or endTime provided, orderId is not required'), and includes rate limiting info ('Weight(IP): 20'). Could be improved by explicitly contrasting when to use get_api_v3_open_orders instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_avg_priceAInspect
Current Average Price — Current average price for a symbol. Weight(IP): 2 Returns: { mins: number, price: string, closeTime: number }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds valuable behavioral context by documenting 'Weight(IP): 2' for rate limiting and the return structure '{ mins: number, price: string, closeTime: number }'. However, it omits safety profile (read-only status) and authentication requirements that annotations would typically provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the purpose ('Current Average Price'). Information is dense with no wasted words, though the line break before PREREQUISITE creates slight structural fragmentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description documents the return object structure with field types. Combined with the rate limit weight and usage prerequisite, this provides sufficient context for a single-parameter read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Trading symbol, e.g. BNBUSDT'), the schema fully documents the single parameter. The description references 'for a symbol' but does not add semantic meaning beyond the schema's definition, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Current average price for a symbol' with specific verb and resource. It distinguishes from sibling tools like get_api_v3_ticker_price or get_api_v3_ticker_24hr by specifying 'Average Price' rather than current price or 24-hour statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit prerequisite guidance: 'First call exchange info or symbol listing to discover valid trading pairs, then query market data.' This establishes the workflow context, though it does not explicitly name alternative tools for different price types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_depthBInspect
Order Book — Weight(IP) by limit: 1-100 → 5; 101-500 → 25; 501-1000 → 50; 1001-5000 → 250. Returns: { lastUpdateId: number, bids: string[][], asks: string[][] }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | If limit > 5000, then the response will truncate to 5000 | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
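Since the request weight jumps by tier, computing the cost before calling can help with rate budgeting. A small helper transcribed from the weight tiers in the description (the function name is ours):

```python
def depth_request_weight(limit: int) -> int:
    """IP weight for get_api_v3_depth, per the documented tiers."""
    if limit <= 100:
        return 5
    if limit <= 500:
        return 25
    if limit <= 1000:
        return 50
    return 250  # 1001-5000; limits above 5000 truncate the response
```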
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant burden by disclosing IP rate limiting weights (5-250 based on limit ranges) and the exact return JSON structure. This reveals critical operational costs and response format not available in structured fields. Missing only auth requirements and caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The content is information-dense and front-loaded with the resource name, but the formatting mixes markdown tables with inline JSON return structures in a way that reduces readability for parsing. While no sentences are wasted, the structural formatting is suboptimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description successfully compensates by providing rate limit costs, return type documentation, and prerequisite workflow. For a public market data endpoint, this covers the essential operational context needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for both 'symbol' and 'limit'. The description adds the rate limit weight table which contextualizes the cost implications of different limit values, but does not add semantic detail about the symbol parameter beyond the schema's 'Trading symbol' description. Baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'Order Book' and documents the return structure (bids/asks arrays), but uses terse phrasing rather than a clear verb statement. It does not explicitly distinguish this from sibling market data tools like get_api_v3_ticker or get_api_v3_trades, leaving ambiguity about when to select this specific endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear prerequisite workflow ('First call exchange info or symbol listing to discover valid trading pairs, then query market data'), which establishes necessary sequencing. However, it lacks explicit 'when-not-to-use' guidance or comparisons to alternative market data endpoints that might be more appropriate for different use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_exchange_infoBInspect
[DISCOVERY] Exchange Information — Current exchange trading rules and symbol information - If any symbol provided in either symbol or symbols do not exist, the endpoint will throw an error. - All parameters are optional. - permissions can support single or multiple values (e.g. SPOT, ["MARGIN","LEVERAGED"]) - If permissions parameter not provided, the default values will be ["SPOT","MARGIN","LEVERAGED"]. - To display all permissions you need to specify them explicitly. (e.g. SPOT, MARGIN,...) Examples of Symbol Permissions Interpretation from the Response: - [["A","B"]] means you may place an order if you
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) | |
| permissions | No | query parameter: permissions (string) |
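The PREREQUISITE workflow repeated across this listing (discover valid pairs first) can be made concrete here. A sketch that assumes the conventional Binance response shape, where exchange info contains a symbols array of objects with a symbol field:

```python
def valid_symbols(exchange_info: dict, wanted: set[str]) -> set[str]:
    """Intersect desired trading pairs with those actually listed,
    since passing an unknown symbol makes the endpoint throw."""
    listed = {s["symbol"] for s in exchange_info.get("symbols", [])}
    return wanted & listed

# e.g. valid_symbols(info, {"BNBUSDT", "NOTREAL"}) -> {"BNBUSDT"}
```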
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses error behavior (throws error for invalid symbols) and default parameter values, but omits rate limiting, caching behavior, or safety characteristics that would be essential for a read-only discovery endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains useful information but suffers from poor structure—bullet points are mixed into a paragraph format with dashes, and the text ends abruptly mid-sentence ('means you may place an order if you'). The front-loading is adequate with the [DISCOVERY] tag, but the truncation significantly hurts readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that no output schema exists and the description is cut off mid-explanation of response interpretation ('Examples of Symbol Permissions Interpretation'), the definition is incomplete. It attempts to document return value semantics but fails to finish the explanation, leaving agents without full context for the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is adequate, but the description adds significant value by specifying default permission values (["SPOT","MARGIN","LEVERAGED"]), format examples for single vs multiple values, and the constraint that symbols are optional. It also explains the semantics of the permissions parameter beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Current exchange trading rules and symbol information' with a [DISCOVERY] tag, providing a specific verb and resource. However, it doesn't explicitly differentiate from sibling market data tools like get_api_v3_ticker or get_api_v3_depth, though the 'trading rules' focus provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions specific constraints (throws error if symbol doesn't exist, all parameters are optional) and default permission values, which helps with usage. However, it lacks explicit guidance on when to use this versus other market data endpoints or prerequisites for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_historical_tradesAInspect
Old Trade Lookup — Get older market trades. Weight(IP): 10
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| fromId | No | Trade id to fetch from. Default gets most recent trades. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
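Because fromId defaults to the most recent trades, reaching older history means rewinding the cursor manually. A hedged sketch; call_tool is a placeholder for your MCP client, and the assumption that each trade exposes an id field follows Binance convention rather than this listing:

```python
def walk_older_trades(call_tool, symbol: str,
                      batches: int = 3, limit: int = 1000):
    """Yield progressively older trades from get_api_v3_historical_trades.

    The first call omits fromId (most recent trades); later calls rewind
    fromId by `limit` from the oldest trade id seen so far.
    """
    from_id = None
    for _ in range(batches):
        kwargs = {"symbol": symbol, "limit": limit}
        if from_id is not None:
            kwargs["fromId"] = from_id
        batch = call_tool("get_api_v3_historical_trades", **kwargs)
        if not batch:
            break
        yield from batch
        oldest = min(t["id"] for t in batch)
        if oldest == 0:
            break
        from_id = max(oldest - limit, 0)
```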
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It adds 'Weight(IP): 10' for rate limiting context and specifies 'older' to define data scope, but omits safety profile (read-only confirmation), pagination cursor behavior details, and error handling expectations for this historical data endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured in two efficient sections: the first sentence front-loads purpose and weight, while the second provides prerequisite context. No sentences are wasted on tautology or redundant schema information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple flat schema (3 parameters) and lack of output schema or annotations, the description adequately covers prerequisites and rate limits. However, it lacks safety disclosures and detailed behavioral constraints (e.g., how 'old' the data can be) that would fully contextualize the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting symbol, limit defaults/maximums, and fromId behavior. The description adds minimal parameter semantics beyond the prerequisite's implicit reference to valid symbols, meeting the baseline for well-schematized tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get older market trades' with the prefix 'Old Trade Lookup', providing a specific verb and resource. It clearly distinguishes this tool from the sibling get_api_v3_trades (recent trades) and get_api_v3_my_trades (user-specific trades) by specifying 'older' market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section explicitly states 'First call exchange info or symbol listing to discover valid trading pairs,' providing clear workflow sequencing. However, it does not explicitly name alternative tools (e.g., 'use get_api_v3_trades for recent data instead') or specify when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_klinesAInspect
Kline/Candlestick Data — Kline/candlestick bars for a symbol. Klines are uniquely identified by their open time. - If startTime and endTime are not sent, the most recent klines are returned. Weight(IP): 2
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| interval | Yes | kline intervals | |
| timeZone | No | Default: 0 (UTC) | |
| startTime | No | UTC timestamp in ms |
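With at most 1000 bars per call, long backfills have to be split into windows sized by the interval. A sketch with a deliberately partial, illustrative interval map:

```python
INTERVAL_MS = {"1m": 60_000, "1h": 3_600_000, "1d": 86_400_000}

def kline_windows(start_ms: int, end_ms: int,
                  interval: str, limit: int = 1000):
    """Yield (startTime, endTime) windows covering at most `limit`
    bars each for get_api_v3_klines."""
    step = INTERVAL_MS[interval] * limit
    t = start_ms
    while t < end_ms:
        yield t, min(t + step - 1, end_ms)
        t += step
```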
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses rate limiting ('Weight(IP): 2') and default temporal behavior, but lacks safety classification (read-only vs. write) and does not describe return format, pagination, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two distinct sections (functionality/description and prerequisites). The em-dash formatting is slightly unconventional but does not impede clarity. Every sentence provides actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with full schema coverage, the description adequately covers inputs and prerequisites. However, with no output schema provided, the description should ideally describe the kline data structure (OHLCV fields). Missing explicit safety/authorization context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by explaining the conditional logic for startTime/endTime ('If not sent, the most recent klines are returned') and providing the IP weight context, which aids in parameter usage planning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Kline/candlestick bars for a symbol' and explains unique identification by open time. However, it does not distinguish from the sibling tool 'get_api_v3_ui_klines', leaving ambiguity about which kline endpoint to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section explicitly states to 'First call exchange info or symbol listing to discover valid trading pairs,' providing clear workflow guidance. It also explains default behavior when time parameters are omitted. Missing explicit mention of alternatives or when NOT to use this vs. siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_my_allocationsAInspect
Query Allocations (USER_DATA) — Retrieves allocations resulting from SOR order placement. Weight: 20 Supported parameter combinations: symbol → allocations from oldest to newest; symbol + startTime → oldest allocations since startTime; symbol + endTime → newest allocations until endTime; symbol + startTime + endTime → allocations within the time range; symbol + fromAllocationId → allocations by allocation ID; symbol + orderId → allocations related to an order starting with oldest; symbol + orderId + fromAllocationId → allocations related to an order by allocation ID. Note: The time between star
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| fromAllocationId | No | query parameter: fromAllocationId (number) |
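At Weight: 20 per attempt, validating the filter combination locally before calling is cheap insurance. A hypothetical checker transcribed from the combination list in the description (symbol, signature, and timestamp are assumed to be handled separately):

```python
ALLOWED_ALLOCATION_COMBOS = {
    frozenset(),  # symbol alone
    frozenset({"startTime"}),
    frozenset({"endTime"}),
    frozenset({"startTime", "endTime"}),
    frozenset({"fromAllocationId"}),
    frozenset({"orderId"}),
    frozenset({"orderId", "fromAllocationId"}),
}

def check_allocation_filters(**filters) -> None:
    """Raise before the call if the optional filters do not form one
    of the documented combinations."""
    used = frozenset(k for k, v in filters.items() if v is not None)
    if used not in ALLOWED_ALLOCATION_COMBOS:
        raise ValueError(f"unsupported combination: {sorted(used)}")
```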
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses 'Weight: 20' (rate limiting), 'USER_DATA' (authentication scope), and explains result ordering logic (oldest to newest, filtering behavior). However, it omits error behaviors, pagination cursor details, and any response structure description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded and dense, but the text appears truncated ('Note: The time between star'). The parameter combinations section is somewhat wall-of-text formatted but parseable. Contains minimal redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters and no output schema, the description adequately covers query construction and prerequisites. However, it does not describe the return values, and the truncation leaves a behavioral note incomplete, slightly undermining completeness for a complex filtered query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds significant value by detailing parameter interaction semantics (e.g., 'symbol + endTime newest allocations until endTime'), explaining how combinations affect result sets beyond individual parameter definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Retrieves allocations resulting from SOR order placement' — specific verb (retrieves), specific resource (SOR allocations), and distinguishes from siblings like get_api_v3_my_trades or get_api_v3_order by specifying 'allocations' from 'SOR' orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit PREREQUISITE to first call exchange info for valid symbols. Lists specific supported parameter combinations (e.g., 'symbol + startTime' vs 'symbol + orderId') that guide the agent on valid query construction. Lacks explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_my_prevented_matchesAInspect
Query Prevented Matches — Displays the list of orders that were expired because of STP. For additional information on what a Prevented match is, as well as Self Trade Prevention (STP), please refer to our STP FAQ page. These are the combinations supported: symbol + preventedMatchId; symbol + orderId; symbol + orderId + fromPreventedMatchId (limit will default to 500); symbol + orderId + fromPreventedMatchId + limit. Weight(IP) by case: if symbol is invalid → 2; querying by preventedMatchId → 2; querying by orderId → 20
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| preventedMatchId | No | query parameter: preventedMatchId (number) | |
| fromPreventedMatchId | No | query parameter: fromPreventedMatchId (number) |
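Since the weight differs by an order of magnitude between query patterns, estimating it up front helps with budgeting. A tiny helper transcribed from the weight cases in the description:

```python
def prevented_matches_weight(prevented_match_id: int | None = None,
                             order_id: int | None = None) -> int:
    """Expected IP weight: 2 when querying by preventedMatchId,
    20 when querying by orderId."""
    if prevented_match_id is not None:
        return 2
    if order_id is not None:
        return 20
    raise ValueError("query by preventedMatchId or orderId")
```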
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of disclosing behavioral traits. It successfully documents rate limiting costs, default pagination values, and read-only nature (implied by 'Query'). Minor gap: does not explicitly confirm idempotency or elaborate on error responses beyond invalid symbol weight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and front-loaded with purpose. The parameter combinations and prerequisites are structured logically. Minor deduction for formatting density (asterisk lists without markdown spacing) and the Weight section being somewhat compressed, though all content is relevant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description explains the conceptual return (list of expired orders) and references external STP documentation for domain context. Covers prerequisites and rate limits thoroughly. Minor gap: does not describe the structure/fields of the returned order objects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage establishing a baseline of 3. The description adds significant value by explaining valid parameter combinations and dependencies (e.g., that limit defaults to 500 only when using fromPreventedMatchId with orderId), which is semantic information not present in individual parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it 'Displays the list of orders that were expired because of STP' and defines the resource (Prevented Matches/Self Trade Prevention). It effectively distinguishes from sibling tools like get_api_v3_my_trades or get_api_v3_all_orders by specifying the STP-specific scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists valid parameter combinations (symbol + preventedMatchId, symbol + orderId, etc.), states default behaviors (limit defaults to 500), and includes a PREREQUISITE section advising to first call exchange info. Also documents rate limiting weights (Weight(IP): 2 vs 20) for different query patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_my_tradesAInspect
Account Trade List (USER_DATA) — Get trades for a specific account and symbol. If fromId is set, it will get id >= that fromId. Otherwise most recent orders are returned. The time between startTime and endTime can't be longer than 24 hours. These are the supported combinations of all parameters: symbol; symbol + orderId; symbol + startTime; symbol + endTime; symbol + fromId; symbol + startTime + endTime; symbol + orderId + fromId. Weight(IP): 20
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| fromId | No | Trade id to fetch from. Default gets most recent trades. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | This can only be used in combination with symbol. | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
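The 24-hour ceiling on the startTime/endTime span means longer audits must be chunked. A minimal sketch that yields compliant windows in milliseconds:

```python
DAY_MS = 24 * 60 * 60 * 1000  # maximum documented startTime/endTime span

def trade_windows(start_ms: int, end_ms: int):
    """Yield (startTime, endTime) pairs no wider than 24 hours for
    get_api_v3_my_trades backfills over longer ranges."""
    t = start_ms
    while t < end_ms:
        yield t, min(t + DAY_MS, end_ms)
        t += DAY_MS
```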
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 20') and pagination behavior (fromId logic), but fails to explicitly state this is a read-only operation, describe error scenarios, or mention authentication requirements beyond the signature parameter in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly structured. The parameter combinations run together without clear separators ('symbol symbol + orderId symbol + startTime'), making them hard to read. The prerequisite section is clearly demarcated, but the main description mixes behavioral details, constraints, and rate limits without clear paragraph breaks.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter tool with no output schema and no annotations, the description covers essential operational constraints (rate limits, time windows, parameter dependencies) and prerequisites. It appropriately focuses on input constraints given the absence of output schema, though it could briefly mention the return data structure (list of trades).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value by documenting valid parameter combinations (e.g., 'symbol + orderId', 'symbol + startTime + endTime') and temporal constraints (24-hour max window) that are not evident from the schema alone, though the formatting of these combinations is difficult to parse.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'trades for a specific account and symbol' with specific verb and resource, and the 'USER_DATA' label distinguishes it from public trade endpoints like get_api_v3_trades. However, it ambiguously mentions 'orders' ('most recent orders are returned') when this tool specifically retrieves trades/fills, potentially confusing it with sibling order query tools like get_api_v3_all_orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite ('First call exchange info or symbol listing') and critical constraints including the 24-hour time window limit, valid parameter combinations, and rate limit weight (20). Deducting one point because it doesn't explicitly distinguish when to use this versus margin trades endpoint (get_sapi_v1_margin_my_trades) or clarify the difference between querying trades versus orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_open_order_listCInspect
[DISCOVERY] Query Open OCO (USER_DATA) — Weight(IP): 6
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable rate-limiting context ('Weight(IP): 6'), but fails to disclose safety characteristics (read-only vs. destructive), return value structure, or pagination behavior that would be essential for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (single sentence/fragment) and front-loaded with the action. The '[DISCOVERY]' prefix appears to be metadata noise, but the core content efficiently conveys the endpoint purpose and rate limit without redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, and the presence of numerous confusingly similar siblings (get_api_v3_open_orders, get_api_v3_order_list, etc.), the description is incomplete. It lacks explanation of return values, pagination, and crucially, how 'Open OCO' differs from other open order types.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow are all documented). The description adds no additional parameter semantics, but the rubric baseline of 3 applies for high schema coverage scenarios where the schema itself carries the descriptive load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action 'Query' and resource 'Open OCO', and identifies the endpoint as 'USER_DATA'. However, it does not distinguish this tool from siblings like 'get_api_v3_open_orders' or 'get_api_v3_order_list', nor does it explain what 'OCO' (One-Cancels-Other) means for users unfamiliar with trading terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '(USER_DATA)' tag implicitly signals that authentication is required, but the description provides no explicit guidance on when to use this tool versus the many sibling order-querying tools (e.g., get_api_v3_all_order_list, get_api_v3_open_orders). No prerequisites or alternative selection criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_open_ordersAInspect
Current Open Orders (USER_DATA) — Get all open orders on a symbol. Careful when accessing this with no symbol. Weight(IP): 6 for a single symbol; 80 when the symbol parameter is omitted.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
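Given the 6-versus-80 weight split, surfacing the cost of omitting symbol at call-construction time may be worthwhile. A hypothetical argument builder:

```python
import warnings

def open_orders_args(symbol: str | None) -> dict:
    """Arguments for get_api_v3_open_orders; omitting symbol queries
    every symbol at IP weight 80 instead of 6."""
    if symbol is None:
        warnings.warn("no symbol given: request weight is 80, not 6")
        return {}
    return {"symbol": symbol}
```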
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses API weight costs (6 vs 80) and implies authentication via 'USER_DATA' and the required signature/timestamp parameters. However, it lacks disclosure of pagination behavior, return data structure, caching, or error conditions that would help predict runtime behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste: sentence 1 identifies the operation and data type, sentence 2 provides the critical usage warning, and sentence 3 details the cost structure. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters with 100% schema coverage but no output schema and no annotations, the description adequately covers the primary usage concern (cost/weight implications). However, it lacks documentation of what the tool returns (array of orders, pagination, etc.), which would be helpful since no output schema exists to describe the response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing descriptions for all 4 parameters including examples for symbol and recvWindow. The description text adds implicit confirmation that 'symbol' is optional ('when the symbol parameter is omitted'), but otherwise doesn't add syntax details or usage examples beyond what the schema already provides. Baseline 3 is appropriate given the schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get all open orders on a symbol' with the specific verb 'Get' and resource 'open orders'. It distinguishes from siblings like get_api_v3_order (specific order lookup) and get_api_v3_all_orders (historical orders) by emphasizing 'Current' and 'Open' orders. The USER_DATA tag further clarifies the access scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit cost guidance ('Weight(IP): 6 for a single symbol; 80 when symbol is omitted') and warns 'Careful when accessing this with no symbol'. This effectively guides when to use the symbol parameter versus omitting it, though it doesn't explicitly name alternative tools for different query needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_orderAInspect
Query Order (USER_DATA) — Check an order's status. - Either orderId or origClientOrderId must be sent. - For some historical orders cummulativeQuoteQty will be < 0, meaning the data is not available at this time. Weight(IP): 4 Returns: { symbol: string, orderId: number, orderListId: number, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| origClientOrderId | No | Order id from client |
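The 'either orderId or origClientOrderId must be sent' rule is a cross-parameter constraint the JSON schema cannot express, so enforcing it client-side avoids a wasted signed call. A sketch (the helper name is ours):

```python
def order_lookup_args(symbol: str, order_id: int | None = None,
                      orig_client_order_id: str | None = None) -> dict:
    """Build get_api_v3_order arguments, enforcing the documented rule
    that at least one order identifier must be sent."""
    if order_id is None and orig_client_order_id is None:
        raise ValueError("either orderId or origClientOrderId must be sent")
    args: dict = {"symbol": symbol}
    if order_id is not None:
        args["orderId"] = order_id
    if orig_client_order_id is not None:
        args["origClientOrderId"] = orig_client_order_id
    return args
```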
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds: it discloses 'Weight(IP): 4' for rate limiting, warns about historical data availability ('cummulativeQuoteQty will be < 0'), indicates authentication scope via '(USER_DATA)', and sketches return fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with zero filler, though slightly run-on. The structure logically flows from purpose → constraints → rate limit → returns → prerequisites. Every sentence earns its place, though bullet formatting would improve scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a query tool: covers authentication context, rate limiting, return structure hint, and prerequisite workflow. Lacks explicit error scenario documentation, but this is sufficient given the schema completeness and tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds crucial semantic constraints: the XOR requirement between orderId and origClientOrderId ('Either...must be sent'), which JSON schema cannot express. It also contextualizes signature/timestamp within USER_DATA authentication.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verbs ('Query', 'Check') and the resource ('Order'), clearly indicating this retrieves a single order's status. The '(USER_DATA)' tag immediately signals authentication requirements, distinguishing it from public market data tools like get_api_v3_ticker.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance via 'PREREQUISITE: First call exchange info or symbol listing...' and critical parameter logic 'Either `orderId` or `origClientOrderId` must be sent'. Deducting one point as it doesn't explicitly name sibling alternatives (e.g., 'use get_api_v3_all_orders to list multiple orders').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_order_listBInspect
[DISCOVERY] Query OCO (USER_DATA) — Retrieves a specific OCO based on provided optional parameters. Weight(IP): 4 Returns: { orderListId: number, contingencyType: string, listStatusType: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| orderListId | No | Order list id | |
| origClientOrderId | No | Order id from client |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit 'Weight(IP): 4' and previews the return structure with field names and types. However, it lacks details on error behavior (e.g., what happens if the OCO doesn't exist), authentication specifics, or whether the parameters are mutually exclusive filters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense and information-heavy but cluttered with metadata tags like '[DISCOVERY]' and '(USER_DATA)' that reduce readability. While it efficiently packs rate limits, return types, and purpose into one sentence, the structure could be improved by front-loading the core action and separating metadata.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 simple parameters and no nested objects, the description is minimally adequate. It covers the basic retrieval purpose, rate limiting, and return structure, but leaves gaps in error handling, parameter interaction logic, and sibling differentiation that would be necessary for robust agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately 3. The description adds minimal semantic value beyond the schema, merely noting that parameters are 'optional' (which aligns with only timestamp/signature being required). It does not clarify the relationship between orderListId and origClientOrderId or provide usage examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Retrieves a specific OCO' with a specific verb and resource (OCO order list). However, it does not differentiate from similar sibling tools like get_api_v3_all_order_list or get_api_v3_open_order_list, which also query OCO orders but with different scope (all vs. open vs. specific). The '[DISCOVERY]' prefix appears to be metadata noise that doesn't aid comprehension.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_api_v3_all_order_list or get_api_v3_open_order_list. It does not indicate whether to use orderListId or origClientOrderId (or both), nor does it mention prerequisites like authentication requirements beyond the implicit 'USER_DATA' tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ping (A)
Test Connectivity — Test connectivity to the Rest API. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
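A connectivity check is trivial to sketch; assuming the tool maps to Binance's public GET /api/v3/ping route (no parameters, no authentication), the documented success response is an empty JSON object:

```python
import json
from urllib.request import urlopen

# No parameters and no signature: a public, read-only health check.
with urlopen("https://api.binance.com/api/v3/ping", timeout=5) as resp:
    ok = resp.status == 200 and json.load(resp) == {}
print("REST API reachable:", ok)
```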
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It provides valuable operational context by specifying 'Weight(IP): 1', indicating rate limit costs. However, it omits other behavioral traits such as authentication requirements, idempotency, or the structure of the ping response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately compact and front-loaded: 'Test Connectivity' establishes purpose immediately, followed by the rate limit detail. Every word earns its place with no redundant filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is reasonably complete by including the rate limit weight—a critical operational detail for API health checks. It could be improved by mentioning the expected return payload or authentication behavior, but it suffices for a basic ping endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the scoring rules, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to describe beyond what the empty schema already conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Test[s] connectivity to the Rest API' using a specific verb and resource. It effectively distinguishes this simple health-check endpoint from the numerous complex trading and account management siblings (e.g., order operations, margin endpoints).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the phrase 'Test Connectivity,' indicating it should be used to verify API reachability. However, it fails to differentiate this tool from similar status-checking siblings like 'get_api_v3_time' or 'get_sapi_v1_system_status', leaving ambiguity about which diagnostic tool to use when.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_rate_limit_order (B)
Query Current Order Count Usage (TRADE) — Displays the user's current order count usage for all intervals. Weight(IP): 40
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
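A sketch of assembling the signed query, mainly to show where recvWindow (capped at 60000 ms) and the millisecond timestamp fit; the signing is Binance's standard HMAC-SHA256 over the query string, and the secret is a placeholder:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

API_SECRET = "your-api-secret"  # placeholder

recv_window = 5000
assert recv_window <= 60000, "recvWindow cannot be greater than 60000"

params = {
    "timestamp": int(time.time() * 1000),  # required: UTC timestamp in ms
    "recvWindow": recv_window,             # optional: request validity window
}
qs = urlencode(params)
signature = hmac.new(API_SECRET.encode(), qs.encode(), hashlib.sha256).hexdigest()
print(f"{qs}&signature={signature}")
```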
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the 'Weight(IP): 40' cost metric and identifies this as a 'TRADE' endpoint (implying authentication required), but lacks explicit safety declarations (read-only status), error behavior, or return value structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two information-dense sentences. The purpose is front-loaded ('Query Current Order Count Usage'), followed by scope and technical metadata. No redundant or wasteful text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately covers the core concept (querying order count usage) but omits details about the response structure, pagination, or specific authentication requirements. Sufficient for a simple query tool but leaves gaps for implementation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 3 parameters have descriptions). The description text itself does not elaborate on parameter semantics, syntax, or provide examples beyond what the schema provides. Baseline 3 is appropriate given the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Query', 'Displays') and identifies the resource ('Order Count Usage') with clear scope ('TRADE', 'all intervals'). However, it does not explicitly distinguish from the sibling margin rate limit tool (get_sapi_v1_margin_rate_limit_order) by mentioning 'spot' or comparing functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites like authentication requirements (implied only by 'TRADE' and 'signature' parameter). No comparison to the margin equivalent or other rate limit endpoints is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ticker (B)
[DISCOVERY] Rolling window price change statistics — The window used to compute statistics is typically slightly wider than requested windowSize. openTime for /api/v3/ticker always starts on a minute, while the closeTime is the current time of the request. As such, the effective window might be up to 1 minute wider than requested. E.g. If the closeTime is 1641287867099 (January 04, 2022 09:17:47:099 UTC) , and the windowSize is 1d. the openTime will be: 1641201420000 (January 3, 2022, 09:17:00 UTC) Weight(IP): 4 for each requested symbol regardless of windowSize. The weight for this request will cap at 20
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Supported values: FULL or MINI. If none provided, the default is FULL | |
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) | |
| windowSize | No | Defaults to 1d if no parameter provided. Supported windowSize values: 1m,2m....59m for minutes 1h, 2h....23h - for hours 1d...7d - for days. Units cannot be combined (e.g. 1d2h is not allowed) |
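The minute-alignment quirk in the description can be verified with a few lines of arithmetic; this reproduces the documented example (closeTime 1641287867099, windowSize 1d) by flooring the window start to the minute:

```python
from datetime import datetime, timezone

close_time_ms = 1641287867099                 # example closeTime from the description
window_ms = 24 * 60 * 60 * 1000               # windowSize = 1d
open_time_ms = (close_time_ms - window_ms) // 60000 * 60000  # openTime starts on a minute

print(open_time_ms)                                                   # 1641201420000
print(datetime.fromtimestamp(open_time_ms / 1000, tz=timezone.utc))   # 2022-01-03 09:17:00+00:00
```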
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses critical behavioral traits: the window calculation quirk (effective window up to 1 minute wider than requested due to minute-alignment), openTime/closeTime behavior, and rate limiting details (Weight(IP): 4 per symbol, cap at 20). It lacks explicit mention that this is a safe read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description leads with the purpose but then becomes somewhat verbose with the timestamp example. The rate limiting information is valuable but buried at the end. The '[DISCOVERY]' prefix appears to be metadata rather than descriptive content. Structure could be improved by separating behavioral constraints from usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally document return values, which it does not. However, it adequately covers the complex window calculation behavior given the parameter set. For a 4-parameter read tool with no annotations, the description covers operational behavior but leaves output structure undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description provides an example timestamp calculation that implicitly clarifies windowSize behavior, but does not add significant semantic detail beyond what the schema already specifies for the type, symbol, or symbols parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose as retrieving 'Rolling window price change statistics' with specific verb and resource identification. However, it fails to explicitly differentiate from sibling ticker tools (get_api_v3_ticker_24hr, get_api_v3_ticker_price, etc.), leaving the agent to infer from the windowSize parameter that this supports custom time windows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like get_api_v3_ticker_24hr or get_api_v3_ticker_trading_day. There are no stated prerequisites, exclusions, or recommended use cases beyond the technical description of window behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ticker_24hr (A)
24hr Ticker Price Change Statistics — 24 hour rolling window price change statistics. Careful when accessing this with no symbol. - If the symbol is not sent, tickers for all symbols will be returned in an array. Weight(IP): - 2 for a single symbol; - 80 when the symbol parameter is omitted;
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Supported values: FULL or MINI. If none provided, the default is FULL | |
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) |
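Because the weight jumps from 2 to 80 when symbol is omitted, the query is worth building deliberately. A sketch that prefers explicit symbols, assuming the plural symbols parameter takes a JSON-array string (inferred from the schema's 'symbols (string)' hint, not stated in the description):

```python
import json
from urllib.parse import urlencode

def ticker_24hr_query(symbols=None):
    if not symbols:
        return ""  # full scan: Weight(IP) 80, use only when every ticker is needed
    if len(symbols) == 1:
        return urlencode({"symbol": symbols[0]})  # Weight(IP): 2
    # Assumption: `symbols` expects a compact JSON-array string, e.g. ["A","B"].
    return urlencode({"symbols": json.dumps(symbols, separators=(",", ":"))})

print(ticker_24hr_query(["BNBUSDT"]))
print(ticker_24hr_query(["BNBUSDT", "BTCUSDT"]))
```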
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses critical behavioral traits: the high IP weight cost (80) when omitting symbol versus single symbol queries (2), and the return format change (array of all symbols vs single object). It does not, however, disclose authentication requirements or detailed rate limiting beyond the weight values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from immediate redundancy ('24hr Ticker Price Change Statistics — 24 hour rolling window price change statistics'). While the weight information is valuable, the formatting with inline dashes and backticks is somewhat messy. The content is front-loaded but could be more efficiently structured without the tautological opening.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description should ideally describe the return structure (e.g., fields like priceChange, weightedAvgPrice, volume). It mentions the array return format when no symbol is provided, but omits details about the ticker data structure. The critical cost/weight warning is present, but the description remains incomplete regarding response semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description adds value by explaining the weight cost implications of the symbol parameter, but it fails to clarify the relationship or distinction between the singular 'symbol' and plural 'symbols' parameters, or provide additional context for the 'type' parameter beyond the schema's basic enum description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as '24hr Ticker Price Change Statistics' and specifies the '24 hour rolling window' scope. While it defines the tool's specific function adequately, it does not explicitly differentiate from sibling ticker endpoints like get_api_v3_ticker or get_api_v3_ticker_price, though the '24hr' and 'price change' qualifiers provide implicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage warnings ('Careful when accessing this with no symbol') and details the cost implications (Weight(IP): 2 vs 80) when omitting the symbol parameter. However, it lacks guidance on when to use this versus alternative ticker endpoints or whether authentication is required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ticker_book_ticker (A)
[DISCOVERY] Symbol Order Book Ticker — Best price/qty on the order book for a symbol or symbols. - If the symbol is not sent, bookTickers for all symbols will be returned in an array. Weight(IP): - 2 for a single symbol; - 4 when the symbol parameter is omitted;
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) |
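A sketch of reading the best bid/ask for one symbol. The route follows the tool name, and the response field names (bidPrice, bidQty, askPrice, askQty) are assumed from Binance's upstream docs, since the description only promises "best price/qty":

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

url = "https://api.binance.com/api/v3/ticker/bookTicker?" + urlencode({"symbol": "BNBUSDT"})
with urlopen(url, timeout=5) as resp:
    ticker = json.load(resp)

# Field names assumed from the upstream REST docs, not from this description.
spread = float(ticker["askPrice"]) - float(ticker["bidPrice"])
print(ticker["symbol"], "bid:", ticker["bidPrice"], "ask:", ticker["askPrice"], "spread:", spread)
```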
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting weights and return format variations (array vs single object), but does not address other behavioral traits like caching, update frequency, or whether the data is real-time versus delayed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the purpose front-loaded, followed by parameter behavior and rate limits. The '[DISCOVERY]' prefix appears to be metadata noise, but the remaining content is dense and actionable with minimal redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description partially compensates by explaining what data is returned (best price/qty) and the format variation (array vs object). However, it lacks details on the specific fields returned (bid price, bid quantity, ask price, ask quantity) which would be necessary for full completeness given the absence of structured output documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds context about omitting the 'symbol' parameter, but fails to clarify the distinction between the 'symbol' and 'symbols' parameters or when to use the plural form for specific multiple symbol queries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an 'Order Book Ticker' tool that retrieves 'Best price/qty on the order book', distinguishing it from sibling ticker endpoints (e.g., get_api_v3_ticker_24hr, get_api_v3_ticker_price) by specifying it returns order book level data rather than statistical or simple price data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit operational guidance including IP weight costs ('2' for single symbol vs '4' for all symbols) and explains the behavioral difference between providing a symbol (single object) versus omitting it (array of all symbols). However, it lacks explicit comparison to similar ticker tools to help agents choose between them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ticker_price (A)
Symbol Price Ticker — Latest price for a symbol or symbols. - If the symbol is not sent, prices for all symbols will be returned in an array. Weight(IP): - 2 for a single symbol; - 4 when the symbol parameter is omitted;
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) |
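The single-symbol versus all-symbols behaviour is easy to show; a minimal sketch assuming the /api/v3/ticker/price route and a { symbol, price } response shape:

```python
import json
from urllib.request import urlopen

# With `symbol` set: one object at Weight(IP) 2. Omit it and every
# pair comes back as an array at Weight(IP) 4.
with urlopen("https://api.binance.com/api/v3/ticker/price?symbol=BNBUSDT", timeout=5) as resp:
    quote = json.load(resp)
print(quote)  # e.g. {"symbol": "BNBUSDT", "price": "..."} (shape assumed)
```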
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate-limiting weights (IP: 2 for single, 4 for all) and return format differences (array when no symbol), but omits authentication requirements, caching behavior, or whether the data is real-time vs cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three distinct segments: the title/purpose, the optional parameter behavior, and the rate limiting weights. Information is front-loaded with the core purpose, and every sentence provides actionable information with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only endpoint with 2 parameters and 100% schema coverage, the description is reasonably complete. It covers the primary use case (single vs all symbols) and critical rate-limiting constraints. Without an output schema, it appropriately focuses on input behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing a baseline of 3. The description adds context about omitting the 'symbol' parameter but fails to clarify the relationship between the 'symbol' and 'symbols' parameters (mutually exclusive vs combinable) or when to use the JSON array format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the 'Latest price for a symbol or symbols' with a specific verb and resource. While it distinguishes implicitly from siblings like get_api_v3_ticker_24hr by focusing on 'latest price' rather than statistics, it doesn't explicitly contrast with related ticker endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by explaining that omitting the symbol returns all prices in an array, and details the IP weight costs (2 vs 4) to inform rate-limiting decisions. However, it lacks explicit when-to-use guidance comparing it to alternatives like get_api_v3_ticker or get_api_v3_avg_price.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ticker_trading_day (B)
Trading Day Ticker — Price change statistics for a trading day. Notes: - Supported values for timeZone: - Hours and minutes (e.g. -1:00, 05:45) - Only hours (e.g. 0, 8, 4) Weight: - 4 for each requested symbol. - The weight for this request will cap at 200 once the number of symbols in the request is more than 50.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Supported values: FULL or MINI. If none provided, the default is FULL | |
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| symbols | No | query parameter: symbols (string) | |
| timeZone | No | Default: 0 (UTC) |
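A small sketch of the two documented timeZone formats plus the weight-cap arithmetic; the route name is assumed from the tool name:

```python
from urllib.parse import urlencode

def trading_day_weight(n_symbols: int) -> int:
    # 4 per requested symbol, capped at 200 once more than 50 symbols are sent.
    return min(4 * n_symbols, 200)

# timeZone accepts plain hours ("0", "8") or hours:minutes ("-1:00", "05:45").
params = {"symbol": "BNBUSDT", "timeZone": "05:45", "type": "MINI"}
print("/api/v3/ticker/tradingDay?" + urlencode(params))
print("weight for 60 symbols:", trading_day_weight(60))  # 200, not 240
```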
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents the weight-based rate limiting (4 per symbol, cap at 200) and timezone handling formats, but fails to describe the response structure, authentication requirements, or what specific statistics are returned (OHLC, percentage change, volume, etc.).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The information is front-loaded with the purpose statement first, followed by technical notes. However, the formatting using dashes and backticks for the notes section is somewhat fragmented and inconsistent, detracting slightly from readability while maintaining adequate information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description provides partial compensation through rate limit and timezone details. However, it omits description of return values or data structure, leaving agents uncertain about what specific 'price change statistics' fields will be returned (e.g., open, high, low, close, volume).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds significant value by detailing the timezone format syntax (e.g., '-1:00', '05:45', '0') beyond the schema's simple 'Default: 0' note. It also explains how the 'symbols' parameter affects rate limiting weight, providing crucial usage context not present in the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool retrieves 'Price change statistics for a trading day,' specifying both the action (statistics retrieval) and the temporal scope (trading day vs. rolling 24h). The em-dash format effectively separates the title from the purpose, though it doesn't explicitly contrast with sibling ticker endpoints like get_api_v3_ticker_24hr.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to select this tool over the numerous sibling ticker endpoints (get_api_v3_ticker_24hr, get_api_v3_ticker_price, etc.). While it includes operational rate-limiting details, it lacks conditional logic or use-case scenarios that would help an agent choose this specific ticker variant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_time (A)
Check Server Time — Test connectivity to the Rest API and get the current server time. Weight(IP): 1 Returns: { serverTime: number }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
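Beyond a health check, the documented { serverTime } return is handy for measuring local clock drift before calling the signed endpoints; a minimal sketch:

```python
import json
import time
from urllib.request import urlopen

with urlopen("https://api.binance.com/api/v3/time", timeout=5) as resp:
    server_time = json.load(resp)["serverTime"]  # shape documented in the description

drift_ms = server_time - int(time.time() * 1000)
print(f"local clock drift: {drift_ms} ms")
# Large drift pushes signed requests outside recvWindow on the USER_DATA endpoints.
```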
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate-limiting weight ('Weight(IP): 1') and documents the return structure ('{ serverTime: number }') despite the absence of a formal output schema. It does not, however, explicitly state the read-only/safe nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the primary function ('Check Server Time'). Every segment earns its place: the em-dash separates concerns, the weight disclosure informs rate-limit planning, and the return format compensates for missing schema documentation. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple utility endpoint with zero parameters and no annotations, the description is sufficiently complete. It documents the return value (which the schema omits) and rate-limit costs. A minor gap is the lack of explicit read-only safety classification, though this is inferable from the verbs used.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline score of 4 per the scoring rules. The description appropriately does not fabricate parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the dual purpose: testing connectivity and retrieving server time using specific verbs ('Check', 'Test', 'get'). While it implies differentiation from the sibling 'get_api_v3_ping' by mentioning time retrieval (not just connectivity), it does not explicitly distinguish between these two similar utilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage scenarios (testing connectivity or needing server time) but provides no explicit guidance on when to prefer this over 'get_api_v3_ping' or other health-check alternatives. It lacks 'when-not-to-use' constraints or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_trades (A)
Recent Trades List — Get recent trades. Weight(IP): 10
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
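The stated prerequisite translates into a two-step call sequence; a sketch assuming the public /api/v3/exchangeInfo and /api/v3/trades routes:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://api.binance.com"
symbol = "BNBUSDT"

# Step 1 (prerequisite): confirm the pair exists via exchange info.
with urlopen(f"{BASE}/api/v3/exchangeInfo?{urlencode({'symbol': symbol})}", timeout=10) as r:
    assert json.load(r)["symbols"], f"unknown trading pair: {symbol}"

# Step 2: fetch recent public trades (limit defaults to 500, max 1000).
with urlopen(f"{BASE}/api/v3/trades?{urlencode({'symbol': symbol, 'limit': 5})}", timeout=10) as r:
    for trade in json.load(r):
        print(trade)
```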
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(IP): 10'), which is valuable behavioral context. However, it omits other critical behavioral details such as data freshness/latency, whether the data is public or requires authentication, or pagination behavior beyond the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear header ('Recent Trades List'), concise action statement, rate limit note, and a clearly separated prerequisite section. Every sentence provides actionable information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 simple parameters, no nested objects, no output schema), the description is reasonably complete. The prerequisite fills a critical gap regarding valid symbol discovery. It could be improved by clarifying the distinction between this and sibling trade endpoints (historical, aggregated, personal) given the rich sibling ecosystem.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds indirect semantic value for the 'symbol' parameter via the prerequisite note about discovering valid trading pairs first, but does not elaborate on parameter formats, validation rules, or the temporal semantics of 'recent' beyond the schema's existing descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'recent trades' with a specific verb (Get) and resource (trades). The term 'recent' helps distinguish it from the sibling 'get_api_v3_historical_trades', though it does not explicitly differentiate from 'get_api_v3_agg_trades' or clarify that these are public market trades versus the private 'my_trades' sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite guidance ('First call exchange info or symbol listing to discover valid trading pairs'), establishing a required sequence for successful invocation. However, it lacks explicit 'when-not-to-use' guidance or direct comparisons to alternative trade endpoints (e.g., historical vs. recent).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v3_ui_klines (B)
UIKlines — The request is similar to klines having the same parameters and response. uiKlines return modified kline data, optimized for presentation of candlestick charts. Weight(IP): 2
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| interval | Yes | kline intervals | |
| timeZone | No | Default: 0 (UTC) | |
| startTime | No | UTC timestamp in ms |
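Since the parameters mirror the klines endpoint, a request sketch; the interval notation ("1m", "1h", "1d") is assumed from the kline sibling's conventions, as this description never enumerates it:

```python
import time
from urllib.parse import urlencode

now_ms = int(time.time() * 1000)
params = {
    "symbol": "BNBUSDT",
    "interval": "1h",                            # assumed kline-style notation
    "startTime": now_ms - 24 * 60 * 60 * 1000,   # UTC ms: last 24 hours
    "limit": 24,                                 # default 500, max 1000
}
print("/api/v3/uiKlines?" + urlencode(params))
```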
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit ('Weight(IP): 2') and notes the response similarity to klines, but fails to explain what specific modifications are made to the data to optimize it for UI presentation (e.g., rounding, formatting changes) or mention idempotency/cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably compact at four sentences, but the structure is fragmented: the 'UIKlines —' header is redundant with the tool name, 'PREREQUISITE:' in all caps creates visual noise, and the weight information sits isolated on its own line.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description provides the minimum viable context: it references the similar response structure of 'klines' and includes the prerequisite call sequence. However, it should ideally describe the actual return format or fields since no output schema exists to document them.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing a baseline of 3. The description adds value by noting parameters are the 'same' as the klines endpoint, which helps agents transfer knowledge from the sibling tool. However, it does not elaborate on parameter semantics beyond what the schema already provides (e.g., valid interval values).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'modified kline data, optimized for presentation of candlestick charts,' distinguishing it from the sibling tool 'get_api_v3_klines' by specifying the UI optimization use case. However, the opening 'UIKlines —' partially restates the tool name, slightly diluting the specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear prerequisite ('First call exchange info or symbol listing...') which helps with sequencing. However, it lacks explicit guidance on when to choose this over the standard 'klines' endpoint beyond the vague 'optimized for presentation,' missing the opportunity to clarify the decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_account_api_restrictions (B)
Get API Key Permission (USER_DATA) — Weight(IP): 1 Returns: { ipRestrict: boolean, createTime: number, enableInternalTransfer: boolean, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
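A full signed-request sketch for this USER_DATA endpoint, using Binance's documented X-MBX-APIKEY header and HMAC-SHA256 query signing; the key and secret are placeholders and the route is assumed from the tool name:

```python
import hashlib
import hmac
import json
import time
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_KEY = "your-api-key"        # placeholders, not real credentials
API_SECRET = "your-api-secret"

qs = urlencode({"timestamp": int(time.time() * 1000), "recvWindow": 5000})
sig = hmac.new(API_SECRET.encode(), qs.encode(), hashlib.sha256).hexdigest()
req = Request(
    f"https://api.binance.com/sapi/v1/account/apiRestrictions?{qs}&signature={sig}",
    headers={"X-MBX-APIKEY": API_KEY},
)
with urlopen(req, timeout=10) as resp:
    perms = json.load(resp)
print("ipRestrict:", perms.get("ipRestrict"), "createTime:", perms.get("createTime"))
```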
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure. It successfully includes rate limit information ('Weight(IP): 1') and hints at the return structure with specific fields (ipRestrict, enableInternalTransfer), but lacks information on error cases, caching behavior, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely compact and front-loaded with the action verb. It efficiently packs the permission type, rate limit weight, and return structure into a single sentence. The dense notation (e.g., 'Weight(IP): 1') is efficient for technical users though slightly cryptic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description partially compensates by listing example return fields (ipRestrict, createTime, enableInternalTransfer). However, the ellipsis suggests incomplete return documentation, and with no annotations provided, the description could better explain the security implications of the API restrictions being queried.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score applies. The description does not add semantic meaning beyond what the schema already provides for signature, timestamp, and recvWindow parameters, but the schema is sufficiently documented that additional description is not necessary.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Get API Key Permission' and identifies the specific resource being accessed. It distinguishes from sibling tools like 'get_sapi_v1_sub_account_sub_account_api_ip_restriction' by omitting sub-account references, though it could explicitly specify 'current account' to make the distinction clearer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, nor any prerequisites mentioned. The 'USER_DATA' tag implies authentication requirements but this is not explained for users unfamiliar with Binance API permission types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_account_api_trading_status (B)
Account API Trading Status (USER_DATA) — Fetch account API trading status with details. Weight(IP): 1 Returns: { data: { isLocked: boolean, plannedRecoverTime: number, triggerCondition: { GCR: number, IFER: number, UFR: number }, indicators: { BTCUSDT: { i: string, c: number, v: number, t: number }[] }, updateTime: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
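Since the return structure is spelled out inline, interpreting a response needs no schema; a sketch over an illustrative payload (the values are invented for the example):

```python
# Illustrative payload matching the documented return shape; values are invented.
response = {
    "data": {
        "isLocked": False,
        "plannedRecoverTime": 0,
        "triggerCondition": {"GCR": 150, "IFER": 150, "UFR": 300},
        "indicators": {"BTCUSDT": [{"i": "UFR", "c": 20, "v": 0.05, "t": 0.995}]},
        "updateTime": 1547630471725,
    }
}

status = response["data"]
if status["isLocked"]:
    print("API trading locked; planned recovery at", status["plannedRecoverTime"])
else:
    print("trading enabled; trigger thresholds:", status["triggerCondition"])
```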
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds reasonably well. It discloses the rate limit weight ('Weight(IP): 1'), indicates authentication requirements via '(USER_DATA)' tag, and provides a complete JSON return structure showing fields like isLocked, plannedRecoverTime, and triggerCondition (GCR/IFER/UFR) that reveal the tool checks trading restrictions and risk metrics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the essential verb and resource. However, the inline JSON return structure, while valuable due to the lack of output schema, creates a dense single-line format that hinders readability. Every element serves a purpose, but structure could be improved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema and no annotations, the description compensates effectively by embedding the complete return structure inline, including nested objects (triggerCondition, indicators). It adequately documents what the caller receives, making it functionally complete for a read-only status check tool with three parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting timestamp, signature, and recvWindow adequately. The description adds no parameter-specific context, but since the schema is fully self-documenting, it meets the baseline expectation without penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Fetch' with the resource 'account API trading status', clearly indicating it retrieves trading lock status and risk indicators. It distinguishes from siblings like get_api_v3_account (general balances) and get_sapi_v1_account_status (account status) by focusing specifically on API trading permissions and risk metrics (isLocked, triggerCondition fields).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus similar account information tools like get_sapi_v1_account_status or get_api_v3_account. No prerequisites (beyond implicit USER_DATA auth), exclusions, or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_account_info (A)
Account info (USER_DATA) — Fetch account info detail. Weight(IP): 1 Returns: { vipLevel: number, isMarginEnabled: boolean, isFutureEnabled: boolean }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
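With the three return fields documented, a response can gate later calls; a small sketch with invented values:

```python
# Illustrative response matching the documented { vipLevel, isMarginEnabled, isFutureEnabled }.
info = {"vipLevel": 0, "isMarginEnabled": True, "isFutureEnabled": False}

if not info["isFutureEnabled"]:
    print("futures disabled: the algo/futures endpoints will be rejected")
if info["isMarginEnabled"]:
    print("margin endpoints available")
print("VIP level:", info["vipLevel"])
```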
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds in disclosing the rate limit weight (Weight(IP): 1) and the exact return structure since no output schema exists. The 'USER_DATA' classification indicates authentication requirements. It lacks error behavior or caching details, but covers the critical behavioral traits for a rate-limited API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence with zero waste. It front-loads the category (USER_DATA), states the action, provides the rate limit, and documents the return value—every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 3-parameter fetch tool with 100% schema coverage but no output schema, the description is reasonably complete. It compensates for the missing output schema by documenting return values and includes the rate limit weight. It could be improved with error code documentation or cache behavior notes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately 3. The description does not add semantic details about the parameters beyond what the schema already provides (e.g., it doesn't explain the relationship between signature and timestamp or the purpose of recvWindow).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches account info and specifies exact return fields (vipLevel, isMarginEnabled, isFutureEnabled) that distinguish it from sibling account endpoints like get_api_v3_account. The 'USER_DATA' tag clarifies the endpoint category. However, it doesn't explicitly differentiate when to use this versus the main account endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the many sibling account tools (e.g., get_api_v3_account, get_sapi_v1_account_status). There are no prerequisites mentioned beyond the implicit authentication required by the signature parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_account_snapshot (C)
Daily Account Snapshot (USER_DATA) — - The query time period must be less than 30 days - Support query within the last one month only - If startTimeand endTime not sent, return records of the last 7 days by default Weight(IP): 2400
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | query parameter: type ("SPOT" \| "MARGIN" \| "FUTURES") | |
| limit | No | query parameter: limit (number) | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
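Given the steep Weight(IP) of 2400, validating the documented time constraints locally before calling is cheap insurance; a sketch using the parameter names from the table (signing omitted):

```python
import time

def snapshot_params(account_type, start_ms=None, end_ms=None):
    """Build parameters, enforcing the documented constraints up front."""
    if account_type not in ("SPOT", "MARGIN", "FUTURES"):
        raise ValueError("type must be SPOT, MARGIN, or FUTURES")
    params = {"type": account_type, "timestamp": int(time.time() * 1000)}
    if start_ms is not None and end_ms is not None:
        if end_ms - start_ms >= 30 * 24 * 60 * 60 * 1000:
            raise ValueError("query time period must be less than 30 days")
        params.update(startTime=start_ms, endTime=end_ms)
    # Omitting startTime/endTime returns the last 7 days by default.
    return params

print(snapshot_params("SPOT"))
```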
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit 'Weight(IP): 2400' and default temporal behavior, which is valuable. However, it omits crucial behavioral details like what the snapshot records contain, response format, or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from poor formatting—mixing em-dashes with bullet dashes ('— -') and lacking clear sentence structure. It contains a typo ('startTimeand'). While information-dense, the presentation is messy and harder to parse than necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and the tool's complexity (7 parameters including authentication), the description is incomplete. It fails to describe the return values, response structure, or what specific account data is returned in the snapshot, leaving significant gaps for an agent trying to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description adds value by explaining the default behavior when startTime/endTime are omitted, though it contains a typo ('startTimeand'). It does not add significant semantic context for other parameters like 'type' or 'signature' beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as retrieving 'Daily Account Snapshot' data, which indicates the general resource. However, it fails to specify what data the snapshot contains (balances, positions, transactions) or distinguish this from sibling tools like get_api_v3_account (current state) versus this historical snapshot endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides operational constraints—'query time period must be less than 30 days,' 'last one month only,' and default behavior (last 7 days if no time range). However, it lacks guidance on when to prefer this over alternatives like get_api_v3_account for real-time data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_account_status (B)
Account Status (USER_DATA) — Fetch account status detail. Weight(IP): 1 Returns: { data: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
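The same signed-GET pattern as the other USER_DATA endpoints applies; a compact sketch, with placeholder credentials and the route assumed from the tool name:

```python
import hashlib
import hmac
import json
import time
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_KEY, API_SECRET = "your-api-key", "your-api-secret"  # placeholders

qs = urlencode({"timestamp": int(time.time() * 1000)})
sig = hmac.new(API_SECRET.encode(), qs.encode(), hashlib.sha256).hexdigest()
req = Request(f"https://api.binance.com/sapi/v1/account/status?{qs}&signature={sig}",
              headers={"X-MBX-APIKEY": API_KEY})
with urlopen(req, timeout=10) as resp:
    print(json.load(resp)["data"])  # a plain string per the documented { data: string }
```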
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable rate limit info ('Weight(IP): 1') and return type ('Returns: { data: string }'), but lacks explicit read-only declaration or error behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient four-part structure labeling the endpoint, stating purpose, specifying weight, and defining return type. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 3-parameter endpoint with full schema coverage. Compensates for missing output schema by documenting return type, but lacks domain context distinguishing 'account status' from other account attributes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (timestamp, signature, recvWindow all documented), establishing baseline 3. Description adds no additional parameter context, which is acceptable given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Fetch' and resource 'account status', but fails to differentiate from similar sibling tools like get_sapi_v1_account_info or get_api_v3_account.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives, nor any prerequisites or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
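Since every USER_DATA tool in this listing takes the same signature/timestamp/recvWindow trio, a minimal sketch of producing them may help. It assumes the standard Binance signing scheme (an HMAC-SHA256 hex digest over the URL-encoded query string, keyed by the API secret); the secret value below is a placeholder.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

API_SECRET = b"your-api-secret"  # placeholder; never hard-code real secrets

def sign_params(params: dict) -> dict:
    """Attach a UTC millisecond timestamp, then sign the query string."""
    params = {**params, "timestamp": int(time.time() * 1000)}
    query = urlencode(params)  # the exact string that gets signed
    params["signature"] = hmac.new(API_SECRET, query.encode(),
                                   hashlib.sha256).hexdigest()
    return params

# Ready to send as query parameters to GET /sapi/v1/account/status.
signed = sign_params({"recvWindow": 5000})
```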
get_sapi_v1_algo_futures_historical_orders (A)
Query Historical Algo Orders (USER_DATA) — - You need to enable Futures Trading Permission for the api key which requests this endpoint. - Base URL: https://api.binance.com Weight(IP): 1 Returns: { total: number, orders: { algoId: number, symbol: string, side: string, positionSide: string, totalQty: string, executedQty: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| side | No | query parameter: side ("SELL" \| "BUY") | |
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant weight. It discloses authentication requirements (Futures Trading Permission), rate limiting ('Weight(IP): 1'), and importantly provides the return structure inline ('Returns: { total: number, orders: ... }') since no output schema exists. It does not mention pagination limits or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every element carries information (permission, URL, weight, return structure), but the formatting is cramped and inconsistent—using an em-dash followed by hyphens for bullets and inline JSON without code formatting. This reduces scannability despite the high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by specifying the return payload structure, permission gates, and rate limits. It covers the essential behavioral contract for a historical data query tool, though it could clarify pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms', 'MIN 1, MAX 100'), so the description is not required to compensate. The description does not add additional parameter semantics, syntax guidance, or cross-parameter relationships (e.g., startTime/endTime pairing) beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Query') and resource ('Historical Algo Orders') with scope ('USER_DATA'). The term 'Historical' distinguishes it from the sibling 'get_sapi_v1_algo_futures_open_orders', though it doesn't explicitly mention 'futures' in the descriptive text (relying on the permission requirement and tool name instead).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a critical prerequisite: 'You need to enable Futures Trading Permission for the api key'. However, it lacks explicit guidance on when to use this versus the 'open_orders' variant or the spot market equivalent, forcing the agent to infer from the 'Historical' keyword.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
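The page/pageSize/total trio implies a simple pagination loop, sketched below; fetch_page is a hypothetical stand-in for a signed GET against this endpoint and is not part of the tool definition.

```python
from typing import Callable

def all_historical_orders(fetch_page: Callable[[int, int], dict],
                          page_size: int = 100) -> list:
    """Accumulate orders page by page until 'total' rows have been seen."""
    orders, page = [], 1
    while True:
        body = fetch_page(page, page_size)  # e.g. {"total": 230, "orders": [...]}
        orders.extend(body["orders"])
        if len(orders) >= body["total"] or not body["orders"]:
            break  # all rows collected, or the server returned an empty page
        page += 1
    return orders
```

Because pageSize is capped at 100 by the schema, any query matching more than 100 orders requires a loop like this.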
get_sapi_v1_algo_futures_open_orders (B)
Query Current Algo Open Orders (USER_DATA) — - You need to enable Futures Trading Permission for the api key which requests this endpoint. - Base URL: https://api.binance.com Weight(IP): 1 Returns: { total: number, orders: { algoId: number, symbol: string, side: string, positionSide: string, totalQty: string, executedQty: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses 'Weight(IP): 1' (rate limiting) and provides the return structure inline ('Returns: { total: number, orders:... }'), which compensates for the missing output schema. However, it lacks explicit safety declarations (e.g., confirming read-only status) or error handling details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly structured, mashing bullet points together with dashes ('— - You need...'). While every fragment adds value (auth, rate limit, return type), the lack of clear formatting reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description partially compensates by providing the JSON return structure and auth requirements. However, it omits safety hints, pagination details, and field descriptions for the return object, leaving gaps for a USER_DATA endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow are all documented). Since the schema fully documents the parameters, the description baseline is appropriately 3, though the description itself adds no parameter-specific guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Query Current Algo Open Orders (USER_DATA)', providing a specific verb and resource. It distinguishes from siblings like get_sapi_v1_algo_futures_historical_orders (current vs. historical) and get_sapi_v1_algo_spot_open_orders (futures vs. spot) through the name and 'Current' modifier, though it could explicitly mention these distinctions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It states the prerequisite 'You need to enable Futures Trading Permission for the api key', which is useful auth context. However, it fails to specify when to use this over similar endpoints like get_sapi_v1_algo_futures_historical_orders or get_sapi_v1_algo_futures_sub_orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_algo_futures_sub_orders (A)
Query Sub Orders (USER_DATA) — - You need to enable Futures Trading Permission for the api key which requests this endpoint. - Base URL: https://api.binance.com Weight(IP): 1 Returns: { total: number, executedQty: string, executedAmt: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| algoId | Yes | query parameter: algoId (number) | |
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses Weight(IP): 1 (rate limiting), the Base URL, and the return structure (total, executedQty, executedAmt). It also flags the USER_DATA classification and permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is appropriate but formatting is awkward with bullet points embedded in prose after an em-dash. All sentences provide value (auth, weight, returns), though front-loading could be improved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for the lack of structured output schema by documenting the return object structure inline. Covers authentication, rate limits, and pagination parameters. Only gap is the conceptual definition of 'sub orders' versus parent algo orders.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (page, algoId, pageSize constraints, timestamp format, etc.), establishing baseline 3. The description does not add parameter-specific semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Query Sub Orders (USER_DATA)' with the specific domain identified via 'Futures Trading Permission' requirement, distinguishing it from the Spot variant sibling. However, it does not clarify what constitutes a 'sub order' in the algo trading context or differentiate from siblings like get_sapi_v1_algo_futures_open_orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear prerequisite (Futures Trading Permission required) but lacks explicit guidance on when to use this tool versus alternatives like get_sapi_v1_algo_futures_historical_orders or get_sapi_v1_algo_futures_open_orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_algo_spot_historical_orders (A)
Query Historical Algo Orders — Get all historical SPOT TWAP orders Weight(IP): 1 Returns: { total: number, orders: { algoId: number, symbol: string, side: string, totalQty: string, executedQty: string, executedAmt: string, ... }[] }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| side | Yes | query parameter: side ("SELL" \| "BUY") | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the rate limit weight ('Weight(IP): 1') and the complete return structure JSON since no output schema exists. It misses explicit classification as read-only/safe, though 'Query' implies this, and doesn't mention pagination behavior explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two distinct sections (main description and PREREQUISITE). Every sentence provides value, covering the operation, rate limits, return structure, and prerequisites. The inline JSON return structure is dense but informative, with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description compensates well by documenting the return structure inline and providing prerequisite workflow information. It appropriately covers the 9-parameter complexity, though it could explicitly mention pagination support (page/pageSize parameters).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'Trading symbol, e.g. BNBUSDT', 'UTC timestamp in ms'), establishing a baseline of 3. The description adds minimal parameter-specific context beyond implicitly clarifying that 'symbol' refers to SPOT trading pairs through the TWAP context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Query Historical Algo Orders — Get all historical SPOT TWAP orders', providing a specific verb (Query/Get), resource (historical SPOT TWAP orders), and scope. It clearly distinguishes from siblings like get_sapi_v1_algo_futures_historical_orders (by specifying SPOT) and get_sapi_v1_algo_spot_open_orders (by specifying historical).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section provides explicit workflow guidance ('First call exchange info or symbol listing...'), indicating when preparatory steps are needed. However, it lacks explicit 'when not to use' guidance or direct references to alternatives like the open orders endpoint for non-historical queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_algo_spot_open_orders (A)
Query Current Algo Open Orders — Get all open SPOT TWAP orders Weight(IP): 1 Returns: { total: number, orders: { algoId: number, symbol: string, side: string, totalQty: string, executedQty: string, executedAmt: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full behavioral disclosure burden. It documents the return structure (crucial since no output schema exists) and rate-limit weight ('Weight(IP): 1'), but fails to explicitly state read-only safety, authentication requirements beyond parameter names, or error behaviors. The term 'Query' implies read-only but this is not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but efficient single-line format front-loaded with action ('Query...'). Every segment earns its place: purpose declaration, scope restriction (SPOT TWAP), operational constraint (Weight), and return structure. The return object syntax is lengthy but necessary given the absence of an output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and structured output schema, the description compensates adequately by documenting the return object structure ({ total: number, orders: {...} }). Combined with complete input schema coverage, this provides sufficient context for a query tool, though explicit auth/safety statements would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (signature, timestamp, recvWindow all documented). The description adds no parameter-specific semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource combination ('Query Current Algo Open Orders — Get all open SPOT TWAP orders') clearly identifies the tool retrieves active TWAP orders. Explicitly scopes to 'SPOT' (distinguishing from futures siblings like get_sapi_v1_algo_futures_open_orders) and 'TWAP' (distinguishing from other algo types like VP), while 'Current'/'Open' differentiates from historical order queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this versus sibling alternatives (e.g., get_sapi_v1_algo_spot_historical_orders for completed orders, or get_sapi_v1_algo_spot_sub_orders for order details). Mentions 'Weight(IP): 1' which implies rate-limiting considerations but does not constitute usage guidance relative to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_algo_spot_sub_orders (B)
Query Sub Orders — Get respective sub orders for a specified algoId Weight(IP): 1 Returns: { total: number, executedQty: string, executedAmt: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| algoId | Yes | query parameter: algoId (number) | |
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (Weight(IP): 1) and partially documents the return structure with field types ({ total: number, executedQty: string... }), compensating for the lack of output schema. However, it omits safety profile (read-only vs destructive), authentication requirements, and pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely compact single-sentence structure with information front-loaded ('Query Sub Orders'). Technical metadata (weight, returns) appended efficiently. No redundant or wasted language, though punctuation (em-dashes) is slightly unconventional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description provides essential technical details (rate limit, return fields). However, for a 6-parameter algo-trading tool, it lacks context on what constitutes a 'sub order', error handling behavior, or the relationship between sub-orders and parent algo orders.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the primary filter parameter ('for a specified algoId') but does not add additional semantic context beyond what the schema already provides for page, pageSize, timestamp, signature, or recvWindow.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it queries sub-orders for a specified algoId using the verb 'Get'. It distinguishes this tool from siblings like 'historical_orders' or 'open_orders' by specifying 'sub orders', though it fails to explicitly mention 'spot' (vs futures) in the description text itself, relying solely on the tool name for that distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus get_sapi_v1_algo_spot_historical_orders or get_sapi_v1_algo_spot_open_orders. No mention of prerequisites (e.g., needing a valid algoId from a parent order) or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
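The missing prerequisite called out above (a valid algoId must come from a parent algo order) amounts to a two-step workflow; a minimal sketch follows, where both callables are hypothetical stand-ins for signed GETs against the open-orders and sub-orders endpoints.

```python
from typing import Callable

def sub_orders_for_open_algos(get_open_orders: Callable[[], dict],
                              get_sub_orders: Callable[[int], dict]) -> dict:
    """Map each open algo order's algoId to its sub-order payload."""
    open_body = get_open_orders()  # {"total": n, "orders": [{"algoId": ...}, ...]}
    return {order["algoId"]: get_sub_orders(order["algoId"])
            for order in open_body["orders"]}
```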
get_sapi_v1_asset_asset_detail (A)
Asset Detail (USER_DATA) — Fetch details of assets supported on Binance. - Please get network and other deposit or withdraw details from GET /sapi/v1/capital/config/getall. Weight(IP): 1 Returns: { CTR: { minWithdrawAmount: string, depositStatus: boolean, withdrawFee: number, withdrawStatus: boolean, depositTip: string } }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | query parameter: asset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses USER_DATA (authentication required), Weight(IP): 1 (rate limiting cost), and provides concrete return structure example showing depositStatus, withdrawStatus, and fee fields. Does not explicitly state read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded but formatting is cluttered. The return value JSON is inlined as a dense blob, and 'Asset Detail' prefix restates the function name. All sentences serve a purpose but structure could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no formal output schema, description provides detailed return value example showing the nested structure with withdrawal and deposit fields. Covers authentication type, rate limits, and scope limitations comprehensively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with all 4 parameters documented. Description does not add semantic meaning beyond schema definitions, warranting baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Fetch details of assets supported on Binance' with specific verb and resource. Explicitly distinguishes from sibling tool get_sapi_v1_capital_config_getall by stating network/deposit/withdraw details should come from that alternative instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when NOT to use this tool: 'Please get network and other deposit or withdraw details from GET /sapi/v1/capital/config/getall', clearly directing to the correct alternative for those specific data needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
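The routing rule this description encodes (this tool for per-asset deposit/withdraw status, /sapi/v1/capital/config/getall for network-level details) is exactly the dispatch an agent has to perform; a minimal sketch, with both callables as hypothetical stand-ins:

```python
from typing import Callable, Union

def asset_info(need: str,
               get_asset_detail: Callable[[], dict],
               get_capital_config: Callable[[], list]) -> Union[dict, list]:
    """Pick the endpoint according to the data the caller actually needs."""
    if need == "networks":  # chains, address formats, per-network fees
        return get_capital_config()
    if need == "status":    # depositStatus, withdrawStatus, withdrawFee
        return get_asset_detail()
    raise ValueError(f"unknown need: {need}")
```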
get_sapi_v1_asset_asset_dividend (B)
Asset Dividend Record (USER_DATA) — Query asset Dividend Record Weight(IP): 10 Returns: { rows: { id: number, amount: string, asset: string, divTime: number, enInfo: string, tranId: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | query parameter: asset (string) | |
| limit | No | query parameter: limit (number) | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 10') and provides the complete return structure inline since no output schema exists. However, it fails to mention safety characteristics (read-only), error conditions, or data retention limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence mixing title-like text, technical metadata (Weight IP), and JSON return structures. While not verbose, it lacks proper structure and front-loading of the most important information (the return schema appears at the end).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 7 parameters and authentication requirements, the description partially compensates for missing output schema by documenting the return structure. However, it lacks explanation of pagination behavior (despite including 'total'), rate limit implications, or authentication requirements beyond the 'USER_DATA' label.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds minimal semantic value beyond the schema, merely implying that the 'asset' parameter filters dividends by asset type. No additional context is provided for timestamp/signature requirements or time range semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose with the verb 'Query' and resource 'asset Dividend Record'. It distinguishes from sibling asset tools (like get_sapi_v1_asset_asset_detail) by specifying 'Dividend'. However, it buries this under technical metadata like '(USER_DATA)' and 'Weight(IP): 10', slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines are provided. There is no indication of when to use this tool versus other asset history tools, no prerequisites mentioned (beyond implied auth), and no guidance on time range constraints or pagination behavior despite the 'total' field in returns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_convert_transfer_query_by_page (B)
Query Convert Transfer (USER_DATA) — Weight(UID): 5 Returns: { total: number, rows: { tranId: number, type: number, time: number, deductedAsset: string, deductedAmount: string, targetAsset: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | If it is blank, we will match deducted asset and target asset. | |
| tranId | No | The transaction id | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | Yes | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | Yes | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| accountType | No | MAIN: main account. CARD: funding account. If it is blank, we will query spot and card wallet, otherwise, we just query the corresponding wallet |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(UID): 5') and authentication scope ('USER_DATA'). It also includes an inline return structure description (total/rows), compensating for the lack of output schema. However, it omits safety details like read-only nature, pagination limits, or error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently packed into a single line with a consistent structure: Action (AuthType) — RateLimit Returns: {Schema}. No extraneous words. Minor readability issues due to density, but information is front-loaded with the operation type.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter paginated query with no output schema, the description provides the essential return structure inline and identifies the operation type. However, it lacks explanation of the convert transfer domain model, pagination behavior details, and filtering logic (e.g., how 'asset' matches both deducted and target assets).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 10 parameters including constraints like 'Max:100' for size and 'cannot be greater than 60000' for recvWindow. The description adds no additional parameter semantics beyond the schema, meeting the baseline expectation when schema coverage is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Query') and resource ('Convert Transfer'), and the tool name indicates pagination ('by_page'). It distinguishes from the sibling 'post_sapi_v1_asset_convert_transfer' by being a read/query operation. However, it assumes familiarity with 'Convert Transfer' as a Binance-specific feature without explaining the business concept.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_sapi_v1_convert_trade_flow' or 'get_sapi_v1_asset_transfer'. Does not mention prerequisites such as requiring the 'startTime' and 'endTime' parameters to be within valid ranges, or that this queries historical convert transfers specifically.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_custody_transfer_history (C)
Query User Delegation History(For Master Account) (USER_DATA) — Query User Delegation History Weight(IP): 60 Returns: { total: number, rows: { clientTranId: string, transferType: string, asset: string, amount: string, time: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| type | No | query parameter: type (string) | |
| asset | Yes | query parameter: asset (string) | |
| email | Yes | query parameter: email (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | Yes | query parameter: endTime (number) | |
| signature | Yes | Signature | |
| startTime | Yes | query parameter: startTime (number) | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of disclosing behavioral traits. It successfully includes the rate limit weight (60) and the complete return structure inline (total/rows format), which compensates for the missing output schema. However, it lacks details on pagination behavior, error cases, or the specific semantics of 'delegation' versus regular transfers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a dense, run-on string that states 'Query User Delegation History' twice. It crams metadata (USER_DATA, Weight(IP), return JSON) into a single line without clear structure, making it difficult to parse despite being information-dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 10 parameters, no annotations, and no output schema, the description partially compensates by providing the return structure inline. However, it fails to explain business logic (what constitutes a 'delegation' transfer), pagination patterns, or authentication requirements beyond the 'USER_DATA' tag.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 10 parameters including constraints like 'Default:10 Max:100' for size. The description adds no additional parameter semantics (e.g., valid values for 'type', timestamp format details), meeting the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool queries 'User Delegation History' and specifies it is 'For Master Account', which provides scope. However, it does not explain what 'delegation' means in this context (custody transfers) or clearly distinguish this from sibling transfer history tools like get_sapi_v1_asset_transfer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions '(For Master Account)' implying a scope restriction, but provides no explicit guidance on when to use this tool versus other transfer/history endpoints, nor does it mention prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_dribblet (C)
DustLog(USER_DATA) — Weight(IP): 1 Returns: { total: number, userAssetDribblets: { operateTime: number, totalTransferedAmount: string, totalServiceChargeAmount: string, transId: number, userAssetDribbletDetails: { transId: number, serviceChargeAmount: string, amount: string, operateTime: number, transferedAmount: string, fromAsset: string }[] }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| accountType | No | SPOT or MARGIN, default SPOT |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of disclosing behavioral traits. It provides the rate limit 'Weight(IP): 1' and documents the complex nested return structure in detail, which compensates partially for the missing output schema. However, it omits safety context (read-only nature, authentication requirements) beyond the 'USER_DATA' label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense block mixing endpoint metadata, rate limits, and a large JSON return structure. While information-dense, it lacks structure: the inline JSON blob is hard to parse, and key information (purpose, usage) is buried or absent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by documenting the return structure (total, userAssetDribblets array, nested details). However, it fails to explain the business logic (what constitutes a dribblet, conversion mechanics) or provide usage examples, leaving conceptual gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms', 'SPOT or MARGIN'). Since the schema fully documents parameters, the baseline score applies. The description adds no additional parameter context (e.g., valid ranges, interaction between startTime/endTime).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'DustLog(USER_DATA)' which maps to the tool name, but fails to explain what 'dust' or 'dribblet' means (small asset balances) or what operation is performed (retrieving conversion history). It lacks the explicit verb and conceptual explanation needed for clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus sibling asset endpoints like get_sapi_v1_asset_asset_detail or post_sapi_v1_asset_dust. No mention of prerequisites, such as needing to perform dust conversions first, or filtering guidance for the time range parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_ledger_transfer_cloud_mining_query_by_page (B)
Get Cloud-Mining payment and refund history (USER_DATA) — The query of Cloud-Mining payment and refund history Weight(UID): 600 Returns: { total: number, rows: { createTime: number, tranId: number, type: number, asset: string, amount: string, status: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | If it is blank, we will query all assets | |
| tranId | No | The transaction id | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | Yes | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | Yes | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| clientTranId | No | The unique flag |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant weight by disclosing the rate limit cost 'Weight(UID): 600' and documenting the full return structure inline: '{ total: number, rows: {...}[] }'. This compensates for the missing output schema. However, it doesn't mention pagination behavior or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose but contains redundancy ('Get... — The query of...'). The inline JSON return structure is information-dense but somewhat difficult to parse, serving as efficient documentation despite formatting limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with 100% schema coverage and no structured output schema, the description adequately compensates by documenting the return format and weight cost. For a paginated query tool, it provides sufficient context despite lacking explicit pagination guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters like startTime, asset, and tranId are fully documented in the schema itself. The description adds no additional parameter semantics, but given the comprehensive schema coverage, this meets the baseline expectation without penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Cloud-Mining payment and refund history' with a specific verb and resource. The '(USER_DATA)' tag clarifies authentication scope. It implicitly distinguishes from sibling mining tools (e.g., get_sapi_v1_mining_payment_list) by specifying 'Cloud-Mining' specifically, though it could explicitly state this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'USER_DATA' indicating authentication requirements, but provides no guidance on when to use this tool versus other mining-related history endpoints like get_sapi_v1_mining_payment_list. No prerequisites or alternative selection criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_trade_fee (C)
Trade Fee (USER_DATA) — Fetch trade fee Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 1') and authentication type ('USER_DATA'), but omits other behavioral traits like read-only safety, response data structure, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with no wasted words. Information is front-loaded, though the brevity comes at the cost of omitting potentially useful context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should explain the return value structure (e.g., maker/taker fee rates). It fails to do so, leaving the agent unaware of what data the tool returns despite having complete input schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds no additional semantic context for the parameters (e.g., explaining that omitting 'symbol' returns fees for all symbols) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Fetch trade fee') but lacks specificity regarding scope (single symbol vs. all symbols) and fails to differentiate from similar sibling tools like 'get_api_v3_account_commission'. It essentially restates the tool name with minimal expansion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites for invocation, or conditional logic (e.g., when to provide the optional 'symbol' parameter vs. omitting it).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_asset_transfer (A)
Query User Universal Transfer History (USER_DATA) — - fromSymbol must be sent when type are ISOLATEDMARGIN_MARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN - toSymbol must be sent when type are MARGIN_ISOLATEDMARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN - Support query within the last 6 months only - If startTime and endTime not sent, return records of the last 7 days by default Weight(IP): 1 Returns: { total: number, rows: { asset: string, amount: string, type: string, status: string, tranId: number, timestamp: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| type | Yes | Universal transfer type | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| toSymbol | No | Must be sent when type are MARGIN_ISOLATEDMARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| fromSymbol | No | Must be sent when type are ISOLATEDMARGIN_MARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It effectively discloses time window limitations (6 months max, 7 days default), rate limiting (Weight(IP): 1), and return structure (inline JSON schema showing total/rows format). Could improve by mentioning authentication requirements explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is high with no wasted sentences, but formatting is messy: inconsistent bullet formatting (using '—' then dashes), inline JSON blob for return type, and dense packing reduce readability. Not ideally front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter query tool with no structured output schema, the description adequately covers return format, pagination context (current/size), temporal constraints, and parameter interdependencies. Missing only explicit auth/permission notes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage, the description adds critical value by defining conditional relationships between parameters (specific type enum values triggering fromSymbol/toSymbol requirements) that the schema cannot express structurally. Also clarifies temporal defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Query' and resource 'User Universal Transfer History' with USER_DATA classification. Distinguishes from sibling 'post_sapi_v1_asset_transfer' (create vs query) via verb choice, though it does not explicitly differentiate from 'get_sapi_v1_sub_account_universal_transfer' in the text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific parameter usage rules (when fromSymbol/toSymbol are required based on type values) and time range constraints. However, lacks explicit guidance on when to choose this tool over sibling alternatives like the sub-account transfer history endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
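The cross-parameter rules praised above are straightforward to encode as a pre-flight check; a minimal sketch follows, using only the type values quoted in the description (the function name and error style are illustrative):

```python
NEEDS_FROM_SYMBOL = {"ISOLATEDMARGIN_MARGIN", "ISOLATEDMARGIN_ISOLATEDMARGIN"}
NEEDS_TO_SYMBOL = {"MARGIN_ISOLATEDMARGIN", "ISOLATEDMARGIN_ISOLATEDMARGIN"}

def validate_transfer_query(params: dict) -> dict:
    """Fail fast if a conditionally required symbol parameter is missing."""
    transfer_type = params["type"]
    if transfer_type in NEEDS_FROM_SYMBOL and "fromSymbol" not in params:
        raise ValueError(f"fromSymbol must be sent when type is {transfer_type}")
    if transfer_type in NEEDS_TO_SYMBOL and "toSymbol" not in params:
        raise ValueError(f"toSymbol must be sent when type is {transfer_type}")
    return params

# Passes: fromSymbol accompanies an isolated-margin source type.
validate_transfer_query({"type": "ISOLATEDMARGIN_MARGIN", "fromSymbol": "BNBUSDT"})
```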
get_sapi_v1_asset_wallet_balance (B)
Query User Wallet Balance (USER_DATA) — Query User Wallet Balance Weight(IP): 60
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable rate-limiting context ('Weight(IP): 60') which is critical behavioral information not found in the schema. However, it lacks details on return format, error conditions, or side effects expected from this read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but suffers from wasteful repetition: 'Query User Wallet Balance' appears twice with only '(USER_DATA)' and 'Weight(IP): 60' appended. This redundancy suggests poor structuring rather than purposeful conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter query tool with 100% schema coverage, the description is minimally adequate. However, given the lack of output schema and no annotations, it should ideally describe the returned wallet balance structure or at least specify which asset wallet (spot/funding) is being queried.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (timestamp, signature, recvWindow are all documented). The description adds no additional parameter semantics beyond the schema, which warrants the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool queries user wallet balance with a clear verb ('Query') and resource ('User Wallet Balance'). However, it fails to differentiate from similar sibling tools like 'get_api_v3_account' or 'get_sapi_v1_account_info', leaving ambiguity about which wallet type (spot, margin, funding) this specifically targets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. The '(USER_DATA)' tag implies authentication requirements but does not specify prerequisites or appropriate use cases compared to other account balance endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_blvt_redeem_record (B)
Redemption Record (USER_DATA) — - Only the data of the latest 90 days is available Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | query parameter: id (number) | |
| limit | No | default 1000, max 1000 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| tokenName | No | BTCDOWN, BTCUP | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the 90-day data retention limit and IP rate limiting (Weight: 1), but omits details about authentication requirements, pagination behavior, and what constitutes a 'redemption record' in the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief with no redundant text, but suffers from awkward formatting (em-dash followed by hyphen: '— -') that slightly hinders readability. Every fragment conveys necessary information (resource type, auth level, retention, rate limit).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters, no output schema, and zero annotations, the description is insufficient. It fails to explain what BLVT redemption records contain, how they relate to the subscription workflow, or what the caller receives beyond the 90-day constraint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms', 'default 1000, max 1000'), so the description is not required to compensate. It adds no additional parameter context, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'Redemption Record' and indicates it requires user authentication via '(USER_DATA)', but lacks a specific verb (Get/Retrieve/Query) and fails to distinguish from the sibling 'get_sapi_v1_blvt_subscribe_record' or explain what BLVT (Binance Leveraged Token) redemption entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a concrete usage constraint ('Only the data of the latest 90 days is available') and rate limit information ('Weight(IP): 1'), but offers no guidance on when to use this versus the subscribe record endpoint or the POST redeem endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_blvt_subscribe_record (B)
Query Subscription Record (USER_DATA) — - Only the data of the latest 90 days is available Weight(IP): 1 Returns: { id: number, tokenName: string, amount: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | query parameter: id (number) | |
| limit | No | Default 500; max 1000. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| tokenName | No | BTCDOWN, BTCUP | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the IP weight (1), data retention window (90 days), and partial return structure. However, it omits safety characteristics (read-only status), pagination behavior for the limit parameter, and error handling patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core action but has awkward formatting ('— -' mixing em-dash and hyphen). Information density is high with no wasted sentences, though the return type documentation is truncated with '...' which reduces utility.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description partially compensates by documenting the return object structure (id, tokenName, amount), but leaves gaps regarding pagination behavior, complete field enumeration, and the exact relationship between input filters (startTime/endTime) and the 90-day constraint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 8 parameters including timestamp, signature, and tokenName constraints. The description focuses on return values rather than input parameters, but the comprehensive schema makes additional description unnecessary, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Query Subscription Record' with the USER_DATA permission tag, clearly indicating it retrieves BLVT subscription history. However, it doesn't explicitly distinguish from sibling tools like post_sapi_v1_blvt_subscribe (which creates subscriptions) or get_sapi_v1_blvt_redeem_record (which queries redemptions instead of subscriptions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the 90-day data retention limit ('Only the data of the latest 90 days is available'), but provides no guidance on when to use this tool versus alternatives like the redeem record endpoint, nor does it mention prerequisites like required permissions or API key setup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_blvt_token_info (C)
BLVT Info (MARKET_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| tokenName | No | BTCDOWN, BTCUP |
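Since '(MARKET_DATA)' implies no signature is required, a call here is simpler than the signed USER_DATA flow. A sketch, assuming the conventional API-key header for MARKET_DATA endpoints and a path inferred from the tool name:

```python
import requests

# MARKET_DATA endpoints are conventionally sent with the API key header but no
# signature; tokenName is an optional filter (omit it to list all tokens).
resp = requests.get(
    "https://api.binance.com/sapi/v1/blvt/tokenInfo",
    params={"tokenName": "BTCUP"},
    headers={"X-MBX-APIKEY": "your-api-key"},  # placeholder
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```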
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It provides 'MARKET_DATA' (implying public, unauthenticated access) and 'Weight(IP): 1' (rate limit cost), which are useful behavioral hints. However, it fails to disclose return value structure, pagination behavior, or whether the data is real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (single fragment), avoiding verbosity. However, it reads like auto-generated metadata rather than a structured description, and the information density is low—consisting of a tautology, category label, and rate limit weight without explanatory prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional string parameter, no nested objects), the description is still inadequate. It omits what 'BLVT' stands for (Binance Leveraged Token), what fields are returned (critical since no output schema exists), and how the tokenName parameter filters results when omitted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage (tokenName is documented with example values 'BTCDOWN, BTCUP'). The description adds no parameter guidance, but with complete schema documentation, it meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'BLVT Info' tautologically restates the tool name without specifying what information is returned (e.g., token metadata, leverage ratios, fees). While '(MARKET_DATA)' indicates the API category, it doesn't explain the tool's specific function or how it differs from sibling BLVT tools like get_sapi_v1_blvt_user_limit or get_sapi_v1_blvt_subscribe_record.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., get_sapi_v1_blvt_user_limit). No mention of prerequisites, authentication requirements, or whether this retrieves real-time vs. static token configuration data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_blvt_user_limit (C)
BLVT User Limit Info (USER_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| tokenName | No | BTCDOWN, BTCUP | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It provides the rate-limiting weight ('Weight(IP): 1') and hints at authentication requirements ('USER_DATA'), but fails to disclose what limits are checked, whether the call is idempotent, or what permissions are required beyond the cryptic USER_DATA label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at one sentence/fragment with no wasted words. However, it prioritizes rate limit metadata ('Weight(IP): 1') over explanatory content, making it overly concise for effective agent comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should explain return values and behavioral context. It fails to describe what limit data is returned (e.g., remaining subscription quota, maximum redemption amounts) or how this differs from general token information endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear parameter descriptions including examples for tokenName (BTCDOWN, BTCUP) and constraints for recvWindow. The description adds no parameter guidance, but baseline 3 is appropriate when schema documentation is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'BLVT User Limit Info' which distinguishes it from sibling token info and record tools, but 'Info' is vague. It doesn't specify what limits are returned (subscription caps, redemption limits, holdings) or what BLVT stands for, leaving ambiguity about the tool's exact function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus related BLVT tools like get_sapi_v1_blvt_token_info or the subscription/redeem record endpoints. No mention of prerequisites or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_bnb_burn (B)
Get BNB Burn Status(USER_DATA) — Weight(IP): 1 Returns: { spotBNBBurn: boolean, interestBNBBurn: boolean }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
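Given the documented return shape, reading the two flags is straightforward. A sketch reusing the hypothetical `signed_get` helper from the wallet-balance example above; the path is inferred from the tool name:

```python
# Field names follow the documented return shape { spotBNBBurn, interestBNBBurn }.
status = signed_get("/sapi/v1/bnb/burn", {})
print("spot fee discount enabled:", status["spotBNBBurn"])
print("margin interest discount enabled:", status["interestBNBBurn"])
```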
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 1') and the exact return structure, which adds value beyond the input schema. However, it lacks disclosure on authentication requirements, idempotency, or error handling behaviors that would help an agent invoke it safely.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the essential action and resource. It efficiently packs the return structure and weight information into a single line. However, the dense formatting (lacking spaces or line breaks between distinct concepts like weight and returns) slightly impedes readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates adequately by documenting the return structure inline. For a simple read operation with three documented parameters, the description covers the essential behavioral and return value context needed for invocation, though it could better explain the business logic of BNB burning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (timestamp, signature, and recvWindow are all documented in the schema), the baseline score applies. The description does not add additional semantic meaning, examples, or format details beyond what the schema already provides for these parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Get') and resource ('BNB Burn Status') clearly, including the return structure ({ spotBNBBurn, interestBNBBurn }) which clarifies the scope. However, it does not explain what 'BNB Burn' means (fee discount mechanism) nor explicitly differentiate from the sibling 'post_sapi_v1_bnb_burn' tool within the description text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites for authentication (beyond the implicit 'USER_DATA' label), or when to prefer the POST counterpart for updating settings. It merely states what the tool retrieves.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_c2c_order_match_list_user_order_history (A)
Get C2C Trade History (USER_DATA) — - If startTimestamp and endTimestamp are not sent, the recent 30-day data will be returned. - The max interval between startTimestamp and endTimestamp is 30 days. Weight(IP): 1 Returns: { code: string, message: string, data: { orderNumber: string, advNo: string, tradeType: string, asset: string, fiat: string, fiatSymbol: string, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| rows | No | default 100, max 100 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| tradeType | Yes | query parameter: tradeType ("BUY" \| "SELL") | |
| recvWindow | No | The value cannot be greater than 60000 | |
| endTimestamp | No | UTC timestamp in ms | |
| startTimestamp | No | UTC timestamp in ms |
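The 30-day interval cap means longer look-backs have to be walked window by window, paging within each window via page/rows. A sketch reusing the `signed_get` helper from the wallet-balance example; the path is inferred from the tool name and the `data` field from the inline return structure above:

```python
DAY_MS = 24 * 60 * 60 * 1000

def c2c_history(trade_type: str, start_ms: int, end_ms: int) -> list:
    rows = []
    window_start = start_ms
    while window_start < end_ms:
        # The description caps each startTimestamp/endTimestamp interval at 30 days.
        window_end = min(window_start + 30 * DAY_MS, end_ms)
        page = 1
        while True:
            resp = signed_get("/sapi/v1/c2c/orderMatch/listUserOrderHistory", {
                "tradeType": trade_type,
                "startTimestamp": window_start,
                "endTimestamp": window_end,
                "page": page,
                "rows": 100,  # schema default and maximum
            })
            batch = resp.get("data") or []
            rows.extend(batch)
            if len(batch) < 100:  # short page: this window is exhausted
                break
            page += 1
        window_start = window_end
    return rows
```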
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: default data window behavior, interval constraints, rate limiting costs, and return structure (including nested data array). It effectively compensates for missing annotation hints regarding read-only behavior and rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly formatted (bullet points run together with dashes, JSON return structure inline). While no sentences are wasted, the return type documentation would be better served by an output schema rather than an inline JSON blob.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately documents return values inline and covers parameter constraints. However, it omits explicit authentication guidance (relying on 'signature' parameter and 'USER_DATA' implication) and error handling scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds crucial semantic context about the relationship between startTimestamp and endTimestamp (30-day max interval, default recent 30-day behavior) that pure schema descriptions cannot convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource ('C2C Trade History') and operation ('Get'), with the 'USER_DATA' tag indicating authenticated access. It distinguishes from siblings (e.g., margin, futures, spot histories) by specifying 'C2C' (customer-to-customer) trading context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important temporal constraints (30-day default window, max 30-day interval between timestamps) and rate limit weight ('Weight(IP): 1'), but lacks explicit guidance on when to choose this over other order history tools or authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_config_getall (B)
All Coins' Information (USER_DATA) — Get information of coins (available for deposit and withdraw) for user. Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Includes 'Weight(IP): 10' indicating rate limit cost, and 'USER_DATA' hints at authentication requirements. However, with no annotations provided, it fails to explicitly state this is read-only, safe to call, or describe caching recommendations. The schema requires 'signature', confirming auth needs, but the description doesn't explain error modes or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient two-part structure with a title-like prefix and functional description. The 'Weight(IP)' append is technical but informative. No redundant sentences, though 'for user' at the end is slightly awkward.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the simple 3-parameter input structure, but lacking response documentation. Since no output schema exists and the tool returns complex coin configuration data (networks, fees, limits), the description should ideally enumerate key return fields or structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (signature, timestamp, recvWindow all documented). The description adds no parameter-specific guidance beyond the schema, meeting the baseline expectation when structured data is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves coin information specifically for deposit/withdraw availability, using specific verb 'Get' and resource 'coins'. Distinguishes from sibling transaction endpoints (e.g., deposit_hisrec, withdraw_history) by focusing on configuration data rather than history or actions, though it could explicitly mention this returns the master list of supported coins.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus other capital endpoints (like get_sapi_v1_capital_deposit_address) or prerequisites. While 'USER_DATA' implies authentication, there is no explicit 'use this first to get coin list before requesting addresses' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_contract_convertible_coins (A)
[DISCOVERY] Query auto-converting stable coins (USER_DATA) — Get a user's auto-conversion settings in deposit/withdrawal Weight(UID): 600 Returns: { convertEnabled: boolean, coins: string[], exchangeRates: { USDC: string, TUSD: string, USDP: string } }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and successfully discloses: (1) authentication requirement via '(USER_DATA)' tag, (2) rate limit cost 'Weight(UID): 600', and (3) return value structure with specific fields (convertEnabled, coins, exchangeRates).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently packed with metadata ([DISCOVERY]), purpose, rate limit, and return schema. Slightly dense formatting with inline JSON return value, but every element serves a necessary function without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read operation with no structured output schema, the description adequately compensates by documenting the return structure inline. Rate limit and auth context are included, making it complete for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, establishing baseline 4. No parameter documentation is required, and the description correctly does not invent parameters where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Query', 'Get') with clear resource ('auto-conversion settings', 'stable coins') and scope ('deposit/withdrawal'). The 'Get' prefix and explicit verb distinguish it from sibling 'post_sapi_v1_capital_contract_convertible_coins' which modifies these settings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'USER_DATA' tag and the GET operation nature, but lacks explicit when-to-use guidance or direct comparison to the POST sibling alternative. The distinction relies on naming convention rather than descriptive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_deposit_address (A)
Deposit Address (supporting network) (USER_DATA) — Fetch deposit address with network. - If network is not send, return with default network of the coin. - You can get network and isDefault in networkList in the response of Get /sapi/v1/capital/config/getall (HMAC SHA256). Weight(IP): 10 Returns: { address: string, coin: string, tag: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Coin name | |
| network | No | query parameter: network (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
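The description's cross-reference suggests a two-step flow: discover valid networks from config/getall, then request the address. A sketch reusing the `signed_get` helper from the wallet-balance example; the field names (`networkList`, `isDefault`) come from the description above, and the paths are inferred from the tool names:

```python
def deposit_address(coin: str, network: str | None = None) -> dict:
    if network is None:
        # Discover the coin's default network, per the description's pointer
        # to /sapi/v1/capital/config/getall. Omitting `network` entirely would
        # equally let the API fall back to the coin's default.
        coins = signed_get("/sapi/v1/capital/config/getall", {})
        entry = next(c for c in coins if c["coin"] == coin)
        network = next(n["network"] for n in entry["networkList"] if n["isDefault"])
    return signed_get("/sapi/v1/capital/deposit/address", {"coin": coin, "network": network})

addr = deposit_address("USDT")
print(addr["address"], addr.get("tag", ""))
```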
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively. It discloses the authentication requirement ('USER_DATA', 'HMAC SHA256'), rate limit weight ('Weight(IP): 10'), and the return structure ('Returns: { address: string, coin: string, tag: string, ... }'). It also explains the conditional behavior when network is unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but structurally fragmented, using dashes for bullet points within a single text block. It contains a typo ('send' instead of 'sent'). While every piece of information is relevant, the run-on structure and error detract from clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description compensates well by documenting the return object structure, rate limits, and authentication requirements. It covers the 5 parameters adequately and provides necessary context for the tool's safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds significant semantic value. It explains that the network parameter is optional with default behavior, and crucially, it directs users to the config/getall endpoint to discover valid network values and isDefault flags, which is essential context not present in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Fetch') and resource ('deposit address') and clarifies the scope ('with network'). The singular 'address' distinguishes it from the sibling 'get_sapi_v1_capital_deposit_address_list', while the explicit mention of network support differentiates its functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on the network parameter's default behavior when omitted ('return with default network'). It also cross-references the sibling tool 'get_sapi_v1_capital_config_getall' for discovering valid network values, which aids in correct usage. However, it does not explicitly contrast when to use this versus the 'list' or 'sub' address variants.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_deposit_address_list (B)
[DISCOVERY] Fetch deposit address list with network (USER_DATA) — Fetch deposit address list with network. Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | query parameter: coin (string) | |
| network | No | query parameter: network (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It successfully indicates authentication requirements (USER_DATA) and rate limit costs (Weight(IP): 10), but omits details about pagination, the specific structure of the returned list, or error handling behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from significant redundancy, repeating 'Fetch deposit address list with network' essentially twice separated by an em-dash. The '[DISCOVERY]' prefix appears to be metadata tagging rather than descriptive content, making the structure inefficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description provides the minimum viable context: purpose and rate limits. However, for a 'list' endpoint with 5 parameters, it lacks details about return format, pagination behavior, or what differentiates this from retrieving a single deposit address.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with basic type information (e.g., 'UTC timestamp in ms', 'Signature'). With high schema coverage, the baseline score applies. The description adds no additional parameter semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Fetch') and identifies the resource ('deposit address list with network'), clearly indicating it retrieves multiple addresses. It implicitly distinguishes from the sibling 'get_sapi_v1_capital_deposit_address' (singular) through the 'list' nomenclature and 'with network' qualifier, though it doesn't explicitly contrast the two.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides technical metadata including '(USER_DATA)' indicating authentication requirements and 'Weight(IP): 10' for rate limiting context. However, it lacks explicit guidance on when to use this list endpoint versus the singular 'get_sapi_v1_capital_deposit_address' alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_deposit_hisrec (A)
Deposit History(supporting network) (USER_DATA) — Fetch deposit history. - Please notice the default startTime and endTime to make sure that time interval is within 0-90 days. - If both startTime and endTime are sent, time between startTime and endTime must be less than 90 days. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Coin name | |
| limit | No | Default 500; max 1000. | |
| offset | No | query parameter: offset (number) | |
| status | No | * `0` - pending * `6` - credited but cannot withdraw * `1` - success | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
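The 90-day cap and the offset/limit pair suggest a windowed, paged fetch. A sketch reusing the `signed_get` helper from the wallet-balance example; the path is inferred from the tool name, and record-count offset paging is an assumption, since the schema documents `offset` only as a number:

```python
DAY_MS = 24 * 60 * 60 * 1000

def deposit_history(start_ms: int, end_ms: int, coin: str | None = None) -> list:
    # Per the description, an explicit interval must stay within 90 days.
    assert end_ms - start_ms < 90 * DAY_MS, "time interval must be under 90 days"
    records, offset = [], 0
    while True:
        params = {"startTime": start_ms, "endTime": end_ms, "limit": 1000, "offset": offset}
        if coin:
            params["coin"] = coin
        batch = signed_get("/sapi/v1/capital/deposit/hisrec", params)
        records.extend(batch)
        if len(batch) < 1000:  # short page: no more records in this interval
            return records
        offset += len(batch)
```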
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rate limit cost ('Weight(IP): 1') and time window constraints. Without annotations, the description carries the full burden but omits explicit read-only/safety declaration, error behavior for invalid time ranges, and return value structure (no output schema exists to compensate).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the main action ('Fetch deposit history') followed by specific constraints. The dash-bullet formatting is slightly informal but efficient with no wasted words. All sentences provide actionable constraints or classification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters and no output schema, the description adequately covers input constraints but fails to describe the return structure (what fields constitute a deposit record). This gap is significant for a financial data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description adds critical semantic constraints not in the schema: the 90-day limit for time intervals and the existence of default values for startTime/endTime. This prevents common query errors.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch deposit history' with specific context that this is USER_DATA (authenticated) and supports networks. However, it does not explicitly distinguish from the sibling 'get_sapi_v1_capital_deposit_sub_hisrec' (sub-account deposits), which could cause selection ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific operational constraints (90-day maximum time interval between startTime/endTime) and mentions default time behavior. Lacks explicit guidance on when to choose this over the sub-account deposit history endpoint or prerequisites like API key requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_deposit_sub_address (C)
Sub-account Spot Assets Summary (For Master Account) — Fetch sub-account deposit address Weight(IP): 1 Returns: { address: string, coin: string, tag: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Coin name | |
| email | Yes | Sub-account email | |
| network | No | query parameter: network (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations or output schema, the description carries the burden of behavioral disclosure. It provides valuable context by documenting the rate limit weight 'Weight(IP): 1' and the return object structure '{ address: string, coin: string, tag: string, ... }'. However, it omits safety profiles (read-only vs destructive) and error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently packed into a single sentence with em-dashes, avoiding verbosity. However, the inclusion of the erroneous 'Sub-account Spot Assets Summary' prefix wastes space and creates confusion, preventing a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by outlining the return object fields. With 6 parameters fully documented in the schema and no annotations, the description provides minimally sufficient context for invocation, though it could benefit from explaining the master/sub-account relationship or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (all 6 parameters documented), establishing a baseline score of 3. The description adds no additional parameter semantics (e.g., valid formats for 'network', how to generate 'signature'), but does not need to given the complete schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description contains a likely copy-paste error, starting with 'Sub-account Spot Assets Summary' (which describes sibling tool get_sapi_v1_sub_account_spot_summary) before contradicting itself with 'Fetch sub-account deposit address.' This confusion undermines the specific verb+resource pairing, making the tool's actual purpose ambiguous despite the clarifying second clause.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like get_sapi_v1_capital_deposit_address (master account) or get_sapi_v1_managed_subaccount_deposit_address. The phrase '(For Master Account)' hints at authorization scope but does not constitute usage guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_deposit_sub_hisrec (C)
Sub-account Deposit History (For Master Account) — Fetch sub-account deposit history Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Coin name | |
| email | Yes | Sub-account email | |
| limit | No | query parameter: limit (number) | |
| offset | No | query parameter: offset (number) | |
| status | No | * `0` - pending * `6` - credited but cannot withdraw * `1` - success | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden but only discloses the rate limit weight ('Weight(IP): 1'). It fails to clarify that this is a read-only operation, describe pagination behavior for the limit/offset parameters, or explain the response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single line that front-loads the key functionality ('Sub-account Deposit History'). However, appending 'Weight(IP): 1' at the end creates slight awkwardness as this technical detail interrupts the conceptual flow, though the overall length is appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 10 parameters (including pagination and authentication) and no output schema, the description is minimally adequate by specifying the master account context. However, it lacks explanation of return values, pagination behavior, or error conditions that would help an agent invoke the tool correctly without trial and error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms', 'Sub-account email'), making additional parameter semantics in the description unnecessary. The description provides no parameter details, meeting the baseline expectation when the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Fetch' and identifies the resource as 'sub-account deposit history'. It distinguishes from siblings by specifying '(For Master Account)', indicating this is for master accounts querying sub-account data rather than standard deposit history queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like `get_sapi_v1_capital_deposit_hisrec` (master account deposit history) or `get_sapi_v1_capital_withdraw_history`. It omits prerequisites such as master account privileges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_withdraw_address_list (B)
[DISCOVERY] Fetch withdraw address list (USER_DATA) — Fetch withdraw address list Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must carry the full burden of behavioral disclosure. It successfully includes the rate limit weight 'Weight(IP): 10' and security classification 'USER_DATA', but lacks details on response format, pagination behavior, or idempotency guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but suffers from redundancy, repeating 'Fetch withdraw address list' essentially twice (once before and once after the em-dash). While the [DISCOVERY] tag and weight metadata are useful, the repetition indicates inefficient information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameter-less read operation without an output schema, the description covers the basic operation and rate limits but fails to describe what data is returned (e.g., saved addresses, network details, whitelisting status), leaving a gap in contextual completeness for an agent expecting to understand the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters according to the input schema, establishing a baseline score of 4. The description correctly does not fabricate or mention any parameters, accurately reflecting the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Fetch withdraw address list' and identifies the endpoint as USER_DATA (requiring API key authentication). It distinguishes from deposit-related siblings by specifying 'withdraw', though it could further clarify the distinction from the withdraw_history endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like get_sapi_v1_capital_withdraw_history (transactions vs addresses) or get_sapi_v1_capital_deposit_address_list. The USER_DATA tag implies authentication requirements but usage prerequisites are not stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_capital_withdraw_history (A)
Withdraw History (supporting network) (USER_DATA) — Fetch withdraw history. This endpoint specifically uses per second UID rate limit, user's total second level IP rate limit is 180000/second. Response from the endpoint contains header key X-SAPI-USED-UID-WEIGHT-1S, which defines weight used by the current IP. - network may not be in the response for old withdraw. - Please notice the default startTime and endTime to make sure that time interval is within 0-90 days. - If both startTime and endTime are sent, time between startTime and endTime must be less than 90 days - If withdraw
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Coin name | |
| limit | No | Default 500; max 1000. | |
| offset | No | query parameter: offset (number) | |
| status | No | * `0` - Email Sent * `1` - Cancelled * `2` - Awaiting Approval * `3` - Rejected * `4` - Processing * `5` - Failure * `6` - Completed | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| withdrawOrderId | No | query parameter: withdrawOrderId (string) |
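The status enum makes targeted queries straightforward, for example fetching only completed withdrawals. A sketch reusing the `signed_get` helper from the wallet-balance example; the path is inferred from the tool name and the status code from the table above:

```python
import time

DAY_MS = 24 * 60 * 60 * 1000
now_ms = int(time.time() * 1000)

# status 6 = Completed, per the enum in the parameter table above.
completed = signed_get("/sapi/v1/capital/withdraw/history", {
    "status": 6,
    "startTime": now_ms - 7 * DAY_MS,  # well within the 90-day constraint
    "endTime": now_ms,
})
print(f"{len(completed)} completed withdrawals in the last 7 days")
```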
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden and succeeds well: it details rate limits (per-second UID weight, IP limits), response headers (X-SAPI-USED-UID-WEIGHT-1S), and data caveats (network field may be missing for old withdrawals).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from a severe truncation (ends mid-sentence '- If withdraw') and redundant phrasing ('Withdraw History... Fetch withdraw history'). The bullet-point structure is inconsistent and the abrupt ending indicates incomplete content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially compensates by describing response headers, but the truncation prevents full completeness. It fails to describe the actual response body structure (what fields constitute a withdraw record) despite having 10 input parameters to contextualize.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds critical semantic context about the startTime/endTime parameters—specifically the 90-day constraint logic and default behaviors that pure schema types cannot express.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch withdraw history' and identifies the resource (withdrawals). The '(supporting network)' qualifier hints at specific scope, though the title and opening clause are tautological ('Withdraw History... Fetch withdraw history') and the text could better distinguish this tool from its sibling deposit-history endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit temporal constraints (0-90 day window, startTime/endTime rules) that guide correct usage. However, it lacks explicit guidance on when to choose this over sibling alternatives like get_sapi_v1_capital_deposit_hisrec or get_sapi_v1_capital_withdraw_address_list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
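The 90-day window rule and the numeric status codes in the parameter table lend themselves to small client-side guards. A hedged sketch, assuming millisecond UTC timestamps as the table states; the helper names are invented for illustration:

```python
NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000

# Status labels transcribed from the parameter table above.
WITHDRAW_STATUS = {
    0: "Email Sent", 1: "Cancelled", 2: "Awaiting Approval",
    3: "Rejected", 4: "Processing", 5: "Failure", 6: "Completed",
}

def check_withdraw_window(start_time: int | None, end_time: int | None) -> None:
    """Reject spans the endpoint would refuse: 90 days or more between bounds."""
    if start_time is not None and end_time is not None:
        if end_time - start_time >= NINETY_DAYS_MS:
            raise ValueError("startTime/endTime span must be less than 90 days")
```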
get_sapi_v1_convert_asset_info (B)
Query order quantity precision per asset (USER_DATA) — Query for supported asset precision information Weight(IP): 100
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses USER_DATA classification (indicating authentication required) and Weight(IP): 100 (rate limit cost), but fails to describe the return format, error conditions, or caching behavior expected from a read-only query endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence front-loaded with the core action. Minor redundancy exists with 'Query... — Query...' structure, but every clause provides distinct information (purpose, classification, rate limit) with no extraneous fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description adequately covers the tool's purpose but lacks information about the response structure or data format returned. For a data-querying tool, some description of return values would be necessary for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (timestamp, signature, recvWindow all documented), establishing a baseline of 3. The description adds no additional parameter context, but given the high schema coverage, no additional compensation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queries 'order quantity precision per asset' and 'supported asset precision information' with specific verb and resource. However, it does not explicitly differentiate from the sibling get_sapi_v1_convert_exchange_info endpoint, leaving some ambiguity about when to choose this over other convert-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as get_sapi_v1_convert_exchange_info or get_sapi_v1_convert_order_status. There are no prerequisites, conditions, or exclusion criteria mentioned despite having multiple related sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_convert_exchange_info (B)
[DISCOVERY] List All Convert Pairs — Query for all convertible token pairs and the tokens’ respective upper/lower limits Weight(IP): 3000
| Name | Required | Description | Default |
|---|---|---|---|
| toAsset | No | User receives coin | |
| fromAsset | No | User spends coin |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable rate-limiting context ('Weight(IP): 3000') and specifies the data scope (upper/lower limits). However, it lacks disclosure of safety profile (read-only vs destructive), caching behavior, or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence front-loaded with the key action. The metadata tag '[DISCOVERY]' and the rate-limit weight are appropriately placed; although the weight notation slightly disrupts readability, it is essential technical information for Binance API usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with 2 optional parameters and no output schema, the description adequately explains what data is returned. However, it lacks explanation of filter logic (how fromAsset/toAsset interact) and omits safety characteristics that would be necessary for a complete behavioral picture without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('User receives coin', 'User spends coin') and includes examples. The description does not add semantic meaning beyond what the schema already provides, warranting the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('List', 'Query') and resource ('Convert Pairs', 'convertible token pairs'), including specific data returned ('upper/lower limits'). It distinguishes this from transaction-oriented siblings (e.g., post_sapi_v1_convert_accept_quote) by focusing on pair discovery, though it does not explicitly differentiate from get_sapi_v1_convert_asset_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '[DISCOVERY]' tag implies browsing usage, but there is no explicit guidance on when to use this tool versus other convert-related endpoints, nor does it explain how to use the optional filters (fromAsset/toAsset) or when to leave them empty.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
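To make the filter semantics concrete: a sketch of how the two optional parameters might be combined. It assumes the tool maps onto Binance's public /sapi/v1/convert/exchangeInfo endpoint (inferred from the tool name) and that the call is unsigned, consistent with the parameter table above:

```python
import requests

BASE = "https://api.binance.com"

def convert_pairs(from_asset: str | None = None, to_asset: str | None = None) -> list:
    """List convertible pairs, optionally narrowed by either side of the trade."""
    params = {}
    if from_asset is not None:
        params["fromAsset"] = from_asset  # the coin the user spends
    if to_asset is not None:
        params["toAsset"] = to_asset      # the coin the user receives
    resp = requests.get(f"{BASE}/sapi/v1/convert/exchangeInfo", params=params)
    resp.raise_for_status()
    # Each entry should carry the pair plus its upper/lower amount limits,
    # per the tool description.
    return resp.json()

pairs = convert_pairs(from_asset="BNB")  # everything BNB can convert into
```

Omitting both filters returns the full pair list, which is presumably why the call carries the heavy 3000 IP weight.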
get_sapi_v1_convert_limit_query_open_orders (C)
Query limit open orders (USER_DATA) — Enable users to query for all existing limit orders Weight(UID): 3000 Returns: { list: { quoteId: string, orderId: number, orderStatus: string, fromAsset: string, fromAmount: string, toAsset: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden. It provides the rate limit weight (3000) and an inline return structure schema, which is valuable. However, it lacks disclosure on pagination behavior, maximum results returned, idempotency, or error conditions specific to this endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the action. It efficiently packs the endpoint type (USER_DATA), rate limit weight, and return structure into a single line. While readable, it could benefit from structural separation of concerns (purpose, limits, returns) for better scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The inline documentation of the return structure partially compensates for the lack of a formal output schema. However, the description is incomplete regarding operational details: pagination mechanisms, time range limitations, and how this interacts with the broader convert order lifecycle (placement via post_sapi_v1_convert_limit_place_order).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage and only standard authentication/timing parameters (signature, timestamp, recvWindow), the schema sufficiently documents inputs. The description implies 'get all' semantics by stating 'query for all existing limit orders,' which aligns with the absence of filtering parameters in the schema, but adds no additional parameter guidance beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool queries 'limit open orders' but omits the critical domain context that these are specifically currency conversion (convert) limit orders, not general spot or margin orders. Given siblings like get_api_v3_open_orders (spot) and get_sapi_v1_margin_open_orders, the description should explicitly mention 'convert' or 'currency conversion' to clarify scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives like get_sapi_v1_convert_order_status (which queries specific orders) or get_api_v3_open_orders. No mention of prerequisites, though the USER_DATA parenthetical implies authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_convert_order_status (C)
Order status (USER_DATA) — Query order status by order ID. Weight(UID): 100 Returns: { orderId: number, orderStatus: string, fromAsset: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | No | query parameter: orderId (string) | |
| quoteId | No | query parameter: quoteId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and provides the return structure (JSON fields), rate limit cost (100), and authentication type (USER_DATA). However, it lacks disclosure on idempotency, caching behavior, or error conditions specific to convert orders.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded, efficiently packing the endpoint type, operation, weight, and return format into a single sentence without redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
It provides the return structure inline, which compensates for the missing output schema, but it lacks domain context about 'convert' orders and does not explain the convert workflow (quote vs. order) that would help an agent understand the parameter choices.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'by order ID' which aligns with the orderId parameter, but does not clarify the relationship between orderId and quoteId or when to use each.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it queries order status by order ID, but fails to mention the 'convert' (currency conversion) domain despite this being critical to distinguish it from siblings like get_api_v3_order or get_sapi_v1_margin_order. The scope is underspecified given the crowded sibling space.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternative order query tools, or whether to use orderId versus quoteId parameters. The Weight(UID) info is useful but does not constitute usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_convert_trade_flow (A)
Get Convert Trade History (USER_DATA) — - The max interval between startTime and endTime is 30 days. Weight(UID): 3000 Returns: { list: { quoteId: string, orderId: number, orderStatus: string, fromAsset: string, fromAmount: string, toAsset: string, ... }[], startTime: number, endTime: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 100, max 1000 | |
| endTime | Yes | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | Yes | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description shoulders a significant behavioral burden and does so effectively. It discloses rate limiting (3000 weight), temporal constraints (30-day max window), and the return structure (a list of objects with quoteId, orderId, orderStatus, etc.). An explicit read-only declaration and stated auth requirements are missing, but the USER_DATA tag and signature parameter imply authenticated access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but poorly formatted. The '— -' punctuation error and the lack of spacing before 'Returns:' create visual clutter. The JSON-like return structure, while valuable, is crammed into a single text blob. Every sentence earns its place content-wise, but the structural presentation hinders readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by describing return structure (list array with trade objects). However, omits pagination behavior (how to use limit/recvWindow effectively), doesn't explain the USER_DATA security classification implications, and lacks error scenario documentation for invalid date ranges.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions (UTC timestamps, default/max limits). The description adds value by emphasizing the 30-day relationship between startTime and endTime, but doesn't elaborate on parameter interactions (e.g., that timestamp must be current time for signature) or provide format examples beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Get Convert Trade History' with USER_DATA classification, clearly identifying the resource (convert trades) and operation (retrieval). Distinguishes from siblings like get_sapi_v1_convert_order_status by emphasizing 'History' vs single order queries, though it doesn't explicitly clarify when to prefer this over querying specific order IDs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical constraint 'max interval between startTime and endTime is 30 days' and rate limit 'Weight(UID): 3000'. However, lacks explicit when-to-use guidance compared to alternatives like get_sapi_v1_convert_order_status or get_sapi_v1_convert_limit_query_open_orders, and omits prerequisites like required API key permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_copy_trading_futures_lead_symbol (C)
Get Futures Lead Trading Symbol Whitelist(USER_DATA) — Get Futures Lead Trading Symbol Whitelist Weight(IP): 20 Returns: { code: string, message: string, data: { symbol: string, baseAsset: string, quoteAsset: string } }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (Weight(IP): 20) and provides an inline return structure schema, which compensates for the lack of output_schema. However, it fails to explain the USER_DATA permission requirements, error scenarios, or whether the whitelist is static or dynamic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence with poor structure, appearing to concatenate a title, endpoint description, weight metadata, and return type. The repetition of 'Get Futures Lead Trading Symbol Whitelist' and lack of punctuation make it hard to parse, though it is not wordy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the inline documentation of return fields (code, message, data structure) is necessary and present. However, the description lacks context on what constitutes a 'lead symbol' or how the whitelist is used in copy trading workflows, leaving domain gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow all documented), establishing a baseline of 3. The description adds no additional parameter context (e.g., that signature must be HMAC-SHA256, or typical recvWindow values), so it meets but does not exceed the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
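The assessment's point about HMAC-SHA256 is worth making concrete. Binance signed endpoints conventionally sign the urlencoded query string with the API secret; a minimal sketch, with key names illustrative:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, secret_key: str) -> dict:
    """Return params with timestamp and HMAC-SHA256 signature attached."""
    signed = dict(params, timestamp=int(time.time() * 1000))
    query = urlencode(signed)
    signed["signature"] = hmac.new(
        secret_key.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return signed

# recvWindow bounds how far server time may drift past `timestamp` before
# the request is rejected; 5000 ms is a common conservative choice.
signed = sign_params({"recvWindow": 5000}, secret_key="YOUR_API_SECRET")
```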
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Futures Lead Trading Symbol Whitelist) and action (Get), distinguishing it from sibling copy trading tools like `get_sapi_v1_copy_trading_futures_user_status` by focusing on symbols rather than user status. However, the term 'Lead Trading' assumes domain knowledge about copy trading mechanics, and the formatting is garbled with repetition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus other copy trading endpoints, prerequisites for accessing the whitelist, or conditions under which it should not be called. The USER_DATA tag implies authentication requirements but is not explained.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_copy_trading_futures_user_status (B)
Get Futures Lead Trader Status(TRADE) — Get Futures Lead Trader Status Weight(UID): 20 Returns: { code: string, message: string, data: { isLeadTrader: boolean, time: number }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return structure (isLeadTrader boolean, timestamp) and rate limit weight (20), which is valuable behavioral context. However, it omits error conditions, caching behavior, and the specific meaning of the TRADE designation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence format efficiently packs the purpose, rate limit, and return type. The repetition of 'Get Futures Lead Trader Status' is slightly awkward, but every clause serves a distinct purpose (categorization, weight, response format).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the inline return structure documentation partially compensates. However, for a specialized futures/copy-trading endpoint, the description lacks domain context about what constitutes a 'Lead Trader' and when this status check is necessary in a workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow are all documented), so the description appropriately relies on the schema. The mention of 'Weight(UID): 20' subtly implies user-scoped rate limiting but doesn't add substantial semantic meaning beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Futures Lead Trader Status'), distinguishing it from the sibling tool 'get_sapi_v1_copy_trading_futures_lead_symbol' which handles symbols rather than user status. The (TRADE) tag indicates the API category, though it assumes familiarity with 'Lead Trader' terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus other copy trading endpoints, nor are prerequisites mentioned. The 'Weight(UID): 20' hints at rate limit costs but doesn't explain the conditions under which this query should be preferred over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_dci_product_accounts (B)
Check Dual Investment accounts(USER_DATA) — Check Dual Investment accounts Weight(IP): 1 Returns: { totalAmountInBTC: string, totalAmountInUSDT: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It provides specific rate limiting information ('Weight(IP): 1') and documents the return structure ({ totalAmountInBTC, totalAmountInUSDT }), which compensates for the lack of a formal output schema. It also flags USER_DATA, indicating authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from immediate repetition ('Check Dual Investment accounts... — Check Dual Investment accounts'). The structure crams technical metadata (USER_DATA, Weight(IP), Returns) into a single dense line with inconsistent formatting, reducing readability despite the brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only account retrieval tool with three simple parameters, the description is adequately complete. The inline return value documentation substitutes effectively for a missing output schema, and the rate limit weight is disclosed. Only error handling and specific authentication-method details are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters (signature, timestamp, recvWindow). The description does not add parameter-specific semantics, but given the high schema coverage, this meets the baseline expectation without penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Dual Investment accounts) and access type (USER_DATA), distinguishing it from sibling DCI tools like get_sapi_v1_dci_product_list and get_sapi_v1_dci_product_positions. However, 'Check' is slightly vague (could imply verification vs retrieval), and the phrase 'Dual Investment accounts' is unnecessarily repeated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the related get_sapi_v1_dci_product_positions or get_sapi_v1_dci_product_list endpoints. No prerequisites or authentication requirements are mentioned beyond the implicit USER_DATA tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_dci_product_list (B)
[DISCOVERY] Get Dual Investment product list(USER_DATA) — Get Dual Investment product list Weight(IP): 1 Returns: { total: number, list: { id: string, investCoin: string, exercisedCoin: string, strikePrice: string, duration: number, settleDate: number, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| pageIndex | No | Page number, default is first page, start from 1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| investCoin | Yes | Asset used for subscribing, e.g.: if you subscribe to a high sell product (call option), you should input: - optionType: CALL, - exercisedCoin: USDT, - investCoin: BNB; if you subscribe to a low buy product (put option), you should input: - optionType: PUT, - exercisedCoin: BNB, - investCoin: USDT; | |
| optionType | Yes | Input CALL or PUT | |
| recvWindow | No | The value cannot be greater than 60000 | |
| exercisedCoin | Yes | Target exercised asset, e.g.: if you subscribe to a high sell product (call option), you should input: - optionType: CALL, - exercisedCoin: USDT, - investCoin: BNB; if you subscribe to a low buy product (put option), you should input: - optionType: PUT, - exercisedCoin: BNB, - investCoin: USDT; |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully documents the rate limit ('Weight(IP): 1') and provides an inline output schema structure ('Returns: {total, list...}'), which compensates for the lack of formal output schema. However, it omits auth flow details, caching behavior, and pagination limits beyond the parameter definitions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but somewhat repetitive ('Get Dual Investment product list' appears twice). The inline return type documentation is dense but necessary given the lack of output schema. It is front-loaded with the action but could separate metadata (Weight, Returns) more cleanly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial product listing tool with 8 parameters, the description is minimally complete. It leverages the comprehensive schema and provides return type documentation. However, it lacks explanation of what 'Dual Investment' products are, how pagination behaves (beyond param names), or error scenarios that an agent might encounter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with detailed explanations of parameter relationships (e.g., the CALL/PUT logic for investCoin/exercisedCoin). Since the schema fully documents all 8 parameters including constraints, the description does not need to add param semantics, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Dual Investment product list'). While it restates the tool name, it specifies the domain concept 'Dual Investment' and distinguishes this from sibling tools like get_sapi_v1_dci_product_positions and get_sapi_v1_dci_product_accounts by focusing on the 'product list' resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus related DCI tools (e.g., when to query product list vs. user positions). The '(USER_DATA)' tag implies authentication requirements but does not explain prerequisites or filtering strategies for the list query.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_dci_product_positions (C)
Get Dual Investment positions(USER_DATA) — Get Dual Investment positions (batch) Weight(IP): 1 Returns: { total: number, list: { id: string, investCoin: string, exercisedCoin: string, subscriptionAmount: string, strikePrice: string, duration: number, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | - PENDING: Products are purchasing, will give results later; - PURCHASE_SUCCESS: purchase successfully; - SETTLED: Products are finish settling; - PURCHASE_FAIL: fail to purchase; - REFUNDING: refund ongoing; - REFUND_SUCCESS: refund to spot account successfully; - SETTLING: Products are settling. If don't fill this field, will response all the position status. | |
| pageSize | No | MIN 1, MAX 100; Default 100 | |
| pageIndex | No | Page number, default is first page, start from 1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (Weight(IP): 1) and return structure via inline JSON. However, it lacks disclosure of idempotency, error handling behavior, or data freshness guarantees expected for a financial positions endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is inefficiently structured with redundant phrasing ('Get Dual Investment positions' appears twice) and crams technical metadata (USER_DATA, Weight(IP), Returns) into a single run-on sentence. The return value JSON is appended without formatting, reducing readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by including the return value structure inline, which is necessary for agent usage. However, the response structure is truncated ('...') and lacks explanation of field semantics, leaving gaps in contextual completeness for a 6-parameter financial tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed explanations for all 6 parameters including status enum values and pagination controls. The description adds no parameter-specific guidance, but with complete schema coverage, the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (Dual Investment positions), and includes the USER_DATA tag indicating it requires authentication. It distinguishes from sibling tools like get_sapi_v1_dci_product_list by specifying 'positions' rather than available products, and notes it retrieves data in batch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling tools like get_sapi_v1_dci_product_accounts or post_sapi_v1_dci_product_subscribe. No mention of prerequisites (though USER_DATA hints at auth requirements) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
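Because pageIndex starts at 1 and pageSize caps at 100, draining a DCI listing is a simple loop. A sketch against a hypothetical MCP client callable `call_tool` and a signing helper `sign`, neither of which is part of this server's definition; note that each page needs a fresh timestamp and signature:

```python
def fetch_all_positions(call_tool, sign, status: str | None = None) -> list[dict]:
    """Page through get_sapi_v1_dci_product_positions until `total` is reached."""
    rows: list[dict] = []
    page = 1
    while True:
        args = {"pageIndex": page, "pageSize": 100}
        if status is not None:
            args["status"] = status  # e.g. "SETTLED"; omit to get all statuses
        resp = call_tool("get_sapi_v1_dci_product_positions", sign(args))
        rows.extend(resp["list"])
        if len(rows) >= resp["total"] or not resp["list"]:
            break
        page += 1
    return rows
```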
get_sapi_v1_eth_staking_eth_history_rate_history (A)
Get WBETH Rate History (USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { annualPercentageRate: string, exchangeRate: string, time: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight (150), the authentication requirement (implied by the USER_DATA tag and required signature parameter), and the return structure with field types (annualPercentageRate, exchangeRate, etc.). It does not explicitly state 'read-only' or rule out side effects, though 'Get' implies safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the action and resource. Every clause serves a purpose: time constraints, default behaviors, rate limits, and return format. Minor formatting awkwardness with '— -' bullet transitions, but information density is high with no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 7-parameter endpoint. Compensates for missing output schema by documenting the return structure inline (rows array with specific fields). Includes rate limiting and time window constraints necessary for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description adds crucial semantic constraints: the 3-month limit between timestamps and the detailed default behaviors (30-day windows) when parameters are omitted. This contextual logic is not present in the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get WBETH Rate History (USER_DATA)', providing a specific verb and resource. It clearly distinguishes from siblings like redemption_history, rewards_history, and staking_history by specifying 'Rate History' and the WBETH asset type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit parameter usage rules: the 3-month max window constraint and four different scenarios for startTime/endTime combinations (both missing, only start, only end). Lacks explicit comparison to sibling tools (e.g., 'use this instead of wbeth_rewards_history for rate data'), but the parameter guidance is thorough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
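All five ETH-staking history tools in this section share the same four-scenario time logic, so a client can normalize the window once before calling any of them. A sketch that approximates the documented '3 months' as 90 days (an assumption; the API may count calendar months):

```python
import time

THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000
THREE_MONTHS_MS = 90 * 24 * 60 * 60 * 1000  # assumption: 3 months ~= 90 days

def resolve_window(start: int | None, end: int | None) -> tuple[int, int]:
    """Mirror the four documented startTime/endTime scenarios."""
    now = int(time.time() * 1000)
    if start is None and end is None:
        return now - THIRTY_DAYS_MS, now          # last 30 days
    if start is not None and end is None:
        return start, start + THIRTY_DAYS_MS      # next 30 days from start
    if start is None and end is not None:
        return end - THIRTY_DAYS_MS, end          # 30 days before end
    if end - start > THREE_MONTHS_MS:
        raise ValueError("window exceeds the documented 3-month maximum")
    return start, end
```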
get_sapi_v1_eth_staking_eth_history_redemption_history (A)
Get ETH redemption history (USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { time: number, arrivalTime: number, asset: string, amount: string, status: string, distributeAsset: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(IP): 150'), time constraints, and crucially provides the complete return structure since no output schema exists. It could improve by mentioning error cases or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the purpose first, but the formatting (using '—' followed by '-' bullets) is cramped and could be clearer. Every sentence earns its place, though the structure could be better organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of output schema and annotations, the description adequately compensates by documenting return fields, rate limits, and temporal constraints. For a 7-parameter history query tool, it provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage documenting individual fields, the description adds significant value by explaining parameter interactions—specifically the four distinct behaviors depending on startTime/endTime presence/absence and their 30-day default windows, which is not inferable from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('ETH redemption history'), clearly distinguishing it from siblings like get_sapi_v1_eth_staking_eth_history_staking_history or rewards_history by specifying the exact history type (redemption). The '(USER_DATA)' tag appropriately signals authentication requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides detailed implicit usage guidance through time-window logic (3-month max, default 30-day behavior, parameter interaction rules), but lacks explicit guidance on when to choose this tool over alternative ETH history endpoints like staking_history or wbeth_rewards_history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_eth_history_rewards_history (A)
Get BETH rewards distribution history(USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { time: number, asset: string, holding: string, amount: string, annualPercentageRate: string, status: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit cost ('Weight(IP): 150'), indicates authentication requirements via '(USER_DATA)', and describes the complete return structure including nested fields (rows array with time, asset, holding, etc.) since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the purpose, but uses inline dashes instead of clear formatting for the four time-range rules. The return value specification is verbose but necessary given the lack of output schema. All content is relevant but could be structured more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by documenting the return structure (rows array schema and total count), rate limiting, and time constraints. It covers all critical aspects needed for an agent to invoke the tool correctly, though explicit authentication requirements could be clearer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage providing baseline documentation, the description adds crucial semantic context about parameter interactions: the 3-month maximum window constraint and the four different time-range behaviors based on startTime/endTime presence. This business logic is essential for correct usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get BETH rewards distribution history' with the USER_DATA scope tag, specifying the exact resource. It distinguishes from siblings like get_sapi_v1_eth_staking_eth_history_wbeth_rewards_history and get_sapi_v1_eth_staking_eth_history_staking_history by specifically targeting BETH rewards rather than WBETH rewards or staking transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides detailed logic for time parameter usage (3-month max window, defaults when params omitted), which guides correct invocation. However, it lacks explicit guidance on when to choose this tool over sibling history tools (e.g., redemption history vs rewards history) or prerequisites like required authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_eth_history_staking_history (A)
Get ETH staking history (USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { time: number, asset: string, amount: string, status: string, distributeAmount: string, conversionRatio: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 150') and provides an inline JSON structure of the return payload including field types (time, asset, amount, status, etc.). It implies authentication via 'USER_DATA' but does not explicitly state the read-only nature of the operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly structured. It uses dash-prefixed clauses without clear separation, and crams the return type definition into a single inline JSON block at the end. The key constraints are present but require parsing through a run-on sentence structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no structured output schema and no annotations, the description compensates by inline-documenting the return structure (rows array with field types and total count) and documenting rate limits. For a 7-parameter endpoint with complex time constraints, it covers the essential behavioral and return value information adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds crucial semantic constraints not present in the schema: the 3-month maximum range between startTime and endTime, and the four specific behaviors when time parameters are omitted or partially provided. This significantly aids correct parameter configuration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear action ('Get') and resource ('ETH staking history') with authentication scope ('USER_DATA'). However, it does not explicitly differentiate from similar sibling endpoints like 'rewards_history' or 'redemption_history', leaving the specific nature of 'staking history' (deposit events vs. reward distributions) somewhat implicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent parameter-specific guidance for time range logic (3-month limits, default 30-day windows) and pagination. However, it lacks explicit guidance on when to select this tool over the five other ETH staking history siblings (rate, redemption, rewards, etc.), forcing the agent to infer from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_eth_history_wbeth_rewards_history (A)
Get WBETH rewards history(USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { estRewardsInETH: string, rows: { time: number, amountInETH: string, holding: string, holdingInETH: string, annualPercentageRate: string }[], total: number }
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
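Since the schema documents `current` (1-based page) and `size` (default 10, max 100) but the description never spells out the pagination loop, a hedged sketch may help. `call_tool` is a hypothetical stand-in for however an MCP client invokes this tool; the `rows`/`total` keys follow the return shape quoted in the description.

```python
# Sketch: draining a paginated history endpoint. `call_tool` is a
# hypothetical MCP invocation helper; `rows` and `total` mirror the
# return structure quoted in the tool description.
def fetch_all_rows(call_tool, **params):
    rows, page = [], 1
    while True:
        resp = call_tool(current=page, size=100, **params)  # size max is 100
        rows.extend(resp["rows"])
        if not resp["rows"] or len(rows) >= resp["total"]:
            return rows
        page += 1
```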
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Includes rate limit weight (150) and detailed return structure, but fails to explicitly declare the read-only/safe nature of the operation or error conditions. The USER_DATA tag hints at authentication needs but isn't explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the purpose first, but the formatting is dense with dash-separated bullet points in a single paragraph and a large inline JSON blob for returns. All content is relevant but could be structured more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong completeness given no formal output schema exists. The description embeds the full return structure (estRewardsInETH, rows array fields, total) and pagination constraints. Would benefit from explicit mention of authentication requirements implied by 'signature' parameter and USER_DATA tag.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with basic descriptions. The description adds significant value by explaining the interaction logic between startTime/endTime (3-month constraint, default 30-day behaviors) that raw schema descriptions don't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get WBETH rewards history' with specific resource identification. The 'WBETH' specification distinguishes it from the sibling 'get_sapi_v1_eth_staking_eth_history_rewards_history' (ETH rewards), though it could explicitly clarify that WBETH refers to Wrapped Beacon ETH.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides excellent detailed logic for time parameter usage (3-month max, defaults when parameters omitted). However, lacks explicit guidance on when to choose this tool over the sibling ETH rewards history endpoint or prerequisites like authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_eth_quota (A)
Get current ETH staking quota (USER_DATA) — Weight(IP): 150 Returns: { leftStakingPersonalQuota: string, leftRedemptionPersonalQuota: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
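Every USER_DATA tool in this listing requires the same `timestamp`/`signature` pair without explaining how to produce it. As a hedged sketch: Binance signed endpoints conventionally expect an HMAC-SHA256 of the urlencoded query string, keyed by the API secret; treat the exact parameter handling below as an assumption to verify against the official docs.

```python
# Sketch: building the `timestamp` and `signature` values expected by
# signed (USER_DATA) endpoints. Assumes the conventional Binance scheme:
# HMAC-SHA256 over the urlencoded query string, keyed by the API secret.
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, api_secret: str) -> dict:
    signed = {**params, "timestamp": int(time.time() * 1000)}
    query = urlencode(signed)  # signature is computed before it is added
    signed["signature"] = hmac.new(
        api_secret.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return signed

# e.g. sign_params({"recvWindow": 5000}, "my-api-secret")
```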
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant weight by disclosing the rate limit 'Weight(IP): 150' and the exact return structure with field names and types. This provides crucial behavioral context about API cost and response format that would otherwise be unknown.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action and resource, followed by metadata (USER_DATA, Weight) and return structure. Every element serves a purpose with no redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description fully compensates by explicitly documenting the return object structure. Combined with the rate limit weight and clear purpose, it provides complete information needed to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow all documented), establishing a baseline of 3. The tool description does not add parameter semantics beyond the schema, but none are needed given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'current ETH staking quota', and the data classification '(USER_DATA)'. It specifically distinguishes from sibling history endpoints by emphasizing 'current' quota rather than historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the return structure (leftStakingPersonalQuota, leftRedemptionPersonalQuota), indicating this checks remaining limits. However, it lacks explicit guidance on when to use this versus the history endpoints or staking actions, or prerequisites like requiring authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_wbeth_history_unwrap_history (A)
Get WBETH unwrap history (USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { time: number, fromAsset: string, fromAmount: string, toAsset: string, toAmount: string, exchangeRate: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses rate limiting ('Weight(IP): 150'), return structure (rows array with field types, total count), and time window constraints. Missing explicit auth requirement statement, though implied by USER_DATA tag and signature parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but well-structured with dash-separated rules. Inline return schema is necessary given lack of output schema field. Every clause provides value (purpose, constraints, rate limit, return type). Slightly verbose but justified by completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for a history query tool: includes rate limits, return structure (compensating for missing output schema), pagination logic via time windows, and temporal constraints. Auth requirements inferable from signature parameter and USER_DATA tag.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage, establishing baseline 3. Description adds crucial business logic for startTime/endTime (3-month limit, default behaviors) beyond schema types. Does not redundantly describe size/current/signature which are well-documented in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Get' and resource 'WBETH unwrap history', with '(USER_DATA)' clarifying scope. Clearly distinguishes from sibling 'wrap_history' and other ETH staking history endpoints via the 'unwrap' specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal query rules (3-month max window, default 30-day behaviors when parameters are omitted) that govern how to query effectively. Lacks explicit 'use this instead of X' sibling differentiation, but the detailed constraints provide strong usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_eth_staking_wbeth_history_wrap_history (A)
Get WBETH wrap history (USER_DATA) — - The time between startTime and endTime cannot be longer than 3 months. - If startTime and endTime are both not sent, then the last 30 days' data will be returned. - If startTime is sent but endTime is not sent, the next 30 days' data beginning from startTime will be returned. - If endTime is sent but startTime is not sent, the 30 days' data before endTime will be returned. Weight(IP): 150 Returns: { rows: { time: number, fromAsset: string, fromAmount: string, toAsset: string, toAmount: string, exchangeRate: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden well by disclosing 'Weight(IP): 150' for rate limiting, 'USER_DATA' indicating authenticated access, and inline documentation of the return structure including rows array and total count. It misses explicit read-only/safety declarations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement upfront, followed by bullet-pointed temporal constraints, and ending with return type documentation. The inline JSON return format is dense but necessary given the lack of structured output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a history query tool with 7 parameters, the description adequately covers time constraints, pagination defaults, rate limit costs, and return structure via inline documentation. It compensates well for the absence of a structured output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds significant value by detailing the four specific temporal query behaviors and default windows that govern startTime/endTime interactions, which schema field descriptions alone do not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get WBETH wrap history' with the USER_DATA classification, precisely defining the resource accessed. It effectively distinguishes from siblings like 'unwrap_history' and other ETH staking history variants through the specific 'wrap' terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides detailed temporal logic for startTime/endTime parameters (3-month max window, 30-day defaults), but does not explicitly guide when to use this tool versus sibling alternatives like unwrap_history or other staking history endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_fiat_orders (A)
Fiat Deposit/Withdraw History (USER_DATA) — - If beginTime and endTime are not sent, the recent 30-day data will be returned. Weight(UID): 90000 Returns: { code: string, message: string, data: { orderNo: string, fiatCurrency: string, indicatedAmount: string, amount: string, totalFee: string, method: string, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| rows | No | Default 100, max 500 | |
| endTime | No | UTC timestamp in ms | |
| beginTime | No | query parameter: beginTime (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transactionType | Yes | * `0` - deposit * `1` - withdraw |
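Because `transactionType` is the one required business parameter and its codes are easy to transpose, a small hedged sketch follows. `call_tool` is a hypothetical MCP invocation helper; the `0`/`1` codes and the 30-day default come from the schema and description, and the `data` key follows the quoted return shape.

```python
# Sketch: fetching fiat deposit history. The transactionType codes
# (`0` deposit, `1` withdraw) mirror the schema; `call_tool` is a
# hypothetical MCP invocation helper.
DEPOSIT, WITHDRAW = "0", "1"

def fiat_deposits(call_tool, begin_ms=None, end_ms=None):
    params = {"transactionType": DEPOSIT, "rows": 500}  # rows max is 500
    if begin_ms is not None:
        params["beginTime"] = begin_ms
    if end_ms is not None:
        params["endTime"] = end_ms
    # Omitting both time bounds returns the most recent 30 days.
    return call_tool(**params)["data"]
```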
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant burden. It successfully discloses the rate limit 'Weight(UID): 90000', the authentication requirement '(USER_DATA)', and manually documents the return structure including specific fields (orderNo, fiatCurrency, indicatedAmount, etc.). It does not, however, explicitly confirm read-only safety or detail error handling scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the operation type, but suffers from formatting issues (awkward em-dash and hyphen combination '— -') and packs multiple concepts (purpose, parameters, weight, returns) into a dense block. While no words are wasted, the structural clarity could be improved with better separation of concerns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description compensates effectively by manually specifying the return object structure with field details. It covers authentication type, rate limiting, pagination defaults (via date parameter explanation), and response format, providing sufficient context for an agent to invoke the tool despite missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline where the schema documents parameters adequately. The description adds value by explaining the default time range behavior (30 days) when beginTime/endTime are omitted, but does not elaborate on parameter relationships or provide additional semantic context beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as retrieving 'Fiat Deposit/Withdraw History' with authentication level '(USER_DATA)', specifying the exact resource (fiat orders) and operation (history retrieval). However, it does not explicitly distinguish from the sibling 'get_sapi_v1_fiat_payments' endpoint, leaving potential ambiguity about which fiat history tool to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific behavioral guidance regarding the date parameters: 'If beginTime and endTime are not sent, the recent 30-day data will be returned.' This helps agents understand default behavior. However, it lacks explicit guidance on when to choose this tool over alternatives like fiat_payments or capital deposit/withdraw history endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_fiat_payments (A)
Fiat Payments History (USER_DATA) — - If beginTime and endTime are not sent, the recent 30-day data will be returned. Weight(IP): 1 Returns: { code: string, message: string, data: { orderNo: string, sourceAmount: string, fiatCurrency: string, obtainAmount: string, cryptoCurrency: string, totalFee: string, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| rows | No | Default 100, max 500 | |
| endTime | No | UTC timestamp in ms | |
| beginTime | No | query parameter: beginTime (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transactionType | Yes | * `0` - deposit * `1` - withdraw |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It includes rate limiting information ('Weight(IP): 1'), default time-range behavior, and provides the complete return structure inline since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but suffers from formatting issues ('— -' construction) and dense packing of the return schema. While no sentences are wasted, the structural presentation could be clearer for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters with full schema coverage and no annotations or output schema, the description adequately compensates by documenting the rate limit, default temporal behavior, and return object structure. This provides sufficient context for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by explaining the interaction between beginTime and endTime parameters (default 30-day behavior when omitted), but does not elaborate on other parameters like transactionType semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as retrieving 'Fiat Payments History' with USER_DATA scope, using specific verbs and resource names. However, it does not explicitly differentiate from the sibling tool 'get_sapi_v1_fiat_orders', which could cause confusion between payments and orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides temporal usage guidance ('If beginTime and endTime are not sent, the recent 30-day data will be returned'), but lacks explicit guidance on when to use this tool versus alternatives like get_sapi_v1_fiat_orders or prerequisites for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_futures_hist_data_link (A)
Get Future TickLevel Orderbook Historical Data Download Link (USER_DATA) — Weight(IP): 1 Returns: { data: { day: string, url: string }[] }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | query parameter: symbol (string) | |
| endTime | No | UTC timestamp in ms | |
| dataType | Yes | query parameter: dataType ("T_DEPTH" | "S_DEPTH") | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
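The PREREQUISITE line implies a two-step workflow worth making concrete. Both `get_exchange_info` and `get_hist_data_link` below are hypothetical wrappers around the corresponding MCP tools, and the `symbols` key in the exchange-info response is an assumption; only the `dataType` values and the `{ day, url }` rows come from this listing.

```python
# Sketch of the documented prerequisite workflow: discover valid symbols,
# then request tick-level order book download links.
def download_links(get_exchange_info, get_hist_data_link, symbol, start_ms, end_ms):
    listed = {s["symbol"] for s in get_exchange_info()["symbols"]}  # assumed shape
    if symbol not in listed:
        raise ValueError(f"{symbol} is not a listed trading pair")
    resp = get_hist_data_link(
        symbol=symbol,
        dataType="T_DEPTH",  # or "S_DEPTH", per the schema enum
        startTime=start_ms,
        endTime=end_ms,
    )
    return [(row["day"], row["url"]) for row in resp["data"]]
```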
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses rate limiting ('Weight(IP): 1'), authentication context ('USER_DATA'), and return structure ('{ data: { day: string, url: string }[] }'), but lacks details on link expiration, idempotency, or error scenarios that would be necessary for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded, delivering the purpose, return type, weight, and prerequisites in two dense sentences without redundant explanation of parameters already documented in the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by documenting the return structure. It covers prerequisites and rate limiting adequately for a complex authenticated endpoint, though it could improve by mentioning link expiration or common error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage including enum values for dataType ('T_DEPTH' | 'S_DEPTH') and required flags, the schema sufficiently documents parameters. The description adds no supplementary parameter semantics, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves 'Future TickLevel Orderbook Historical Data Download Link' with specific asset class (Futures) and granularity (TickLevel). The '(USER_DATA)' tag clarifies authentication requirements, distinguishing it from public market data siblings like get_api_v3_depth.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section explicitly instructs users to 'First call exchange info or symbol listing to discover valid trading pairs' before using this tool, providing clear workflow guidance and naming specific prerequisite tools to invoke first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_futures_transfer (A)
Get Future Account Transaction History List (USER_DATA) — Weight(IP): 10 Returns: { rows: { asset: string, tranId: number, amount: string, type: string, timestamp: number, status: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | Yes | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | Yes | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant weight by disclosing the rate limit 'Weight(IP): 10' and explicitly documenting the return structure (rows array with asset, tranId, amount, type, timestamp, status fields) to compensate for the missing output schema. Lacks explicit mention of required authentication or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense and front-loaded single sentence containing the purpose, rate limiting weight, and complete return type definition. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately documents the return structure inline. All 8 parameters are documented in the schema. Could be improved by noting pagination behavior (implied by 'current' and 'size' parameters) or data retention limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'Default:10 Max:100' for size, 'UTC timestamp in ms' for time fields), so the description does not need to repeat parameter details. However, the description adds no semantic context beyond the schema, focusing instead on the return structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Future Account Transaction History List' with the specific scope '(USER_DATA)', distinguishing it from the sibling POST endpoint (post_sapi_v1_futures_transfer) by specifying this retrieves history rather than executing transfers. The verb 'Get' and resource 'Transaction History List' are explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternative history endpoints (e.g., get_sapi_v1_asset_transfer, get_sapi_v1_margin_transfer) or versus the POST endpoint for executing transfers. The 'USER_DATA' tag implies authentication requirements but doesn't state them clearly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_giftcard_buy_code_token_limit (B)
Fetch Token Limit (USER_DATA) — This API is to help you verify which tokens are available for you to purchase fixed-value gift cards as mentioned in section 2 and it's limitation. Weight(IP): 1 Returns: { code: string, message: string, data: { coin: string, fromMin: string, fromMax: string }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| baseToken | Yes | The token you want to pay, example BUSD | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
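Since the return shape is only documented inline, a short hedged sketch shows the intended pre-purchase check. `call_tool` is a hypothetical MCP invocation helper; the `fromMin`/`fromMax` fields come from the quoted return structure, and treating them as decimal strings is an assumption.

```python
# Sketch: pre-validating a gift card purchase amount against the token
# limits this endpoint returns. Field names follow the quoted return
# structure; `call_tool` is a hypothetical invocation helper.
def amount_within_limit(call_tool, base_token: str, amount: float) -> bool:
    data = call_tool(baseToken=base_token)["data"]
    return float(data["fromMin"]) <= amount <= float(data["fromMax"])
```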
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying 'Weight(IP): 1' for rate limiting and detailing the return structure `{ code: string, message: string, data: { coin: string, fromMin: string, fromMax: string }, ... }`. However, it does not explicitly state that this is a safe, read-only operation or disclose error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is efficient, but it includes a wasteful external reference ('as mentioned in section 2') and a grammatical error ('it's limitation' instead of 'its limitation'). While the return value specification is useful, the 'section 2' reference adds no value in this context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter query tool without structured output schema, the description adequately compensates by detailing the return format including the nested data object with `coin`, `fromMin`, and `fromMax` fields. It could be improved by explicitly mentioning authentication requirements or rate limit implications beyond the 'Weight(IP)' notation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., baseToken is described as 'The token you want to pay, example BUSD'), so the description does not need to add parameter semantics. It meets the baseline expectation for a well-documented schema without adding redundant or clarifying information about parameter interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Fetch[es] Token Limit' to 'verify which tokens are available for you to purchase fixed-value gift cards,' specifying the exact resource and action. However, it references 'section 2' without context and fails to explicitly distinguish this limit-checking tool from the sibling purchase tool `post_sapi_v1_giftcard_buy_code`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives, nor does it state prerequisites like required authentication (beyond the cryptic 'USER_DATA' label) or that it should be called before purchasing. It only implies usage through the phrase 'verify which tokens are available'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_giftcard_cryptography_rsa_public_key (A)
Fetch RSA Public Key (USER_DATA) — This API is for fetching the RSA Public Key. This RSA Public key will be used to encrypt the card code. Please note that the RSA Public key fetched is valid only for the current day. Weight(IP): 1 Returns: { code: string, message: string, data: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
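The current-day validity rule suggests per-day caching, and the stated purpose (encrypting the card code) suggests a concrete follow-up step. The sketch below assumes the returned `data` string is a PEM-encoded key and that PKCS#1 v1.5 padding is expected; both assumptions should be verified against the official docs. `call_tool` is a hypothetical MCP invocation helper.

```python
# Sketch: caching the daily RSA key and encrypting a card code with it.
# Assumed: `data` is a PEM-encoded public key and PKCS#1 v1.5 padding
# applies; verify both against the official documentation.
import base64
from datetime import date
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

_cache = {}  # the fetched key is only valid for the current day

def encrypt_card_code(call_tool, card_code: str) -> str:
    today = date.today().isoformat()
    if today not in _cache:
        _cache.clear()
        _cache[today] = call_tool()["data"]
    pub = serialization.load_pem_public_key(_cache[today].encode())
    cipher = pub.encrypt(card_code.encode(), padding.PKCS1v15())
    return base64.b64encode(cipher).decode()
```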
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully adds critical behavioral context not inferable from the schema: the 1-day expiration constraint, the IP weight rate limit (1), and the return object structure (code, message, data). It does not contradict any implied safety properties.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the action and classification (USER_DATA). It contains minor redundancy ('Fetch... This API is for fetching'), but efficiently packs the expiration rule, rate limit, and return format into a single sentence each.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description adequately compensates by documenting the return structure and the daily validity limitation. For a 3-parameter cryptographic utility, it provides sufficient context for correct invocation and result handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 3 parameters (signature, timestamp, recvWindow). The description does not redundantly explain these parameters, which is appropriate given the schema completeness. It meets the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Fetch), resource (RSA Public Key), and its exact purpose (encrypt the card code). While it effectively distinguishes this cryptography tool from sibling giftcard tools like 'verify' or 'redeem' by specifying the encryption use case, it does not explicitly name sibling alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important usage context that the fetched key is only valid for the current day, which governs caching behavior. It also clarifies the encryption purpose. However, it lacks explicit workflow guidance (e.g., 'use this before calling create_code') or conditions when not to use this endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_giftcard_verify (A)
Verify a Binance Code (USER_DATA) — This API is for verifying whether the Binance Code is valid or not by entering Binance Code or reference number. Please note that if you enter the wrong binance code 5 times within an hour, you will no longer be able to verify any binance code for that hour. Weight(IP): 1 Returns: { code: string, message: string, data: { valid: boolean, token: string, amount: string }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| referenceNo | Yes | reference number |
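Given the documented lockout (five wrong codes within an hour), a cautious client might track its own failures before calling the endpoint. The guard below is purely illustrative; the server enforces the real limit, and `call_tool` plus the `data.valid` key follow the quoted return shape.

```python
# Sketch: a local guard mirroring the documented 5-failures-per-hour
# lockout. Illustrative only; the server enforces the actual limit.
import time

_failures: list[float] = []

def safe_verify(call_tool, reference_no: str):
    now = time.time()
    _failures[:] = [t for t in _failures if now - t < 3600]
    if len(_failures) >= 5:
        raise RuntimeError("local guard: 5 failed verifications in the last hour")
    data = call_tool(referenceNo=reference_no)["data"]
    if not data["valid"]:
        _failures.append(now)
    return data
```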
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden and succeeds. It documents rate limiting (Weight(IP): 1), authentication requirements (USER_DATA tag, signature parameter), and provides complete return value structure including the data.valid boolean and nested fields despite no formal output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense but well-structured with clear segmentation: purpose statement, rate limit warning, weight documentation, and return schema. The return value JSON is slightly verbose but necessary given the absence of a formal output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a security-sensitive verification endpoint. Covers authentication (USER_DATA, signature requirement), rate limiting consequences, and response format. The only minor gap is not explicitly stating this is a read-only check vs. the destructive 'redeem' operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds valuable semantic clarification that the 'referenceNo' parameter accepts either a 'Binance Code' or 'reference number', which is not obvious from the parameter name or schema description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Verify' with the resource 'Binance Code', clearly distinguishing it from sibling operations like post_sapi_v1_giftcard_create_code, post_sapi_v1_giftcard_buy_code, and post_sapi_v1_giftcard_redeem_code. It explicitly states the validation purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraints: entering wrong codes 5 times within an hour triggers a 1-hour verification lockout. While it doesn't explicitly contrast with 'redeem' use cases, the rate limit warning effectively guides the agent on careful usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_all_asset (B)
Query all source asset and target asset (USER_DATA) — Query all source assets and target assets Weight(IP): 1 Returns: { targetAssets: string[], sourceAssets: string[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
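As a hedged illustration of how the combined response might be used, the sketch below checks a source/target pair before configuring a plan. `call_tool` is a hypothetical MCP invocation helper; the `sourceAssets`/`targetAssets` keys come from the quoted return shape.

```python
# Sketch: validating an auto-invest pair against the combined asset lists.
def pair_supported(call_tool, source: str, target: str) -> bool:
    resp = call_tool()
    return source in resp["sourceAssets"] and target in resp["targetAssets"]
```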
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses USER_DATA (indicating API key authentication required), Weight(IP): 1 (rate limiting), and the return structure. However, it omits safety properties (idempotency, read-only nature) and error conditions that would help an agent understand runtime behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains redundant phrasing ('Query all source asset...' appears twice with minor variations). While it efficiently packs rate limit and return value information, the repetition and dense technical formatting (Weight(IP), Returns JSON) reduce readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter query tool with 100% schema coverage and no output schema, the description adequately compensates by providing the return structure inline. However, it could better explain the relationship between source/target assets in the auto-invest context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 3 parameters (timestamp, signature, recvWindow). The description adds no additional parameter semantics beyond the schema, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it queries both source and target assets for auto-invest (USER_DATA), distinguishing it from sibling tools like 'get_sapi_v1_lending_auto_invest_source_asset_list' and 'get_sapi_v1_lending_auto_invest_target_asset_list' which likely return only one type. However, it lacks context about what 'auto-invest' means in this domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this combined endpoint versus the separate source_asset_list or target_asset_list endpoints. While the scope is implied by 'all', the agent cannot determine if this is preferred over calling the individual endpoints separately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_history_list (C)
[DISCOVERY] Query subscription transaction history — Query subscription transaction history of a plan Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| planId | No | query parameter: planId (number) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| planType | No | query parameter: planType ("SINGLE" | "PORTFOLIO" | "INDEX" | "ALL") | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| targetAsset | No | query parameter: targetAsset (number) |
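Because the description leaves the parameter semantics to the schema, a hedged sketch of a typical query follows. `call_tool` is a hypothetical MCP invocation helper; the `planType` enum values are taken from the schema, and the response shape is unknown since the tool publishes no output schema.

```python
# Sketch: querying auto-invest subscription history filtered by plan type.
PLAN_TYPES = ("SINGLE", "PORTFOLIO", "INDEX", "ALL")  # from the schema enum

def subscription_history(call_tool, plan_type="ALL", plan_id=None):
    if plan_type not in PLAN_TYPES:
        raise ValueError(f"planType must be one of {PLAN_TYPES}")
    params = {"planType": plan_type, "size": 100, "current": 1}
    if plan_id is not None:
        params["planId"] = plan_id
    return call_tool(**params)  # response shape undocumented by the tool
```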
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It only discloses rate limiting ('Weight(IP): 1') but fails to mention pagination behavior (despite 'current' and 'size' parameters), response format, or that this requires authentication (implied only by 'signature' parameter).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one sentence), but contains metadata artifacts ('[DISCOVERY]' tag, 'Weight(IP): 1') that appear to be API documentation leakage rather than intentional description content. Structure is fragmented with em-dash separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters and no output schema, the description is insufficient. It lacks explanation of what 'subscription' means in this context (purchasing cycles?), how pagination interacts with time filters, or what data is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, meeting the baseline. However, the description adds no parameter context beyond the schema. Schema descriptions like 'query parameter: planId (number)' are tautological, but technically present.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as 'subscription transaction history' for auto-invest plans, using specific verbs ('Query'). It distinguishes from sibling tools like get_sapi_v1_lending_auto_invest_redeem_history and get_sapi_v1_lending_auto_invest_rebalance_history by specifying 'subscription' transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus other history endpoints or plan management tools. No mention of prerequisites like needing an active planId or time range constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_index_info (C)
Query Index Details(USER_DATA) — Query index details Weight(IP): 1 Returns: { indexId: number, indexName: string, status: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| indexId | Yes | query parameter: indexId (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully indicates this is a USER_DATA (authenticated) endpoint and provides the rate limit Weight(IP): 1. It also hints at the return structure with example fields (indexId, indexName, status), which is valuable given no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense line that efficiently packs the operation type, authentication class, rate limit, and return structure. While it contains minor repetition ('Query index details' appears twice), there is no significant waste and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by providing return field examples and rate limit information. However, it misses the opportunity to explain what constitutes an 'index' in the auto-invest context or mention error handling patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description does not add semantic meaning to the parameters (e.g., what the indexId represents or recvWindow behavior), but it also doesn't need to compensate for schema gaps. It focuses on return values instead of input semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Query Index Details(USER_DATA)' which identifies the verb and resource, but redundantly repeats 'Query index details' immediately after, bordering on tautology. It doesn't clarify that these are auto-invest specific indices, though this is implicit in the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the numerous sibling auto-invest tools (e.g., get_sapi_v1_lending_auto_invest_plan_list, get_sapi_v1_lending_auto_invest_all_asset). No prerequisites or alternatives are mentioned despite the crowded tool namespace.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_index_user_summary (B)
Query Index Linked Plan Position Details(USER_DATA) — Details on users Index-Linked plan position details Weight(IP): 1 Returns: { indexId: number, totalInvestedInUSD: string, currentInvestedInUSD: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| indexId | Yes | query parameter: indexId (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 1') and provides a helpful preview of the return structure since no output schema exists. However, it lacks explicit mention of authentication requirements or safety characteristics (read-only nature).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat redundant ('Details... Details') and crams multiple elements (purpose, weight, return values) into a single dense sentence. While not excessively long, the repetition and lack of formatting reduce clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the inclusion of return field examples ('indexId', 'totalInvestedInUSD') is valuable compensation. However, for a financial data tool with authentication requirements (evidenced by 'signature' parameter and 'USER_DATA' tag), the description should explicitly confirm this requires signed API access.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description does not add additional semantic context for parameters (e.g., explaining that 'signature' requires HMAC SHA256, or what 'recvWindow' represents), but none is needed given the complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Query') and resource ('Index Linked Plan Position Details'), and distinguishes this from sibling tools like 'index_info' by emphasizing 'users' and tagging it as 'USER_DATA'. However, it is somewhat redundant in phrasing ('Details... Details on users').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this versus related endpoints like 'get_sapi_v1_lending_auto_invest_plan_list' or 'get_sapi_v1_lending_auto_invest_index_info'. There are no prerequisites or alternative suggestions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
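Several assessments above fault the descriptions for never spelling out the signing flow behind the recurring signature/timestamp/recvWindow parameters. As a point of reference, here is a minimal sketch of Binance's documented HMAC-SHA256 convention for signed (USER_DATA) endpoints; the key values are placeholders, and the REST path in the usage comment is an assumption based on the tool name.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

import requests

API_KEY = "YOUR_API_KEY"        # placeholder credential
API_SECRET = "YOUR_API_SECRET"  # placeholder credential
BASE_URL = "https://api.binance.com"

def signed_get(path: str, params: dict) -> dict:
    """GET a signed (USER_DATA) endpoint using Binance's HMAC-SHA256 scheme."""
    # Every signed endpoint requires a millisecond UTC timestamp; recvWindow
    # (optional, capped at 60000) bounds how long the request stays valid.
    query_params = {**params, "timestamp": int(time.time() * 1000), "recvWindow": 5000}
    query = urlencode(query_params)
    # The signature is the hex HMAC-SHA256 of the exact query string,
    # keyed with the API secret, appended as the final parameter.
    signature = hmac.new(API_SECRET.encode(), query.encode(), hashlib.sha256).hexdigest()
    response = requests.get(
        f"{BASE_URL}{path}?{query}&signature={signature}",
        headers={"X-MBX-APIKEY": API_KEY},
    )
    response.raise_for_status()
    return response.json()

# Usage (path assumed to mirror the tool name above):
# signed_get("/sapi/v1/lending/auto-invest/index/user-summary", {"indexId": 1})
```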
get_sapi_v1_lending_auto_invest_one_off_status (B)
Query One-Time Transaction Status (USER_DATA) — Transaction status for one-time transaction Weight(IP): 1 Returns: { transactionId: number, status: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| requestId | No | query parameter: requestId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transactionId | Yes | query parameter: transactionId (number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight (Weight(IP): 1) and return structure ({ transactionId, status }), which is valuable given the lack of output schema. However, it fails to explicitly state authentication requirements (though implied by USER_DATA and signature parameter) or describe possible status values (e.g., PENDING, SUCCESS, FAILED).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but contains redundancy ('Transaction status for one-time transaction' largely repeats the first clause). Technical metadata (USER_DATA, Weight(IP)) is interspersed with the functional description rather than being separated. The return value documentation is useful but awkwardly appended.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates by specifying the return fields (transactionId, status). However, it lacks workflow context (this is a polling endpoint for async one-time investments) and doesn't enumerate status possibilities. For a tool requiring cryptographic signatures and specific timestamps, additional behavioral context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score applies. The description does not add semantic meaning beyond the schema (e.g., it doesn't clarify the difference between requestId and transactionId, or explain that signature requires HMAC-SHA256). The schema adequately documents each parameter's type and basic purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the verb (Query) and resource (One-Time Transaction Status) for this lending auto-invest tool. The mention of 'one-time transaction' helps distinguish it from recurring plan tools (e.g., get_sapi_v1_lending_auto_invest_plan_list) in the sibling list. However, it doesn't explicitly clarify the relationship with post_sapi_v1_lending_auto_invest_one_off (the creation endpoint).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention the prerequisite of having a transactionId from a previously executed one-time investment. There is no mention of workflow (e.g., 'use this after creating a one-time investment to check completion').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
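To illustrate the missing workflow context flagged above (this is a polling endpoint for an asynchronous one-time investment), here is a hedged sketch reusing the signed_get helper from the earlier example. The pending marker is hypothetical, since the description documents only { transactionId, status } without enumerating status values, and the path is assumed from the tool name.

```python
import time

PENDING = "PENDING"  # hypothetical pending marker; statuses are not enumerated

def wait_for_one_off(transaction_id: int, attempts: int = 10, delay_s: float = 2.0) -> str:
    """Poll the one-time-transaction status endpoint until it settles.

    Returns the final status string from the documented
    { transactionId, status } payload.
    """
    for _ in range(attempts):
        # Path assumed to mirror the tool name.
        result = signed_get("/sapi/v1/lending/auto-invest/one-off/status",
                            {"transactionId": transaction_id})
        if result["status"] != PENDING:
            return result["status"]
        time.sleep(delay_s)
    raise TimeoutError(f"transaction {transaction_id} still pending after {attempts} polls")
```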
get_sapi_v1_lending_auto_invest_plan_id (B)
Query holding details of the plan — Query holding details of the plan Weight(IP): 1 Returns: { planValueInUSD: string, planValueInBTC: string, pnlInUSD: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| planId | No | query parameter: planId (number) | |
| requestId | No | query parameter: requestId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the IP rate limit weight ('Weight(IP): 1') and documents the return structure with specific fields, which is valuable behavioral context. However, it lacks information about authentication requirements, error conditions, or whether the data is real-time vs cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from immediate repetition ('Query holding details of the plan — Query holding details of the plan'). While it efficiently packs the return structure and rate limit info into one line, the tautology wastes space. The dash-separated structure is logical but poorly executed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately documents the return fields. However, it fails to address the mismatch between the endpoint name (suggesting plan_id is central) and the schema (where planId is not required), and doesn't mention what happens if planId is omitted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description does not add additional semantic context beyond the schema (e.g., it doesn't clarify that planId appears optional in the schema despite being in the endpoint name, or explain when to use requestId).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Query holding details of the plan' with specific return fields (planValueInUSD, planValueInBTC, pnlInUSD), indicating it retrieves investment plan holdings. However, it repeats the same phrase twice ('Query holding details of the plan — Query holding details of the plan'), which is redundant but doesn't obscure the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like get_sapi_v1_lending_auto_invest_plan_list (which lists plans) or get_sapi_v1_lending_auto_invest_history_list. It does not mention that planId is typically required to identify the target plan, despite the endpoint name suggesting specific plan retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_plan_list (C)
[DISCOVERY] Get list of plans — Query plan lists Weight(IP): 1 Returns: { planValueInUSD: string, planValueInBTC: string, pnlInUSD: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| planType | Yes | query parameter: planType (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses rate limiting ('Weight(IP): 1') and hints at the response structure with specific return fields. However, it lacks safety indicators (read-only status), pagination behavior, or authentication requirements beyond the signature parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is cluttered with metadata noise ('[DISCOVERY]'), contains tautological phrasing ('Get list of plans — Query plan lists'), and buries the return value documentation in an inline JSON snippet. Poor structure and the front-loading of irrelevant tags reduce clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list endpoint with 4 parameters and no output schema, the description is incomplete. It fails to specify valid values for the planType parameter (critical for filtering), omits pagination details, and provides no context on the maximum number of plans returned or time range constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all four parameters having basic descriptions in the schema (e.g., 'UTC timestamp in ms'). The description itself adds no additional semantic meaning for parameters, but since the schema is complete, it meets the baseline. It does not, however, compensate for the schema's undocumented planType enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Get list of plans') and identifies the resource (auto-invest plans via the endpoint name), but is somewhat tautological ('Get list' vs 'Query plan lists'). It mentions specific return fields (planValueInUSD, pnlInUSD) which helps, but fails to distinguish from the sibling endpoint get_sapi_v1_lending_auto_invest_plan_id which retrieves a single plan by ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this list endpoint versus the single-plan retrieval endpoint (get_sapi_v1_lending_auto_invest_plan_id) or the history endpoint (get_sapi_v1_lending_auto_invest_history_list). No prerequisites or filtering guidance mentioned despite the presence of a planType parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_rebalance_history (C)
Index Linked Plan Rebalance Details (USER_DATA) — Get the history of Index Linked Plan Redemption transactions Max 30 day difference between startTime and endTime If no startTime and endTime, default to show past 30 day records Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden but fails to clarify what data structure is returned, what constitutes a 'rebalance' event, or authentication requirements beyond the 'USER_DATA' label. The rebalance/redemption confusion further undermines behavioral clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one dense block lacking punctuation between distinct facts (e.g., 'transactions Max 30 day difference'), making it difficult to parse. Information is front-loaded but poorly structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should explain the return values and clearly distinguish from sibling tools. It fails on both counts due to the rebalance/redemption confusion and absence of response field documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage documenting individual parameters, the description adds valuable cross-parameter constraints: the 30-day maximum difference between startTime and endTime, and default behavior when time parameters are omitted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description contradicts itself by stating both 'Rebalance Details' and 'Redemption transactions' for the same tool. Since 'rebalance' and 'redemption' are distinct operations with separate endpoints (evident in sibling tools), this confusion makes it unclear which specific operation this tool retrieves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational constraints (30-day max window between startTime/endTime, defaults to past 30 days) and rate limit weight (1), but offers no guidance on when to use this versus the similar 'redeem_history' endpoint or other lending auto-invest tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
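The one concrete constraint the description does state, the 30-day maximum between startTime and endTime, is easy to violate when querying longer ranges. A sketch of splitting a range into compliant windows follows, reusing the earlier signed_get helper; the REST path is assumed from the tool name, and since the response shape is undocumented the pages are kept raw.

```python
from datetime import datetime, timedelta, timezone

MAX_WINDOW = timedelta(days=30)  # documented max difference between startTime and endTime

def compliant_windows(start: datetime, end: datetime):
    """Yield (startTime, endTime) millisecond pairs no wider than 30 days."""
    cursor = start
    while cursor < end:
        chunk_end = min(cursor + MAX_WINDOW, end)
        yield int(cursor.timestamp() * 1000), int(chunk_end.timestamp() * 1000)
        cursor = chunk_end

# Cover 90 days in three compliant calls (path assumed from the tool name):
end = datetime.now(timezone.utc)
start = end - timedelta(days=90)
pages = [
    signed_get("/sapi/v1/lending/auto-invest/rebalance/history",
               {"startTime": s, "endTime": e})
    for s, e in compliant_windows(start, end)
]  # response shape is undocumented here, so pages are kept raw
```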
get_sapi_v1_lending_auto_invest_redeem_history (B)
Index Linked Plan Redemption History (USER_DATA) — Get the history of Index Linked Plan Redemption transactions Max 30 day difference between startTime and endTime If no startTime and endTime, default to show past 30 day records Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| requestId | Yes | query parameter: requestId (number) | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and includes the rate limit 'Weight(IP): 1' and data classification '(USER_DATA)'. It discloses the 30-day time window constraint which affects query behavior. However, it fails to describe the response format, pagination behavior beyond the time window, or whether the redemption data includes pending versus completed transactions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description packs multiple constraints into a single string with minimal punctuation, making it dense and somewhat difficult to parse quickly. While it avoids unnecessary verbosity by omitting generic fluff, the lack of clear sentence breaks between 'transactions', 'Max', and 'If' hampers readability. The information is front-loaded with the resource type, but the structural flow could be improved with proper formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 9-parameter schema with no output specification, the description adequately covers the critical business logic constraints regarding time ranges that an agent must know to formulate valid queries. It appropriately signals the USER_DATA classification indicating authentication requirements. However, the absence of return value documentation or examples leaves gaps in understanding the tool's complete utility, though this is partially mitigated by the comprehensive input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline understanding of individual parameters. The description adds value by explaining the relationship between startTime and endTime parameters regarding the 30-day maximum difference and default fallback behavior. It does not, however, clarify the business purpose of the requestId parameter or provide examples of valid query combinations beyond the schema's basic field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Get' and the specific resource 'history of Index Linked Plan Redemption transactions', distinguishing it from sibling tools like get_sapi_v1_lending_auto_invest_rebalance_history by specifying 'Redemption'. The parenthetical '(USER_DATA)' signals the private nature of the data. However, it assumes familiarity with 'Index Linked Plan' without contextual explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific operational constraints including the 'Max 30 day difference between startTime and endTime' and the default behavior to 'show past 30 day records' when no time range is specified. However, it lacks explicit guidance on when to use this tool versus similar history endpoints such as get_sapi_v1_lending_auto_invest_history_list. No prerequisites or authentication flow guidance is mentioned beyond the USER_DATA label.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
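Given the schema's pagination parameters (current starting from 1, size defaulting to 10 with a max of 100), an agent would typically page until a short page arrives. A sketch under those assumptions, again reusing signed_get; the rows[] envelope is itself an assumption, since no output shape is documented for this tool.

```python
def fetch_all_redemptions(request_id: int, size: int = 100) -> list[dict]:
    """Page through redemption history via the documented current/size parameters.

    A rows[] envelope is assumed (no output shape is documented here);
    adjust the unwrapping once the real payload is known.
    """
    results: list[dict] = []
    current = 1  # pages start from 1 per the schema
    while True:
        page = signed_get("/sapi/v1/lending/auto-invest/redeem/history",
                          {"requestId": request_id, "current": current, "size": size})
        rows = page.get("rows", [])
        results.extend(rows)
        if len(rows) < size:  # a short page means no more data
            return results
        current += 1
```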
get_sapi_v1_lending_auto_invest_source_asset_list (C)
[DISCOVERY] Query source asset list (USER_DATA) — Query Source Asset to be used for investment Weight(IP): 1 Returns: { feeRate: string, sourceAssets: { sourceAsset: string, assetMinAmount: string, assetMaxAmount: string, scale: string, flexibleAmount: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| indexId | No | query parameter: indexId (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| usageType | Yes | query parameter: usageType (string) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| targetAsset | No | query parameter: targetAsset (string) | |
| flexibleAllowedToUse | No | query parameter: flexibleAllowedToUse (boolean) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of disclosing behavior. It successfully indicates authentication requirements via '(USER_DATA)' and documents the return structure (feeRate, sourceAssets array) compensating for the missing output schema. However, it lacks explicit read-only/safety declarations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from poor structure with metadata pollution ('[DISCOVERY]', 'Weight(IP): 1') and a dense JSON return type crammed into the text. While information-dense, it requires parsing through technical artifacts to extract meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 7-parameter complexity and lack of output schema, the description partially compensates by documenting return fields. However, the missing title and messy presentation leave gaps in contextual completeness for an AI agent attempting to understand the auto-invest workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already documents all parameters adequately. The description does not add parameter-specific semantics, meeting the baseline expectation for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool queries 'source asset list' for investment purposes, distinguishing it from the sibling 'target_asset_list' tool. However, the '[DISCOVERY]' prefix and 'Weight(IP): 1' metadata clutter the core message.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternative auto-invest tools (e.g., target_asset_list or plan_list). The phrase 'to be used for investment' provides only implicit context without prerequisites or workflow positioning.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_target_asset_list (C)
[DISCOVERY] Get target asset list (USER_DATA) — Weight(IP): 1 Returns: { targetAssets: string, autoInvestAssetList: { targetAsset: string, roiAndDimensionTypeList: { simulateRoi: string, dimensionValue: string, dimensionUnit: string }[] }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| targetAsset | No | query parameter: targetAsset (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden and partially succeeds: it discloses Weight(IP): 1 for rate limiting, marks it as USER_DATA (authenticated), and provides an inline sketch of the return structure. However, it omits pagination behavior details, error conditions, and whether the endpoint is idempotent or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense block cramming metadata tags [DISCOVERY], parentheticals (USER_DATA), weight information, and a JSON-like return structure into one line. While compact, the poor formatting and lack of sentence structure significantly hinder readability and parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description attempts to compensate for the lack of output schema by inline documentation of the nested return structure (autoInvestAssetList, roiAndDimensionTypeList, etc.), which is valuable. However, it lacks explanation of pagination mechanics despite having pagination parameters, and doesn't clarify the relationship between targetAssets and autoInvestAssetList fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (size, current, targetAsset, etc.), the schema sufficiently documents parameters. The description adds no parameter semantics beyond the schema, meeting the baseline expectation but providing no additional context about valid asset formats or pagination strategies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the core action 'Get target asset list' and identifies it as USER_DATA, but fails to distinguish from numerous siblings like get_sapi_v1_lending_auto_invest_source_asset_list or get_sapi_v1_lending_auto_invest_all_asset. It restates the resource but doesn't clarify the specific scope or business context of 'target assets' within auto-invest workflows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, prerequisites for calling it, or workflow context. The '[DISCOVERY]' tag hints at usage but lacks explicit explanation of what discovery means in this context or when this should be invoked.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_auto_invest_target_asset_roi_list (C)
[DISCOVERY] Get target asset ROI data (USER_DATA) — ROI return list for target asset Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| hisRoiType | Yes | query parameter: hisRoiType (string) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| targetAsset | Yes | query parameter: targetAsset (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It notes '(USER_DATA)' indicating authenticated access and includes 'Weight(IP): 1' suggesting rate limiting, but fails to explain what the ROI data represents (historical vs. projected), the time horizons available via 'hisRoiType', or error behaviors when invalid target assets are requested.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action, but wastes space with implementation metadata ('[DISCOVERY]', 'Weight(IP): 1') that is irrelevant to an AI agent's decision-making. These internal API documentation artifacts should be removed or contextualized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description inadequately explains what the tool returns. It states 'ROI return list' but doesn't describe the data structure, time periods covered, or how to interpret the results, leaving significant gaps for an agent trying to fulfill user queries about investment performance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with explicit parameter names and types, establishing a baseline of 3. The description mentions 'target asset' and 'ROI' which map to the 'targetAsset' and 'hisRoiType' parameters, but adds no semantic detail about valid values for 'hisRoiType' (e.g., 'FIVE_YEAR') beyond what the schema examples already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('target asset ROI data') with specific scope ('USER_DATA'). However, it includes cryptic metadata tags like '[DISCOVERY]' and 'Weight(IP): 1' that don't aid agent comprehension, and it could better distinguish from the sibling 'target_asset_list' tool by clarifying this returns historical performance metrics rather than just asset availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_sapi_v1_lending_auto_invest_target_asset_list' or 'get_sapi_v1_lending_auto_invest_plan_list'. The '(USER_DATA)' hint implies authentication requirements but doesn't explicitly state prerequisites like needing an active auto-invest plan or valid API keys.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
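For reference, a call under the assumptions noted above might look like the following; the path is assumed to mirror the tool name, and 'FIVE_YEAR' is the only hisRoiType value the schema examples surface in this listing.

```python
# Path assumed from the tool name; "FIVE_YEAR" is the only hisRoiType value
# surfaced by the schema examples in this listing.
roi = signed_get("/sapi/v1/lending/auto-invest/target-asset/roi/list",
                 {"targetAsset": "BTC", "hisRoiType": "FIVE_YEAR"})
```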
get_sapi_v1_lending_project_list (C)
[DISCOVERY] Get Fixed/Activity Project List(USER_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| type | Yes | query parameter: type ("ACTIVITY" | "CUSTOMIZED_FIXED") | |
| asset | No | query parameter: asset (string) | |
| sortBy | No | Default `START_TIME` | |
| status | No | Default `ALL` | |
| current | No | Current querying page. Start from 1. Default:1 | |
| isSortAsc | No | default "true" | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It provides useful rate-limiting context ('Weight(IP): 1') and hints at authentication needs ('USER_DATA'), but fails to describe return format, pagination behavior, or what distinguishes 'ACTIVITY' from 'CUSTOMIZED_FIXED' projects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, but includes metadata cruft ('[DISCOVERY]', 'Weight(IP): 1') that reduces signal-to-noise ratio. The core descriptive content is front-loaded but diluted by technical tags.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters, no output schema, and no annotations, the description is insufficient. It lacks explanation of return values, error conditions, or the business logic governing 'Fixed' versus 'Activity' projects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds no parameter-specific guidance beyond what the schema already provides (e.g., it doesn't explain the relationship between 'type' and other filters or pagination strategy).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific action 'Get' and resource 'Fixed/Activity Project List', distinguishing it from sibling position-listing endpoints. However, it assumes familiarity with what 'Fixed/Activity' lending projects are without defining them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'USER_DATA' label implies authentication requirements (consistent with the signature parameter), but there is no explicit guidance on when to use this versus the position list endpoint or other lending tools, nor prerequisites for the 'type' parameter values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_lending_project_position_list (C)
[DISCOVERY] Get Fixed/Activity Project Position (USER_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) | |
| status | No | Default `ALL` | |
| projectId | No | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full disclosure burden. It indicates authentication requirements via 'USER_DATA' and rate limiting via 'Weight(IP): 1', but fails to mention pagination behavior, cache policies, error scenarios, or whether this includes historical or only active positions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded, but includes non-semantic metadata ('[DISCOVERY]', '— Weight(IP): 1') that reduces signal-to-noise ratio for an AI agent attempting to understand the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter financial data endpoint with no output schema provided, the description is inadequate. It fails to explain domain concepts (what 'Fixed' vs 'Activity' projects are), the data structure returned, or how to handle the required signature/timestamp authentication flow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline score. The description itself adds no semantic meaning beyond the schema, which minimally documents parameters using tautological phrases like 'query parameter: asset (string)' rather than explaining valid asset formats or the relationship between asset, projectId, and status filters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and identifies the resource as 'Fixed/Activity Project Position', distinguishing it from sibling 'get_sapi_v1_lending_project_list' (which lists available projects) by specifying 'Position' (user holdings). However, the '[DISCOVERY]' tag and 'Weight(IP): 1' metadata obscure the core purpose for an AI agent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_sapi_v1_lending_project_list' (available projects) or other lending position endpoints. There are no prerequisites, filtering recommendations, or workflow context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_borrow_history (A)
Get Crypto Loans Borrow History (USER_DATA) — - If startTime and endTime are not sent, the recent 90-day data will be returned. - The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { rows: { orderId: number, loanCoin: string, initialLoanAmount: string, hourlyInterestRate: string, loanTerm: string, collateralCoin: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 10, max 100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | orderId in POST /sapi/v1/loan/borrow | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It documents the return structure inline (rows array with field types), specifies rate limits, and details default temporal behavior. It indicates USER_DATA scope implying authentication requirements without explicitly stating read-only safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the action and resource, but suffers from formatting issues (em-dash immediately followed by bullet dash: '— -') that impair readability. The return structure documentation, while valuable, creates a dense block that could be better structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by documenting the return JSON structure (rows containing orderId, loanCoin, etc., plus total count). For a 10-parameter data retrieval tool with 100% input schema coverage, this provides sufficient completeness for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by specifying the 90-day default window and 180-day maximum interval for startTime/endTime parameters, semantic constraints not evident in the schema's 'UTC timestamp in ms' descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Crypto Loans Borrow History' with specific resource identification. It distinguishes from siblings like 'repay_history' or 'ongoing_orders' through the 'Borrow History' specificity, though it could explicitly clarify the difference from 'flexible' loan borrow history (v2 endpoint) to achieve a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete time-range constraints (90-day default, 180-day max interval) and rate limit data (Weight(IP): 400), which aids usage. However, it lacks explicit guidance on when to use this versus the flexible loan borrow history endpoint or VIP loan endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
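The documented { rows: [...], total: number } envelope and 180-day interval cap suggest a straightforward fetch pattern. A sketch reusing the earlier signed_get helper; note the /history path suffix is assumed from the tool name (the schema confirms only POST /sapi/v1/loan/borrow).

```python
MAX_INTERVAL_MS = 180 * 24 * 3600 * 1000  # documented 180-day cap

def borrow_history(start_ms: int, end_ms: int, limit: int = 100) -> list[dict]:
    """Fetch the full borrow history for one window, paging on the
    documented { rows: [...], total: number } envelope."""
    if end_ms - start_ms > MAX_INTERVAL_MS:
        raise ValueError("max interval between startTime and endTime is 180 days")
    rows: list[dict] = []
    current = 1
    while True:
        page = signed_get("/sapi/v1/loan/borrow/history",  # /history suffix assumed
                          {"startTime": start_ms, "endTime": end_ms,
                           "current": current, "limit": limit})
        rows.extend(page["rows"])
        if len(rows) >= page["total"]:  # all records collected
            return rows
        current += 1
```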
get_sapi_v1_loan_collateral_data (A)
Get Collateral Assets Data (USER_DATA) — Get LTV information and collateral limit of collateral assets. The collateral limit is shown in USD value. Weight(IP): 400 Returns: { rows: { collateralCoin: string, initialLTV: string, marginCallLTV: string, liquidationLTV: string, maxLimit: string, vipLevel: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and successfully discloses critical behavioral traits: the IP rate limit weight ('Weight(IP): 400'), the authentication type ('USER_DATA'), and the complete return structure including the nested object schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the action and resource. It efficiently packs the rate limit, currency denomination (USD), and return schema into a single statement with minimal waste, though the inline JSON return structure is slightly raw.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and a structured output schema, the description compensates well by inline-documenting the return type structure, rate limits, and data semantics (USD valuation), providing sufficient context for a data retrieval endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (all 5 parameters have descriptions), the baseline is met. However, the description provides no additional semantic context about parameter interactions (e.g., how 'vipLevel' defaults work) beyond what the schema already specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Collateral Assets Data' including 'LTV information and collateral limit of collateral assets' with values shown 'in USD'. However, it does not explicitly distinguish this standard loan endpoint from the sibling 'get_sapi_v1_loan_vip_collateral_data' VIP variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the '(USER_DATA)' tag implies authentication requirements, there is no explicit guidance on when to use this tool versus the VIP collateral data endpoint or other loan-related tools, nor any stated prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
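Because the return structure is fully spelled out (initialLTV, marginCallLTV, liquidationLTV, and so on), an agent can readily post-process it. A hedged sketch of one plausible consumer follows, with the position's current LTV supplied by the caller; the endpoint path in the usage comment is assumed from the tool name.

```python
def ltv_headroom(current_ltv: float, row: dict) -> dict:
    """Report how close a position's LTV sits to the margin-call and
    liquidation thresholds, given one row from the documented rows[] payload.
    The LTV fields arrive as strings per the description's return sketch."""
    margin_call = float(row["marginCallLTV"])
    liquidation = float(row["liquidationLTV"])
    return {
        "collateralCoin": row["collateralCoin"],
        "to_margin_call": margin_call - current_ltv,
        "to_liquidation": liquidation - current_ltv,
        "in_margin_call": current_ltv >= margin_call,
    }

# data = signed_get("/sapi/v1/loan/collateral/data", {})  # path assumed
# print([ltv_headroom(0.62, row) for row in data["rows"]])
```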
get_sapi_v1_loan_income (A)
Get Crypto Loans Income History (USER_DATA) — - If startTime and endTime are not sent, the recent 7-day data will be returned. - The max interval between startTime and endTime is 30 days. Weight(UID): 6000
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | All types will be returned by default. * `borrowIn` * `collateralSpent` * `repayAmount` * `collateralReturn` - Collateral return after repayment * `addCollateral` * `removeCollateral` * `collateralReturnAfterLiquidation` | |
| asset | No | query parameter: asset (string) | |
| limit | No | default 20, max 100 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully communicates rate limit weight (6000) and time window constraints, but fails to disclose what 'income' encompasses (interest, fees, collateral returns) or describe the return format/pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the essential purpose. Every sentence provides actionable information (constraints, rate limits). Minor formatting awkwardness with the em-dash and bullet points prevents a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally explain what income records are returned and their structure. With 8 parameters and complex financial data, the description is adequate but leaves a significant gap regarding return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable constraint logic not present in the schema: the 7-day default behavior and 30-day maximum interval for startTime/endTime parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('Crypto Loans Income History') that clearly distinguishes this tool from siblings like get_sapi_v1_loan_borrow_history and get_sapi_v1_loan_repay_history. The '(USER_DATA)' tag appropriately signals authentication requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific operational constraints (7-day default window, 30-day max interval, Weight: 6000) but lacks explicit guidance on when to use this versus alternative loan history endpoints like borrow_history or repay_history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
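Since the type enum is documented in the schema and the time-window rules in the description, a compliant query can be assembled defensively. A sketch reusing signed_get, with the endpoint path assumed from the tool name and the return shape left opaque (as the assessment notes, it is undocumented):

```python
import time

INCOME_TYPES = {
    "borrowIn", "collateralSpent", "repayAmount", "collateralReturn",
    "addCollateral", "removeCollateral", "collateralReturnAfterLiquidation",
}  # enumerated in the schema's type parameter description

def recent_income(income_type: str | None = None, days: int = 7):
    """Query loan income within the documented 30-day maximum window
    (the endpoint defaults to 7 days when no range is sent)."""
    if days > 30:
        raise ValueError("max interval between startTime and endTime is 30 days")
    if income_type is not None and income_type not in INCOME_TYPES:
        raise ValueError(f"unknown income type: {income_type}")
    now_ms = int(time.time() * 1000)
    params = {"startTime": now_ms - days * 24 * 3600 * 1000, "endTime": now_ms}
    if income_type:
        params["type"] = income_type
    return signed_get("/sapi/v1/loan/income", params)  # path assumed from tool name
```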
get_sapi_v1_loan_loanable_data (B)
Get Loanable Assets Data (USER_DATA) — Get interest rate and borrow limit of loanable assets. The borrow limit is shown in USD value. Weight(IP): 400 Returns: { rows: { loanCoin: string, _7dHourlyInterestRate: string, _7dDailyInterestRate: string, _14dHourlyInterestRate: string, _14dDailyInterestRate: string, _30dHourlyInterestRate: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| loanCoin | No | Coin loaned | |
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description provides valuable behavioral context missing from annotations: the rate-limit cost ('Weight(IP): 400') and a detailed return-structure schema. However, it lacks disclosure of authentication requirements, caching behavior, and error conditions, which would be expected given that no annotations are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded but densely packed into a run-on structure. Technical metadata (Weight, Returns JSON) appended to the descriptive sentence reduces readability; the entry would benefit from separating the description, behavior notes, and return schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates well for the lack of a structured output schema by embedding the full return JSON structure. It covers the core functional context (what data is returned, and in what currency) as well as rate limiting, which is adequate for a read-only data-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (loanCoin, vipLevel, etc.), establishing the baseline. The description adds no parameter-specific context beyond the schema, focusing instead on return values; with no schema gaps, no compensation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Get') and resource ('interest rate and borrow limit of loanable assets'), and includes the '(USER_DATA)' tag indicating an authenticated endpoint. However, it does not explicitly differentiate this tool from the sibling 'get_sapi_v1_loan_vip_loanable_data' or the flexible loan variants.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus similar loan-data endpoints (VIP, flexible, collateral). The description does not mention prerequisites such as authentication requirements despite the USER_DATA tag, nor when this data is needed rather than that of other loan endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_ltv_adjustment_history (A)
Get Loan LTV Adjustment History (USER_DATA) — If startTime and endTime are not sent, the recent 90-day data will be returned. The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { rows: { loanCoin: string, collateralCoin: string, direction: string, amount: string, preLTV: string, afterLTV: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 10, max 100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order ID | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant burden and delivers well: it discloses the IP weight (400), default time range behavior, max interval limits, and provides an inline JSON structure of the return value including field types (loanCoin, preLTV, afterLTV, etc.). It implies read-only access via 'Get' but could be explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, followed by constraints, rate limit, and return type. The inline JSON return schema is dense but justified by the absence of a formal output_schema field. No wasted words, though the return structure string could benefit from spacing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description compensates effectively by including rate limits, return structure, and time constraints. It could improve by mentioning pagination behavior (it has limit/current params and returns a 'total' field but doesn't explain the pagination mechanism) and explicitly stating this is read-only.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage providing baseline descriptions (e.g., 'UTC timestamp in ms'), the description adds crucial semantic constraints not in the schema: the 90-day default behavior and 180-day maximum interval between startTime and endTime, which significantly aids correct parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (Loan LTV Adjustment History) with the USER_DATA scope tag. However, it does not explicitly differentiate from the sibling `get_sapi_v2_loan_flexible_ltv_adjustment_history` or explain that this applies to standard (non-flexible) loans, leaving that distinction to the tool name alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific temporal constraints (90-day default, 180-day max interval) which imply usage patterns, but it lacks explicit guidance on when to use this tool versus the flexible loan version (v2) or other loan history endpoints like borrow/repay history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_ongoing_orders (B)
Get Loan Ongoing Orders (USER_DATA) — Weight(IP): 300 Returns: { rows: { orderId: number, loanCoin: string, totalDebt: string, residualInterest: string, collateralCoin: string, collateralAmount: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 10, max 100 | |
| current | No | Current querying page. Start from 1; default:1, max:1000 | |
| orderId | No | orderId in POST /sapi/v1/loan/borrow | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
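The rows/total return shape suggests page-based iteration via the current and limit parameters. A hedged sketch of that loop, reusing `signed_get` from the earlier sketch with a path inferred from the tool name:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
def all_ongoing_orders(page_size: int = 100) -> list[dict]:
    """Page through rows with current/limit until `total` rows are collected."""
    rows, page = [], 1
    while True:
        data = signed_get(
            "/sapi/v1/loan/ongoing/orders",
            {"current": page, "limit": page_size},
        )
        rows.extend(data["rows"])
        if not data["rows"] or len(rows) >= data["total"]:
            break
        page += 1
    return rows
```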
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the rate limit weight (300) and security classification (USER_DATA), and hints at the response structure including pagination (rows array with total count). However, it lacks explicit confirmation that this is read-only/safe or details on pagination limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense line combining purpose, weight, and return schema. While efficient with no wasted words, the cramming of JSON-like return syntax into the description reduces readability. It is appropriately front-loaded with the action but could benefit from formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the inline documentation of return fields (orderId, loanCoin, totalDebt, etc.) provides necessary context. However, the description is incomplete regarding the relationship between this and other loan order types (VIP, flexible), which is essential contextual information for correct tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (limit, current, orderId, loanCoin, etc.), the schema documents all parameters adequately. The description adds no parameter-specific semantics, but this is acceptable given the comprehensive schema coverage, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (Get) and resource (Loan Ongoing Orders), and specifies the return structure with field names like totalDebt and collateralCoin. However, it fails to distinguish from sibling endpoints like get_sapi_v1_loan_vip_ongoing_orders or get_sapi_v2_loan_flexible_ongoing_orders, which is critical given the multiple loan types available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this standard loan endpoint versus the VIP (get_sapi_v1_loan_vip_ongoing_orders) or flexible (get_sapi_v2_loan_flexible_ongoing_orders) variants. No prerequisites or filtering recommendations are mentioned beyond the implicit USER_DATA tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_repay_collateral_rate (B)
Check Collateral Repay Rate (USER_DATA) — Get the rate of collateral coin / loan coin when using collateral repay; the rate will be valid within 8 seconds. Weight(IP): 6000 Returns: { loanCoin: string, collateralCoin: string, repayAmount: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| loanCoin | Yes | Coin loaned | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| repayAmount | Yes | repay amount of loanCoin | |
| collateralCoin | Yes | Coin used as collateral |
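Because the quoted rate is only valid for 8 seconds, a caller has to re-quote if it cannot act within that window. A sketch of that timing guard, reusing `signed_get`; the path is inferred from the tool name, and since the listing truncates the return fields, the sketch only tracks quote age rather than reading a specific rate field:

```python
import time

# Reuses signed_get from the earlier sketch; path inferred from the tool name.
quote = signed_get(
    "/sapi/v1/loan/repay/collateral/rate",
    {"loanCoin": "USDT", "collateralCoin": "BTC", "repayAmount": "1000"},
)
quoted_at = time.monotonic()

# ... evaluate the quoted rate here ...

if time.monotonic() - quoted_at > 8:
    # The 8-second validity window has passed: the quote is stale and must
    # be fetched again before any repayment is submitted.
    raise RuntimeError("collateral repay rate expired; query it again")
```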
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the high rate-limit cost (Weight(IP): 6000) and the temporal validity of results (8 seconds). However, it omits authentication requirements despite the USER_DATA tag, and lacks error condition or cache behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Packs multiple technical details (purpose, validity window, weight, return structure) into a single sentence efficiently. However, the return value description is dense and truncated ('...'), slightly undermining the otherwise professional structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial calculation tool without a structured output schema, the description partially compensates by listing return fields. However, it fails to explain the business logic (what the 'rate' mathematically represents, e.g., collateral/loan ratio) or the collateral repayment workflow context, leaving gaps in an agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all 6 parameters including loanCoin, collateralCoin, and repayAmount. The description adds no additional semantic context for parameters (e.g., explaining the relationship between input repayAmount and output), meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves the exchange rate between collateral and loan coins specifically for collateral repayment scenarios. The phrase 'when using collateral repay' effectively distinguishes it from sibling tools like get_sapi_v1_loan_collateral_data or get_sapi_v1_loan_ongoing_orders. However, it could more explicitly differentiate from general loan market data endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage timing guidance through the 'valid within 8 seconds' constraint, suggesting this must be called immediately before repayment. However, it lacks explicit workflow guidance such as directing users to post_sapi_v1_loan_repay for execution, or clarifying when to use this versus checking general loan terms.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_repay_history (A)
Get Loan Repayment History (USER_DATA) — If startTime and endTime are not sent, the recent 90-day data will be returned. The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { rows: { loanCoin: string, repayAmount: string, collateralCoin: string, collateralUsed: string, collateralReturn: string, repayType: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 10, max 100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order ID | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
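Because the maximum startTime/endTime interval is 180 days, querying a longer span means splitting it into windows. A sketch under those constraints, reusing `signed_get` with a path inferred from the tool name; per-window pagination via current/limit is omitted for brevity:

```python
from datetime import datetime, timedelta, timezone

# Reuses signed_get from the earlier sketch; path inferred from the tool name.
def repay_history(start: datetime, end: datetime) -> list[dict]:
    """Split [start, end) into <=180-day windows, the endpoint's max interval.

    Windows with more than `limit` rows would be truncated here; a real
    client would also page within each window via current/limit.
    """
    rows, cursor, window = [], start, timedelta(days=180)
    while cursor < end:
        chunk_end = min(cursor + window, end)
        data = signed_get(
            "/sapi/v1/loan/repay/history",
            {
                "startTime": int(cursor.timestamp() * 1000),
                "endTime": int(chunk_end.timestamp() * 1000),
                "limit": 100,
            },
        )
        rows.extend(data["rows"])
        cursor = chunk_end
    return rows


year_of_repayments = repay_history(
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2025, 1, 1, tzinfo=timezone.utc),
)
```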
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses rate limiting (Weight: 400), default time-window behavior (90 days), max query span (180 days), and includes a complete return structure schema showing nested fields (rows containing loanCoin, repayAmount, collateralUsed, etc.).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and front-loaded: purpose first, followed by time constraints, rate limit, and return structure. No wasted words; every clause provides actionable information about behavior or constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no structured output schema field, the description provides complete return type documentation (rows array structure with field types). Combines with 100% input schema coverage and behavioral details (rate limits, time constraints) to fully specify the tool's contract.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds critical semantic context for startTime/endTime parameters beyond the schema's 'UTC timestamp' definition, specifying the 90-day default behavior and 180-day maximum interval constraint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb-resource pair 'Get Loan Repayment History' and includes the '(USER_DATA)' scope tag. It clearly distinguishes from sibling tools like get_sapi_v1_loan_borrow_history and get_sapi_v1_loan_ongoing_orders by specifying 'Repayment' history specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit time-range constraints: default 90-day behavior when startTime/endTime are omitted and maximum 180-day interval limits. Includes rate limit weight (400). Does not explicitly name alternative tools (e.g., VIP version), but clearly defines operational boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_collateral_account (A)
Check Locked Value of VIP Collateral Account (USER_DATA) — VIP loan is available for VIP users only. Weight(IP): 6000 Returns: { rows: { collateralAccountId: string, collateralCoin: string, collateralValue: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralAccountId | No | query parameter: collateralAccountId (number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It compensates partially by specifying the IP weight (6000) and detailing the JSON return structure ('Returns: { rows: ... }'). However, it fails to explicitly declare the read-only/safe nature of the operation (implied by 'Check' but not stated) or mention idempotency, which is essential behavioral context for a data retrieval tool lacking annotation hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently packed into a single line with no redundant filler. It front-loads the primary action, follows with eligibility constraints, and ends with technical metadata (weight and return type). While information-dense, the lack of punctuation between clauses (e.g., between '6000' and 'Returns') slightly hinders readability, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema in the structured metadata, the description adequately compensates by explicitly documenting the return format ('rows' array with specific fields). It also includes the VIP eligibility warning and rate limit weight. It is complete enough for invocation, though it could improve by explaining what 'Locked Value' represents (e.g., total collateral value vs. available collateral).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (all 5 parameters documented), establishing a baseline of 3. The description adds minimal semantic value beyond the schema itself, merely echoing the 'collateralAccountId' parameter name without clarifying its relationship to the 'Locked Value' being queried or providing format examples beyond what the schema already contains.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check') and resource ('Locked Value of VIP Collateral Account'). It identifies the target user segment ('VIP users only') and includes the USER_DATA classification. However, it does not explicitly distinguish this from the similar sibling tool 'get_sapi_v1_loan_vip_collateral_data', leaving ambiguity about when to use 'account' versus 'data' endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides the critical eligibility constraint that 'VIP loan is available for VIP users only', which is a necessary prerequisite. It also includes the IP Weight (6000), informing rate limit considerations. However, it lacks explicit guidance on when to use this tool versus related VIP loan endpoints (e.g., 'get_sapi_v1_loan_vip_ongoing_orders') or what 'Locked Value' specifically refers to in the loan lifecycle.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_collateral_data (B)
Get Collateral Asset Data (USER_DATA) — Get collateral asset data. Weight(IP): 400 Returns: { rows: { collateralCoin: string, _1stCollateralRatio: string, _1stCollateralRange: string, _2ndCollateralRatio: string, _2ndCollateralRange: string, _3rdCollateralRatio: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
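The return snippet flattens the collateral tiers into _1stCollateralRatio/_1stCollateralRange style pairs. A sketch that pairs the tiers back up, reusing `signed_get`; the path is inferred from the tool name, and the underscore-prefixed field names are copied verbatim from the snippet above, which may be a rendering artifact of the catalog rather than the wire format:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
data = signed_get("/sapi/v1/loan/vip/collateral/data", {"collateralCoin": "BTC"})
for row in data["rows"]:
    # Tier field names taken verbatim from the return snippet above.
    for tier in ("_1st", "_2nd", "_3rd"):
        ratio = row.get(f"{tier}CollateralRatio")
        if ratio is not None:
            print(row["collateralCoin"], tier, ratio, row.get(f"{tier}CollateralRange"))
```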
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden. It provides valuable rate limiting info ('Weight(IP): 400') and explicitly details the return structure (rows array with collateralCoin, _1stCollateralRatio fields, total count). However, it doesn't state that this is a safe read-only operation or explain pagination behavior implied by the 'total' field.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense line efficiently packs the title, purpose, rate limit weight, and return schema. While information-rich, it could benefit from spacing between the conceptual description and technical return type specification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for lack of output schema by explicitly documenting the return JSON structure (rows array with specific collateral ratio/range fields, total count). Covers authentication context (USER_DATA) and rate limits. Complete enough for a data retrieval tool with 4 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (signature, timestamp, recvWindow, collateralCoin all documented). The description adds no parameter semantics, but with complete schema coverage, this meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get Collateral Asset Data') and identifies the resource as USER_DATA. The 'VIP' in the tool name distinguishes it from the non-VIP sibling get_sapi_v1_loan_collateral_data, though the description itself doesn't explicitly clarify what VIP means.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this VIP version versus the standard get_sapi_v1_loan_collateral_data or flexible loan versions. No mention of prerequisites like requiring VIP account status or authentication, despite the USER_DATA tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_loanable_data (B)
Get Loanable Assets Data — Get interest rate and borrow limit of loanable assets. The borrow limit is shown in USD value. Weight(IP): 400 Returns: { total: number, rows: { loanCoin: string, _flexibleDailyInterestRate: string, _flexibleYearlyInterestRate: string, _30dDailyInterestRate: string, _30dYearlyInterestRate: string, _60dDailyInterestRate: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| loanCoin | No | Coin loaned | |
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
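A hedged invocation sketch, reusing `signed_get` with a path inferred from the tool name; it assumes that omitting loanCoin returns all loanable assets and that vipLevel defaults to the caller's own level, as the parameter table suggests:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
# Assumes omitting loanCoin returns all loanable assets; vipLevel defaults to
# the caller's own VIP level per the parameter table.
data = signed_get("/sapi/v1/loan/vip/loanable/data", {"loanCoin": "BTC"})
for row in data["rows"]:
    # Exact interest-rate field names are as shown in the return snippet above.
    print(row)
```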
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 400') and provides a detailed inline return schema showing the nested structure including interest rate fields (flexible, 30d, 60d) and total/rows format, which compensates for the lack of a formal output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action but combines multiple elements (title, description, weight, return schema) into a dense single block. While information-rich, the inline JSON return structure reduces readability; line breaks between conceptual sections would improve structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with full schema coverage and no structured output schema, the description adequately completes the picture by documenting the return structure and rate limit weight. For a read-only data retrieval tool, this provides sufficient context for invocation without surprises.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description implies the purpose of parameters (filtering by loanCoin or vipLevel) but does not add semantic context beyond what the schema already provides, such as explaining that omitting loanCoin returns all loanable assets.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'interest rate and borrow limit of loanable assets' with borrow limits 'shown in USD value.' While specific and actionable, it does not explicitly distinguish when to use this VIP-specific endpoint versus the non-VIP sibling 'get_sapi_v1_loan_loanable_data', relying on the tool name to convey the VIP scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives like the standard loanable data endpoint, nor does it explain when to filter by 'loanCoin' versus retrieving all assets. The description provides no prerequisites or workflow context for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_ongoing_orders (A)
Get VIP Loan Ongoing Orders (USER_DATA) — VIP loan is available for VIP users only. Weight(IP): 400 Returns: { rows: { orderId: number, loanCoin: string, totalDebt: string, residualInterest: string, collateralAccountId: string, collateralCoin: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 10; max 100. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| orderId | No | Order id | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral | |
| collateralAccountId | No | query parameter: collateralAccountId (number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description compensates by disclosing 'Weight(IP): 400' (rate limiting), 'USER_DATA' (auth requirement), and detailed JSON return structure inline since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense and front-loaded: leads with action, includes auth type, eligibility constraint, rate limit, and return schema in minimal space with no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter tool with no output schema, the description adequately covers return structure, pagination (via schema), rate limits, and VIP-specific context. Minor gap in explicit pagination behavior or error condition documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description does not add parameter-specific semantics beyond the schema, but the inline return documentation clarifies what fields are populated in results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with resource 'VIP Loan Ongoing Orders' and clearly distinguishes from sibling 'get_sapi_v1_loan_ongoing_orders' via the 'VIP' prefix and constraint 'VIP loan is available for VIP users only'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage constraint ('VIP users only') but lacks explicit comparison to sibling tools (e.g., when to use this vs get_sapi_v1_loan_ongoing_orders) or prerequisites beyond the VIP status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_repay_history (A)
Get VIP Loan Repayment History (USER_DATA) — VIP loan is available for VIP users only. Weight(IP): 400 Returns: { rows: { loanCoin: string, repayAmount: string, collateralCoin: string, repayStatus: string, repayTime: string, orderId: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 10; max 100. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order id | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It effectively discloses rate limit ('Weight(IP): 400') and provides complete return structure schema inline since no output_schema field exists. Indicates authentication requirement via '(USER_DATA)' tag.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently packs essential metadata (purpose, access restriction, rate limit, return type) into minimal space. Front-loaded with action. Inline JSON return structure is dense but necessary given absence of formal output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter financial query tool with no annotations and no output schema, the description compensates effectively by providing the response structure, rate limit weight, and access constraints. Lacks only explicit pagination behavior description (though 'total' field in return hints at it).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (all 9 parameters documented), establishing baseline of 3. Description focuses on return value structure rather than adding parameter semantics, which is acceptable given comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with resource 'VIP Loan Repayment History' and explicitly distinguishes from non-VIP loan tools (sibling 'get_sapi_v1_loan_repay_history' exists) by stating 'VIP loan is available for VIP users only' and tagging '(USER_DATA)'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisite constraint 'VIP loan is available for VIP users only' indicating when to use this tool. However, it does not explicitly name the non-VIP alternative (get_sapi_v1_loan_repay_history) for users who don't meet the VIP criterion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_request_data (C)
Query Application Status (USER_DATA) — Get Application Status Weight(UID): 400 Returns: { total: number, rows: { loanAccountId: string, orderId: string, requestId: string, loanCoin: string, loanAmount: string, collateralAccountId: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It successfully discloses the rate limit weight (Weight(UID): 400) and documents the response structure via the embedded JSON return type. However, it lacks critical safety disclosures (read-only status, idempotency) and behavioral context like pagination limits or data freshness that an annotation-less mutation/query tool should provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured as a dense run-on sentence. It contains tautology ('Query Application Status... Get Application Status') and crams distinct concepts (purpose, weight, return type) without clear separation. The redundancy and lack of formatting reduce readability despite brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and no formal output schema, the description compensates partially by documenting the return structure inline. However, for a financial tool with numerous siblings, it lacks critical differentiation (what constitutes a 'request' vs an 'order') and omits the Title field entirely. It meets minimum viability but leaves significant gaps for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all 5 parameters (limit, current, signature, timestamp, recvWindow). The description itself adds no parameter-specific semantics, relying entirely on the structured schema. This meets the baseline expectation for high-coverage schemas but provides no additional contextual guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Query Application Status' but fails to specify what 'Application' refers to (VIP loan requests) without inferring from the tool name or return fields. While the JSON return snippet clarifies the resource involves loans (loanAccountId, loanCoin), the core purpose statement is vague and tautologically repetitive ('Query... — Get...'). It does not actively distinguish from sibling loan tools like get_sapi_v1_loan_vip_ongoing_orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus related VIP loan endpoints (e.g., when to query request_data vs ongoing_orders or repay_history). The USER_DATA tag implies authentication requirements, but explicit prerequisites, filters, or exclusion criteria are absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_loan_vip_request_interest_rate (C)
Get Borrow Interest Rate (USER_DATA) — Get borrow interest rate. Weight(UID): 400
| Name | Required | Description | Default |
|---|---|---|---|
| loanCoin | No | Max 10 assets, Multiple split by "," | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
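The loanCoin parameter accepts up to 10 comma-separated assets. A sketch of building that filter, reusing `signed_get` with a path inferred from the tool name:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
assets = ["BTC", "ETH", "USDT"]
assert len(assets) <= 10  # schema constraint: max 10 assets per call
rates = signed_get(
    "/sapi/v1/loan/vip/request/interestRate",
    {"loanCoin": ",".join(assets)},
)
```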
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It mentions 'Weight(UID): 400' (rate limit cost) and 'USER_DATA' (auth scope), but fails to describe what data is returned (current rates? request-specific rates?), whether the operation is idempotent, or any side effects. The VIP-specific behavior implied by the tool name is not explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one sentence) and front-loaded with the action, but suffers from tautology ('Get Borrow Interest Rate... Get borrow interest rate'). The em-dash structure is appropriate, but the repetition reduces information density without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex financial domain (crypto lending), lack of output schema, and absence of annotations, the description is insufficient. It omits VIP context, return value structure, and the relationship between 'request' in the name and the interest rate data retrieved. For a 4-parameter authenticated endpoint, more behavioral context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms', 'Max 10 assets'). Since the schema fully documents parameter semantics, the description baseline is 3. The description adds no supplemental parameter context (e.g., that `loanCoin` filters results, or behavior when omitted), so it does not exceed the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action (Get Borrow Interest Rate) and resource, but fails to distinguish this from sibling tools like `get_sapi_v1_loan_vip_loanable_data` or `get_sapi_v1_loan_vip_request_data`. It omits the crucial 'VIP' domain context present in the tool name, leaving ambiguity about whether this applies to standard or VIP loans.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance. While '(USER_DATA)' implicitly signals authentication is required, there is no explicit when-to-use guidance, no mention of prerequisites (e.g., VIP status), and no comparison to alternative loan data endpoints. The agent cannot determine when this tool is preferable to other loan interest rate tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_account_snapshot (A)
Managed sub-account snapshot (For Investor Master Account) — - The query time period must be less than 30 days - Supports query within the last one month only - If startTime and endTime are not sent, records of the last 7 days are returned by default Weight(IP): 2400 Returns: { code: number, msg: string, snapshotVos: { data: { balances: { asset: unknown, free: unknown, locked: unknown }[], totalAssetOfBtc: string }, type: string, updateTime: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | "SPOT", "MARGIN"(cross), "FUTURES"(UM) | |
| email | Yes | Sub-account email | |
| limit | No | min 7, max 30, default 7 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
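A sketch of pulling a snapshot under the stated time constraints, reusing `signed_get`; the path is inferred from the tool name and the email value is a placeholder:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
# Omitting startTime/endTime returns the last 7 days; an explicit window must
# stay under 30 days and within the last month, per the description above.
snapshot = signed_get(
    "/sapi/v1/managed-subaccount/accountSnapshot",
    {"email": "sub@example.com", "type": "SPOT", "limit": 7},
)
for vo in snapshot["snapshotVos"]:
    print(vo["type"], vo["updateTime"], vo["data"]["totalAssetOfBtc"])
```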
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively by disclosing the rate limit weight (2400), strict time window constraints, and the complete return structure including nested objects (snapshotVos, balances). It does not explicitly state read-only safety, though this is implied by the 'get_' prefix.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the purpose, followed by constraints and return schema. However, formatting is inconsistent (mixing em-dashes and hyphens) and the inline JSON return structure, while necessary due to missing output schema, is dense and impacts readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description adequately compensates by including the return type structure, rate limit costs, and temporal constraints. It covers the essential behavioral gaps for an 8-parameter managed sub-account tool, though it could benefit from explicit permission or authentication context notes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds valuable behavioral context for `startTime` and `endTime` parameters (default 7-day behavior when omitted and 30-day limits), but does not significantly elaborate on other parameters like `signature` or `recvWindow` beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource (managed sub-account snapshot) and specific context (For Investor Master Account), distinguishing it from non-managed sub-account tools and other managed sub-account endpoints (like asset or margin queries). However, it lacks an explicit action verb (e.g., 'Retrieve' or 'Fetch'), relying on the implicit 'snapshot' noun.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific temporal constraints (30-day max window, last month only, 7-day default) that guide when the tool can be used effectively. However, it lacks explicit guidance on when to use this versus sibling tools like `get_sapi_v1_managed_subaccount_asset` or `get_sapi_v1_managed_subaccount_info`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_asset (C)
Managed sub-account asset details (For Investor Master Account) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It provides rate limiting information ('Weight(IP): 1'), but fails to declare the read-only nature of the operation, response format, pagination behavior, or caching characteristics that would be essential for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose. It avoids redundancy with the structured schema data. However, given the tool's complexity (managed investments, authentication requirements), it is arguably too terse to be maximally useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial data retrieval tool with no output schema and no annotations, the description is inadequate. It fails to describe what data is returned (specific assets, balances, currencies), error conditions, or the relationship between the Investor Master Account and the sub-account email parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, signature, timestamp, recvWindow are all documented), so the baseline score applies. The description adds no additional parameter semantics beyond what the schema provides, but the schema is self-sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (Managed sub-account asset details) and target audience (For Investor Master Account), using specific terminology. However, it does not explicitly distinguish from sibling managed subaccount tools like `get_sapi_v1_managed_subaccount_margin_asset` or clarify what 'asset details' specifically entails (balances, holdings, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase '(For Investor Master Account)' provides implicit context about required permissions, but there is no explicit guidance on when to use this tool versus alternatives like `get_sapi_v1_managed_subaccount_account_snapshot` or `get_sapi_v1_managed_subaccount_fetch_future_asset`, nor any prerequisites or exclusions stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_deposit_address (A)
Get Managed Sub-account Deposit Address (For Investor Master Account) — Get investor's managed sub-account deposit address. Weight(UID): 1 Returns: { coin: string, address: string, tag: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Coin name | |
| email | Yes | query parameter: email (string) | |
| network | No | query parameter: network (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
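A sketch showing the optional network parameter, reusing `signed_get`; the path is inferred from the tool name, the email is a placeholder, and the assumption that omitting network falls back to the coin's default network is not confirmed by the listing:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
params = {"email": "sub@example.com", "coin": "USDT"}  # placeholder sub-account
params["network"] = "TRX"  # optional; omitting it presumably selects a default network
address = signed_get("/sapi/v1/managed-subaccount/deposit/address", params)
print(address["coin"], address["address"], address.get("tag", ""))
```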
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully adds rate limit information ('Weight(UID): 1') and return value structure ('Returns: { coin: string, address: string, tag: string, ... }'). However, it omits idempotency characteristics, error scenarios, and whether the address is persistent or generated per request.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the key action, but contains redundancy ('Get Managed Sub-account Deposit Address... Get investor's managed sub-account deposit address'). The technical appendices (Weight, Returns) are valuable but could be better structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter financial API tool with no output schema, the description partially compensates by documenting the return structure. However, it lacks explanation of the 'managed sub-account' concept, network parameter usage patterns, and the relationship between the email parameter and master account permissions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, providing baseline documentation for all 6 parameters. The description adds minimal semantic context beyond the schema, merely implying through the title that the 'email' parameter refers to the managed sub-account's email.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (Managed Sub-account Deposit Address) and distinguishes from regular sub-account tools by specifying 'Managed' and 'For Investor Master Account'. However, it doesn't explicitly contrast with the similar sibling get_sapi_v1_capital_deposit_sub_address or explain what makes a sub-account 'managed'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'For Investor Master Account' provides implicit context about the required permission level, but there is no explicit guidance on when to use this tool versus get_sapi_v1_capital_deposit_sub_address or prerequisites for the managed sub-account feature.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_fetch_future_asset (B)
Query Managed Sub-account Futures Asset Details (For Investor Master Account) — Investors can use this API to query managed sub-account futures asset details. Returns: { code: number, message: string, snapshotVos: { type: string, updateTime: number, data: { assets: { asset: unknown, marginBalance: unknown, walletBalance: unknown }[], position: { symbol: unknown, entryPrice: unknown, markPrice: unknown, positionAmt: unknown }[] } }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | query parameter: email (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
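A sketch of walking the nested snapshotVos return structure documented inline above, reusing `signed_get` with a path inferred from the tool name and a placeholder email:

```python
# Reuses signed_get from the earlier sketch; path inferred from the tool name.
data = signed_get(
    "/sapi/v1/managed-subaccount/fetch-future-asset",
    {"email": "sub@example.com"},  # placeholder sub-account
)
# Nested shape per the inline return structure above.
for vo in data["snapshotVos"]:
    for asset in vo["data"]["assets"]:
        print(asset["asset"], asset["walletBalance"], asset["marginBalance"])
    for pos in vo["data"]["position"]:
        print(pos["symbol"], pos["positionAmt"], pos["entryPrice"], pos["markPrice"])
```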
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates partially by including the complete return structure (JSON schema) inline, revealing the nested data model (snapshotVos, assets, positions). However, it omits critical behavioral details such as authentication requirements, rate limiting, error handling, or whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from redundancy ('Query... Details' followed by 'query managed sub-account futures asset details'). While front-loaded with the purpose, the inclusion of the raw JSON return structure, though valuable, creates a dense, single-block text that could be better structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description adequately compensates by documenting the return structure inline. However, it fails to explain the semantics of the nested fields (e.g., what snapshotVos represents, the distinction between marginBalance and walletBalance), leaving gaps in contextual understanding for a 4-parameter tool with complex return data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, signature, timestamp, recvWindow are all documented), establishing a baseline of 3. The description itself adds no additional semantic context for the parameters (e.g., what specific email format is expected, or that signature requires HMAC-SHA256), relying entirely on the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Query') and resource ('Managed Sub-account Futures Asset Details'), and specifies the scope ('For Investor Master Account'). It distinguishes itself from sibling tools like get_sapi_v1_managed_subaccount_asset and get_sapi_v1_managed_subaccount_margin_asset by explicitly targeting 'Futures' assets, though it doesn't clarify differences from the snapshot variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it identifies the required role ('Investor Master Account'), it provides no explicit guidance on when to use this tool versus closely related siblings (e.g., get_sapi_v1_managed_subaccount_asset for spot assets or get_sapi_v1_managed_subaccount_account_snapshot for historical data), nor does it mention prerequisites or constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_info (B)
Query Managed Sub-account List (For Investor) — Get investor's managed sub-account list. Weight(UID): 60 Returns: { total: number, managerSubUserInfoVoList: { rootUserId: number, managersubUserId: number, bindParentUserId: number, email: string, insertTimeStamp: number, bindParentEmail: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| email | Yes | query parameter: email (string) | |
| limit | No | Default 500; max 1000. | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(UID): 60') and provides the complete return structure inline, which is valuable. However, it lacks explicit declaration of read-only safety, error conditions, or idempotency that would help an agent understand side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the action but contains a slight tautology ('Query... Get...'). The inline JSON return structure is information-dense and slightly reduces readability, though it serves a necessary purpose given the lack of an output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by documenting the return structure (total count and managerSubUserInfoVoList fields). For a 6-parameter query tool with 100% schema coverage, this provides sufficient context, though mentioning the read-only nature would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description does not add additional semantic context for parameters (e.g., explaining that 'email' refers to the investor's email, or how to generate the 'signature'), but the schema is self-sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Query', 'Get') and clearly identifies the resource ('Managed Sub-account List'). The '(For Investor)' suffix effectively distinguishes this tool from sibling transaction log tools like get_sapi_v1_managed_subaccount_query_trans_log_for_investor and get_sapi_v1_managed_subaccount_query_trans_log_for_trade_parent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the '(For Investor)' label provides implicit context about the user role, there is no explicit guidance on when to use this tool versus alternatives like get_sapi_v1_sub_account_list (regular subaccounts) or the transaction log variants. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_margin_asset (B)
Query Managed Sub-account Margin Asset Details (For Investor Master Account) — Investor can use this api to query managed sub account margin asset details Returns: { marginLevel: string, totalAssetOfBtc: string, totalLiabilityOfBtc: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | query parameter: email (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It implies read-only behavior via the verb 'Query' and partially compensates for the missing output schema by listing key return fields (marginLevel, totalAssetOfBtc, totalLiabilityOfBtc). However, it lacks disclosure on rate limits, error conditions, data freshness, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence containing a tautology ('Query... Investor can use this api to query...'), which is redundant. However, the appended 'Returns:' clause efficiently packs useful return-value information. The structure is front-loaded but could be tighter by eliminating the restatement of the action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and an output schema, the description compensates reasonably well by specifying the return structure with key financial fields. It also clarifies the specific domain (margin assets vs other types). While it could elaborate on the meaning of 'marginLevel' or liability calculations, it meets the minimum viable threshold for a 4-parameter query tool by providing critical return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, timestamp, signature, recvWindow all documented). The description adds no additional semantic context for these parameters (e.g., that 'email' refers to the sub-account email, or format details for signature), but the baseline score of 3 is appropriate given the schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Query') and clearly identifies the resource ('Managed Sub-account Margin Asset Details'). It also specifies the required caller context ('For Investor Master Account'). However, it does not explicitly differentiate from the similar sibling tool 'get_sapi_v1_managed_subaccount_asset' (likely for spot assets), leaving ambiguity about when to use this specific endpoint versus the general asset endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description identifies who can use the tool ('Investor', 'Master Account') but provides no guidance on when to use it versus alternatives, prerequisites for the sub-account setup, or conditions where this query would fail. With multiple sibling tools for managed sub-accounts (spot assets, futures assets, snapshots), the absence of comparative guidance is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_query_trans_log (C)
Query Managed Sub Account Transfer Log (For Trading Team Sub Account)(USER_DATA) — Query Managed Sub Account Transfer Log (For Trading Team Sub Account) Weight(UID): 60 Returns: { count: number, managerSubTransferHistoryVos: { fromEmail: string, fromAccountType: string, toEmail: string, toAccountType: string, asset: string, amount: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 500; max 1000. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| transfers | Yes | Transfer Direction | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transferFunctionAccountType | Yes | Transfer function account type |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rate limit weight 'Weight(UID): 60' and marks as 'USER_DATA' (implying auth requirements). Provides inline return structure schema since no output_schema exists. However, lacks details on error conditions, pagination behavior (cursor vs offset), or idempotency given it's a log query.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Suffers from significant repetition: the phrase 'Query Managed Sub Account Transfer Log (For Trading Team Sub Account)' appears twice. The return schema and weight information are crammed into a dense block rather than structured clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output_schema by documenting return fields (count, managerSubTransferHistoryVos with nested fields). Covers rate limiting. However, fails to explain the relationship between the three similar transfer log endpoints or provide pagination guidance for the 9-parameter query.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (e.g., 'Default 1' for page, 'UTC timestamp in ms' for time fields). The description adds no additional parameter semantics beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Query) and resource (Managed Sub Account Transfer Log) and distinguishes scope with '(For Trading Team Sub Account)', differentiating it from sibling tools like '...for_investor' and '...for_trade_parent'. However, the title and description are nearly identical, creating redundancy rather than adding clarification about what constitutes a 'Trading Team Sub Account'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the sibling variants (for_investor, for_trade_parent). Does not mention prerequisites for the 'transferFunctionAccountType' parameter or when pagination (page/limit) is necessary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
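Because the description never states a pagination strategy, an agent must infer one from the `page`/`limit` defaults. A hedged sketch of draining the log under that inference, where `call_tool` is a hypothetical wrapper that invokes the MCP tool and returns parsed JSON, and a short page is assumed to mark the end:

```python
def fetch_transfer_log(call_tool, base_params: dict, limit: int = 500):
    """Drain an offset-paged transfer log one page at a time."""
    page, rows = 1, []
    while True:
        resp = call_tool({**base_params, "page": page, "limit": limit})
        batch = resp.get("managerSubTransferHistoryVos", [])
        rows.extend(batch)
        if len(batch) < limit:  # assumption: a short page is the last page
            return rows
        page += 1
```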
get_sapi_v1_managed_subaccount_query_trans_log_for_investor (A)
Query Managed Sub Account Transfer Log (For Investor Master Account) — Investor can use this api to query managed sub account transfer log. This endpoint is available for investor of Managed Sub-Account. A Managed Sub-Account is an account type for investors who value flexibility in asset allocation and account application, while delegating trades to a professional trading team. Weight(IP): 1 Returns: { count: number, managerSubTransferHistoryVos: { fromEmail: string, fromAccountType: string, toEmail: string, toAccountType: string, asset: string, amount: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| email | Yes | query parameter: email (string) | |
| limit | No | Default 500; max 1000. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| transfers | No | Transfer Direction (FROM/TO) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transferFunctionAccountType | No | Transfer function account type (SPOT/MARGIN/ISOLATED_MARGIN/USDT_FUTURE/COIN_FUTURE) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden well by disclosing IP rate limit weight (1), return structure schema (count, managerSubTransferHistoryVos with field types), and user role restrictions. It implies read-only safety through 'Query' verb but doesn't explicitly state it is non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the key action but includes a verbose definition of Managed Sub-Account ('an account type for investors who value flexibility...') that doesn't aid tool selection. The inline return schema is information-dense but necessary given no output schema exists.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter endpoint with no output schema, the description compensates well by documenting the return structure inline and specifying investor-role restrictions. It could be improved by explicitly contrasting with the 'trade_parent' sibling to prevent role-confusion errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema documents parameters adequately (e.g., 'UTC timestamp in ms', 'Transfer Direction (FROM/TO)'). The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool queries 'Managed Sub Account Transfer Log' with specific scope 'For Investor Master Account', clearly distinguishing it from sibling 'for_trade_parent' variant. Specific verb (Query) + resource (Transfer Log) + user role (Investor) provides complete clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies this is 'available for investor of Managed Sub-Account' and defines what that account type is, implying correct usage context. However, it lacks explicit guidance on when to use the sibling 'for_trade_parent' variant instead, or other alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_managed_subaccount_query_trans_log_for_trade_parent (A)
Query Managed Sub Account Transfer Log (For Trading Team Master Account) — Trading team can use this api to query managed sub account transfer log. This endpoint is available for trading team of Managed Sub-Account. A Managed Sub-Account is an account type for investors who value flexibility in asset allocation and account application, while delegating trades to a professional trading team Weight(IP): 60 Returns: { count: number, managerSubTransferHistoryVos: { fromEmail: string, fromAccountType: string, toEmail: string, toAccountType: string, asset: string, amount: string, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| email | Yes | query parameter: email (string) | |
| limit | No | Default 500; max 1000. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| transfers | No | Transfer Direction (FROM/TO) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| transferFunctionAccountType | No | Transfer function account type (SPOT/MARGIN/ISOLATED_MARGIN/USDT_FUTURE/COIN_FUTURE) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully documents the IP rate limit weight (60) and provides an inline JSON structure of the return values (count, managerSubTransferHistoryVos fields), which is crucial given the lack of an output schema. It does not disclose error conditions, pagination cursor behavior, or authentication requirements beyond the signature parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but becomes cluttered. It includes a verbose definition of Managed Sub-Accounts ('an account type for investors who value flexibility...') and crams the return schema into a single JSON block at the end. While every sentence adds some value, the structure could be tighter and the account type definition condensed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description compensates effectively by documenting the return structure inline. It also covers the IP weight for rate limiting. With 100% input schema coverage and 10 parameters, the description provides sufficient context for invocation, though it omits details on error handling and pagination mechanics (despite including page/limit parameters).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description does not add semantic details about specific parameters (e.g., explaining the email format, signature generation, or time ranges), but the schema is self-sufficient. The description focuses on return values rather than input parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (Query), resource (Managed Sub Account Transfer Log), and specific scope (For Trading Team Master Account). It distinguishes itself from the sibling 'for_investor' variant by explicitly stating 'Trading team can use this api' and 'available for trading team'. However, it does not clarify the distinction from the sibling 'get_sapi_v1_managed_subaccount_query_trans_log' (without suffix), leaving ambiguity about which log tool to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage context by stating it is for trading teams of Managed Sub-Accounts and defines what those accounts are. However, it lacks explicit when-to-use guidance, exclusion criteria, or comparison to the generic 'query_trans_log' sibling tool. The user must infer applicability from the 'trading team' qualifier alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_account (B)
Query Cross Margin Account Details (USER_DATA) — Weight(IP): 10 Returns: { created: boolean, borrowEnabled: boolean, marginLevel: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limit weight (10), authentication requirements (USER_DATA), and return value structure (created, borrowEnabled, etc.), but fails to explicitly state safety properties (read-only), error conditions, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently packs the purpose, weight, auth category, and return structure. While information-rich, the embedded JSON return snippet slightly reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter query tool with no output schema, the description compensates partially by listing return fields. However, lacking annotations, it would need to explicitly state that this is a safe read operation and clarify error scenarios in order to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow all documented). The description adds no parameter-specific guidance, but with complete schema coverage, this meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Query) and resource (Cross Margin Account Details), specifying this is for cross margin as opposed to isolated margin or spot accounts. However, it does not explicitly contrast with sibling tools like get_sapi_v1_margin_isolated_account to guide selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes technical metadata '(USER_DATA)' and 'Weight(IP): 10' but provides no explicit guidance on when to use this tool versus alternatives like the isolated margin account endpoint or spot account endpoints. It lacks 'when-to-use' or 'prerequisites' context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_all_assets (C)
[DISCOVERY] Get All Margin Assets (MARKET_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the rate limit weight 'Weight(IP): 1' and categorizes the endpoint as 'MARKET_DATA', implying read-only behavior. However, it lacks details about response format, pagination, or error conditions that would be expected without annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the action. However, the bracketed metadata '[DISCOVERY]' and parenthetical '(MARKET_DATA)' create a fragmented structure that reduces readability. It is appropriately sized for a single-parameter endpoint but could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema and no annotations, the description should explain what data is returned (e.g., asset details, marginability status). It only states the rate limit and action, leaving the response structure undocumented for a tool that queries financial data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('query parameter: asset (string)'), establishing a baseline of 3. The main description does not add additional semantic context about what the asset parameter represents (e.g., specific ticker format) or provide examples beyond what the schema already contains.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get All Margin Assets' which provides a clear verb and resource. It distinguishes itself from sibling tools like get_sapi_v1_margin_account or get_sapi_v1_margin_all_pairs by focusing specifically on assets. The '[DISCOVERY]' and 'MARKET_DATA' tags add context about the endpoint category.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives like get_sapi_v1_margin_account or get_sapi_v1_capital_config_getall. It does not mention prerequisites (e.g., needing margin account access) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_all_order_list (B)
[DISCOVERY] Query Margin Account's all OCO (USER_DATA) — Retrieves all OCO for a specific margin account based on provided optional parameters Weight(IP): 200
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default Value: 500; Max Value: 1000 | |
| fromId | No | If supplied, neither `startTime` nor `endTime` can be provided | |
| symbol | No | Mandatory for isolated margin, not supported for cross margin | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses rate limit weight ('Weight(IP): 200') and authentication type ('USER_DATA'), but lacks details on pagination behavior, response structure, or error cases despite having no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence contains redundant clauses ('Query...' — 'Retrieves...') and metadata noise ('[DISCOVERY]'). The rate limit weight is useful but awkwardly appended. Functional but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for identifying the tool's basic function, but lacks explanation of what OCO (One Cancels Other) means, what data is returned (array of orders?), or how to handle pagination with fromId/limit. Given 9 parameters and no output schema, more context on the response would be valuable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description mentions 'based on provided optional parameters' but adds no semantic meaning beyond the schema's existing documentation of limit, fromId, symbol, etc.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Query'/'Retrieves') and resource (Margin Account OCO orders), and specifies 'all' which distinguishes it from sibling 'open' order list tools. However, it doesn't explicitly clarify the scope difference (historical vs. active) from get_sapi_v1_margin_open_order_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like get_sapi_v1_margin_open_order_list or get_sapi_v1_margin_order_list. It mentions 'optional parameters' but doesn't explain filtering strategy or prerequisites like requiring margin account privileges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
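The `fromId` exclusion noted in the schema ('If supplied, neither `startTime` nor `endTime` can be provided') is exactly the kind of parameter interaction an agent can trip over. A small client-side check of that documented rule; the helper itself is hypothetical:

```python
def oco_history_params(from_id: int | None = None,
                       start_ms: int | None = None,
                       end_ms: int | None = None) -> dict:
    """Reject the documented invalid combination before the request is sent."""
    if from_id is not None and (start_ms is not None or end_ms is not None):
        raise ValueError("fromId cannot be combined with startTime/endTime")
    params = {}
    if from_id is not None:
        params["fromId"] = from_id
    if start_ms is not None:
        params["startTime"] = start_ms
    if end_ms is not None:
        params["endTime"] = end_ms
    return params
```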
get_sapi_v1_margin_all_orders (A)
Query Margin Account's All Orders (USER_DATA) — - If orderId is set, it will get orders >= that orderId. Otherwise most recent orders are returned. - For some historical orders cummulativeQuoteQty will be < 0, meaning the data is not available at this time. Weight(IP): 200 Request Limit: 60 times/min per IP
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and succeeds well: it explains pagination behavior (orderId >= logic), data quality quirks (cummulativeQuoteQty < 0), and rate limits (Weight: 200, 60 req/min). It does not explicitly state read-only nature, though implied by 'Query'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is high with every sentence earning its place (purpose, pagination, data quirks, rate limits, prerequisites). However, formatting is awkward (em-dash followed by bullet points) and could be structured more cleanly for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description appropriately focuses on input constraints and operational behavior. However, for a data retrieval tool, it omits description of what fields/structure are returned in the response, which would be necessary for complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context about the orderId parameter specifically (used for pagination with >= logic) and default behavior when omitted (returns most recent), which is not evident from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
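To make the documented cursor semantics concrete, a sketch of walking the full history with the 'orders >= orderId' rule. It assumes pages come back ascending by `orderId`, which the description does not state, and `call_tool` is a hypothetical wrapper returning the parsed order list:

```python
def fetch_all_margin_orders(call_tool, symbol: str, limit: int = 500):
    """Page through order history using orderId as an inclusive cursor."""
    orders, cursor = [], 0
    while True:
        batch = call_tool({"symbol": symbol, "orderId": cursor, "limit": limit})
        orders.extend(batch)
        if len(batch) < limit:  # assumption: a short page ends the walk
            return orders
        cursor = batch[-1]["orderId"] + 1  # '>=' semantics, so step past the last id
```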
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Query) and resource (Margin Account's All Orders) with the USER_DATA tag indicating private data scope. However, it does not explicitly differentiate from similar sibling tools like get_sapi_v1_margin_open_orders or get_sapi_v1_margin_order in the text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite workflow ('First call exchange info or symbol listing...'), which is strong actionable guidance. Lacks explicit 'when-not-to-use' differentiation from alternatives like the open_orders endpoint, but the prerequisite adds significant value.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_all_pairs (C)
[DISCOVERY] Get All Cross Margin Pairs (MARKET_DATA) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the rate limit weight ('Weight(IP): 1') and categorizes the endpoint as 'MARKET_DATA', but fails to address authentication requirements, response format, pagination, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely terse and front-loaded with metadata tags ([DISCOVERY], MARKET_DATA). While efficient, the bracketed metadata slightly obscures the human-readable portion, though every segment conveys distinct operational information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is insufficient for a data retrieval tool. It fails to describe what data is returned (e.g., leverage info, trading status, pair symbols) or how to interpret the response, leaving critical gaps for an invoking agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its single parameter ('symbol'), establishing a baseline score of 3. The description itself adds no additional semantic context about the parameter's expected format or usage beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'Cross Margin Pairs' with a specific verb ('Get'), clearly identifying the resource and distinguishing it from isolated margin or other trading pair siblings. However, it does not clarify the apparent tension between the 'All Pairs' naming and the required 'symbol' parameter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_sapi_v1_margin_isolated_all_pairs' or 'get_sapi_v1_margin_account'. It lacks prerequisites, exclusions, or contextual triggers for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_available_inventory (B)
Query Margin Available Inventory (USER_DATA) — Margin available Inventory query Weight(UID): 50 Returns: { assets: { MATIC: string, STPT: string, TVK: string, SHIB: string }, updateTime: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | query parameter: type ("MARGIN" | "ISOLATED") | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the rate limit weight (50) and return schema structure, which is valuable. However, it fails to mention authentication requirements (beyond the USER_DATA parenthetical), caching behavior, or whether this represents inventory available for borrowing, lending, or trading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence fragment containing the purpose, weight limit, and return schema. Slightly redundant ('Query... query') and run-on, but efficiently packs technical details. The return structure inline is helpful though unconventional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a query tool with three parameters and no output schema, the description adequately documents the return structure inline. However, it lacks explanation of what 'available inventory' means in the margin context (e.g., available for borrowing vs collateral) and omits error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (type, timestamp, signature all documented). The description adds no parameter-specific context, but with complete schema coverage, no compensation is needed. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it queries 'Margin Available Inventory' and includes the USER_DATA tag. It distinguishes itself from sibling margin tools by specifying the exact return structure with example asset fields (MATIC, STPT, TVK, SHIB), clarifying this retrieves specific available asset balances rather than orders, trades, or account status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus other margin inventory/account endpoints (like get_sapi_v1_margin_account or get_sapi_v1_margin_max_borrowable). No mention of prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_borrow_repay (B)
Query borrow/repay records in Margin account(USER_DATA) — Query borrow/repay records in Margin account - txId or startTime must be sent. txId takes precedence. Response in descending order - If an asset is sent, data within 30 days before endTime; If an asset is not sent, data within 7 days before endTime - If neither startTime nor endTime is sent, the recent 7-day data will be returned. - startTime set as endTime - 7 days by default, endTime set as current time by default Weight(IP): 10 Returns: { rows: { isolatedSymbol: string, amount: string, asset: string, interest: string, principal: stri
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| txId | No | tranId in POST /sapi/v1/margin/loan | |
| type | Yes | BORROW or REPAY | |
| asset | Yes | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses response ordering (descending), rate limit Weight(IP): 10, and time window behavior. Missing an explicit safety declaration (read-only), though one is implied by 'Query'. The return value description is truncated ('principal: stri').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Suffers from a redundant opening sentence repeated after the em-dash, and from disjointed dash-separated clauses. Ends abruptly mid-word ('stri'). The contradiction regarding the asset requirement creates confusion.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Attempts to describe return structure but is cut off. Provides adequate filtering logic for an 11-parameter endpoint, but the asset parameter contradiction and missing output schema description leave significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% coverage, the description adds critical constraint logic (txId/startTime requirement, precedence). However, it contradicts the schema regarding the 'asset' parameter: schema marks it required, but description states 'If an asset is sent... If an asset is not sent...' implying optionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Query) and resource (borrow/repay records in Margin account) clearly. Repetitive phrasing ('Query... — Query...') detracts slightly, but the USER_DATA tag and distinction from the POST sibling (post_sapi_v1_margin_borrow_repay) is clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit parameter constraints: 'txId or startTime must be sent' and 'txId takes precedence.' Details time window logic (7-day vs 30-day lookback) and defaults. Lacks explicit comparison to other margin history tools (e.g., get_sapi_v1_margin_interest_history).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
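The txId/startTime constraint and its precedence rule lend themselves to client-side validation. A sketch of a parameter builder enforcing the documented rules; the server-side default window (startTime = endTime - 7 days) is left to the server rather than reimplemented here:

```python
from datetime import datetime, timezone

def borrow_repay_query(type_: str, asset: str, tx_id: int | None = None,
                       start: datetime | None = None,
                       end: datetime | None = None) -> dict:
    """Build query parameters honoring 'txId or startTime must be sent'."""
    if tx_id is None and start is None:
        raise ValueError("txId or startTime must be sent")
    end = end or datetime.now(timezone.utc)
    params = {"type": type_, "asset": asset,
              "endTime": int(end.timestamp() * 1000)}
    if tx_id is not None:
        params["txId"] = tx_id  # documented to take precedence over the time range
    else:
        params["startTime"] = int(start.timestamp() * 1000)
    return params
```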
get_sapi_v1_margin_capital_flow (C)
Get cross or isolated margin capital flow(USER_DATA) — Get cross or isolated margin capital flow Weight(IP): 100
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type ("TRANSFER" | "BORROW" | "REPAY" | "BUY_INCOME" | "BUY_EXPENSE" | "SELL_INCOME" | "SELL_EXPENSE" | "TRADING_COMMISSION" | "BUY_LIQUIDATION" | "SELL_LIQUIDATION" | "REPAY_LIQUIDATION" | "OTHER_LIQUIDATION" | "LIQUIDATION_FEE" | "SMALL_BALANCE_CONVERT" | "COMMISSION_RETURN" | "SMALL_CONVERT") | |
| asset | No | query parameter: asset (string) | |
| limit | No | The number of data items returned each time is limited. Default 500; Max 1000. | |
| fromId | No | If fromId is set, the data with id > fromId will be returned. Otherwise the latest data will be returned | |
| symbol | No | Required when querying isolated data | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | Only supports querying the data of the last 90 days | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden but provides minimal behavioral context. The '(USER_DATA)' tag indicates authentication is required and 'Weight(IP): 100' hints at rate limiting cost, but the description lacks explicit read-only safety confirmation, pagination behavior details, or data retention policies beyond the parameter-level 90-day note.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description wastefully repeats 'Get cross or isolated margin capital flow' twice, separated only by the '(USER_DATA)' tag, with 'Weight(IP): 100' appended at the end. This redundancy consumes space without adding value, and the technical tags are poorly integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema provided and 10 parameters including complex filtering (type, time ranges, pagination), the description inadequately explains what data structure or fields to expect in the response. It also fails to clarify the aggregation nature of 'capital flow' versus querying specific transaction types individually through sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured documentation carries the semantic weight. The description adds no parameter-specific context beyond what the schema already provides (e.g., it doesn't clarify the 'type' enum categories or explain that 'symbol' is conditionally required for isolated margin). Baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it retrieves 'cross or isolated margin capital flow' which identifies the resource, but fails to define what 'capital flow' encompasses (transfers, borrow/repay, liquidation events, etc.) or distinguish it from siblings like get_sapi_v1_margin_transfer or get_sapi_v1_margin_borrow_repay. The scope is hinted at only by the 'type' parameter enum in the schema.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the numerous sibling margin history tools (e.g., get_sapi_v1_margin_transfer, get_sapi_v1_margin_borrow_repay). No mention of prerequisites such as margin account activation or the 90-day query limitation mentioned only in the startTime parameter schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
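The `fromId` cursor buried in the schema ('data with id > fromId') implies a forward walk from a known starting id. A sketch of that walk under stated assumptions: rows carry an `id` field, pages come back ascending once `fromId` is set, and `call_tool` is a hypothetical wrapper:

```python
def fetch_capital_flow(call_tool, base_params: dict, from_id: int, limit: int = 500):
    """Walk the capital-flow log forward using the exclusive fromId cursor."""
    rows = []
    while True:
        batch = call_tool({**base_params, "fromId": from_id, "limit": limit})
        rows.extend(batch)
        if len(batch) < limit:  # assumption: a short page ends the walk
            return rows
        from_id = batch[-1]["id"]  # exclusive: the next page starts after this id
```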
get_sapi_v1_margin_cross_margin_collateral_ratio (B)
Cross margin collateral ratio (MARKET_DATA) — Weight(IP): 100
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It provides valuable rate-limiting context ('Weight(IP): 100') and categorizes the endpoint as 'MARKET_DATA', suggesting it retrieves public market information rather than private account data. However, it omits return format details, caching behavior, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is maximally efficient with zero waste: the resource name appears first, followed by categorical metadata (MARKET_DATA), and operational constraint (rate limit weight). Every element serves a distinct purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (zero parameters) and absence of output schema or annotations, the description provides the essential identification and rate limit information. However, it fails to describe the return value structure or explain what the collateral ratio metric represents, leaving gaps for an agent trying to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description correctly implies no user input is required for this retrieval operation, though it does not explicitly state 'no parameters required'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource ('Cross margin collateral ratio') and implies the retrieval action via the tool name prefix 'get_'. It distinguishes from siblings by specifying 'Cross margin' (versus isolated margin) and 'collateral ratio' (versus general margin data), though it assumes domain knowledge of what collateral ratio means.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to select this tool versus similar siblings like 'get_sapi_v1_margin_cross_margin_data' or isolated margin alternatives. While the '(MARKET_DATA)' tag categorizes the endpoint type, there is no 'when to use' or 'when not to use' instruction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_cross_margin_data (B)
Query Cross Margin Fee Data (USER_DATA) — Get cross margin fee data collection with any vip level or user's current specific data as https://www.binance.com/en/margin-fee Weight(IP): 1 when coin is specified; 5 when the coin parameter is omitted
| Name | Required | Description | Default |
|---|---|---|---|
| coin | No | Coin name | |
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
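To make the signed-call requirements above concrete, here is a minimal sketch of how a client might invoke this endpoint, assuming the standard Binance HMAC-SHA256 signing convention for USER_DATA requests. The `signed_get` helper, the placeholder credentials, and the REST path (inferred from the tool name) are illustrative assumptions, not part of the tool definition.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

import requests

API_KEY = "YOUR_API_KEY"        # placeholder; supply real credentials
API_SECRET = "YOUR_API_SECRET"  # placeholder
BASE_URL = "https://api.binance.com"

def signed_get(path: str, params: dict):
    """Issue a signed USER_DATA GET: HMAC-SHA256 over the query string.

    Returns the parsed JSON (object or array, endpoint-dependent).
    """
    params = {**params, "timestamp": int(time.time() * 1000),
              "recvWindow": 5000}  # optional; must not exceed 60000
    query = urlencode(params)
    signature = hmac.new(API_SECRET.encode(), query.encode(),
                         hashlib.sha256).hexdigest()
    resp = requests.get(f"{BASE_URL}{path}?{query}&signature={signature}",
                        headers={"X-MBX-APIKEY": API_KEY})
    resp.raise_for_status()
    return resp.json()

# Specifying `coin` keeps the call at Weight(IP) 1; omitting it
# returns all coins at Weight(IP) 5, per the description above.
fee_data = signed_get("/sapi/v1/margin/crossMarginData", {"coin": "BNB"})
```

Later sketches in this listing reuse `signed_get` rather than repeating the signing boilerplate.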
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It successfully provides rate-limiting weights (1 vs 5) based on parameter usage and includes the 'USER_DATA' tag indicating authentication scope. However, it lacks disclosure of return format, pagination behavior, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single run-on sentence mixing the purpose statement, reference URL, and rate limiting details. The weight information is valuable but poorly integrated, creating a cluttered reading experience without clear sentence boundaries or prioritization of information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description should ideally characterize the returned data structure or content. It mentions 'fee data collection' but doesn't specify what fields (rates, tiers, symbols) are returned. The rate limiting context partially compensates but doesn't fully address the missing output contract.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description adds valuable semantic context: it explains that vipLevel defaults to the user's current level, and crucially, that omitting the coin parameter increases the rate limit weight from 1 to 5. This parameter interaction effect is not captured in the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Query') and resource ('Cross Margin Fee Data'), distinguishing it from isolated margin siblings. However, the phrasing is awkward with the URL reference embedded in the sentence structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., when to specify a coin vs. omit it, or when to query specific VIP levels vs. current user data). The description states what it does but not when to prefer it over similar margin data tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_delist_scheduleBInspect
Get tokens or symbols delist schedule for cross margin and isolated margin (MARKET_DATA) — Get tokens or symbols delist schedule for cross margin and isolated margin Weight(IP): 100
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(IP): 100') and implies read-only access via 'MARKET_DATA', but fails to mention authentication requirements or what the returned schedule contains (e.g., delist dates, asset pairs).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly repetitive, stating 'Get tokens or symbols delist schedule for cross margin and isolated margin' twice with only the trailing metadata differing. This redundancy wastes space without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with simple parameters and no output schema, the description covers the basic operational context. However, it lacks details on return format, pagination behavior, or the specific data fields returned in the delist schedule.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters (signature, timestamp, recvWindow). The description does not add parameter-specific semantics beyond the schema, which is acceptable given the complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the verb (Get), resource (tokens or symbols delist schedule), and specific scope (cross margin and isolated margin). It effectively distinguishes this tool from the sibling get_sapi_v1_spot_delist_schedule by explicitly mentioning margin types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'MARKET_DATA' tag implies read-only market data retrieval, and 'Weight(IP): 100' indicates rate limit costs. However, there is no explicit guidance on when to use this versus the spot delist schedule or other margin info tools, nor are authentication prerequisites stated despite the signature requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_exchange_small_liabilityBInspect
Get Small Liability Exchange Coin List (USER_DATA) — Query the coins which can be small liability exchange Weight(UID): 100
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It includes the rate limit 'Weight(UID): 100' and indicates USER_DATA scope (authentication required), but it fails to disclose whether the operation is read-only or idempotent, and it does not describe the response structure or format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, compact sentence without redundancy. Technical metadata (USER_DATA, Weight) is appended efficiently, though slightly dense. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description minimally indicates the return is a 'Coin List' but does not elaborate on what fields or coin identifiers are returned. It is adequate for a simple list endpoint but leaves gaps regarding the actual response payload structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow are all documented). The description adds no additional parameter semantics beyond the schema, meeting the baseline expectation for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get', 'Query') and resource ('Small Liability Exchange Coin List'). It implicitly distinguishes from the sibling 'get_sapi_v1_margin_exchange_small_liability_history' by focusing on the current eligible coins rather than historical data, though it does not explicitly name the sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (such as the history endpoint), nor are prerequisites mentioned beyond the implicit authentication required by the signature parameter. The description lacks 'when-not-to-use' or conditional usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_exchange_small_liability_historyCInspect
Get Small Liability Exchange History (USER_DATA) — Get Small liability Exchange History Weight(UID): 100 Returns: { total: number, rows: { asset: string, amount: string, targetAsset: string, targetAmount: string, bizType: string, timestamp: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
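Since the description embeds the `{ total, rows }` return shape in lieu of an output schema, a paging loop can be sketched directly from it. This reuses the hypothetical `signed_get` helper from the earlier cross-margin sketch; the REST path is inferred from the tool name.

```python
def iter_small_liability_history(start_time=None, end_time=None, size=100):
    """Yield rows across pages until `total` rows have been seen."""
    page, seen = 1, 0
    while True:
        params = {"current": page, "size": size}  # size: default 10, max 100
        if start_time is not None:
            params["startTime"] = start_time
        if end_time is not None:
            params["endTime"] = end_time
        data = signed_get("/sapi/v1/margin/exchange-small-liability-history",
                          params)
        rows = data.get("rows", [])
        for row in rows:
            yield row  # { asset, amount, targetAsset, targetAmount, ... }
        seen += len(rows)
        if not rows or seen >= data.get("total", 0):
            return
        page += 1
```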
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the JSON return structure inline (valuable since no output schema exists) and mentions the rate limit weight (100). However, it lacks details on pagination behavior, error scenarios, or what 'small liability' conversion actually entails (e.g., dust conversion to BNB).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs the return schema and rate limit metadata into a single sentence, but the structure is cluttered—mixing USER_DATA classification, weight metadata, and return types without clear separation. The JSON blob is useful but visually dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, embedding the return structure in the description is necessary and complete for that aspect. However, for a 7-parameter financial tool, the description lacks domain context about what constitutes a 'small liability' exchange or how pagination behaves with large historical datasets.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 7 parameters including pagination controls (current, size) and time ranges. The description adds no supplemental parameter context, but with complete schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'Small Liability Exchange History' with a clear GET verb and resource, but uses jargon ('Small Liability Exchange') without defining the business concept. It fails to distinguish from sibling tool 'get_sapi_v1_margin_exchange_small_liability' (likely the non-historical version), leaving ambiguity about whether this retrieves current state versus past records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives, prerequisites (beyond implicit USER_DATA), or rate limit implications. The 'Weight(UID): 100' mentions rate limit cost but doesn't explain what that means for caller strategy.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_force_liquidation_recBInspect
Get Force Liquidation Record (USER_DATA) — - Response in descending order Weight(IP): 1 Returns: { rows: { avgPrice: string, executedQty: string, orderId: number, price: string, qty: string, side: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates rate limiting ('Weight(IP): 1') and response ordering ('descending order'), and crucially provides a partial output schema showing the rows/total structure. However, it omits safety profile confirmation (read-only vs. destructive), authentication scope implications, and the business logic of when liquidation records are generated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly structured, cramming purpose, rate limit, ordering, and return schema into a single cluttered sentence with awkward punctuation ('— -'). While front-loaded with the action, the embedded JSON return type is difficult to parse and the formatting reduces readability. Every sentence earns its place, but the sentences are not well-formed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter tool with no output schema or annotations, the description compensates partially by documenting the return structure (rows array with fields like avgPrice, executedQty, etc.). However, it lacks domain context about liquidation events, error conditions (e.g., invalid time ranges), or the relationship between parameters (e.g., startTime/endTime filtering). It meets minimum viability but leaves significant gaps for a financial data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., explaining that isolatedSymbol filters to specific trading pairs, or that startTime/endTime define the liquidation event window). It implies pagination through the return structure but doesn't explicitly link the 'current' and 'size' parameters to the 'total' and 'rows' response fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies a specific verb ('Get') and resource ('Force Liquidation Record') and marks it as USER_DATA, indicating authenticated user-specific data retrieval. However, it fails to explain what constitutes a 'force liquidation' (e.g., involuntary position closure due to margin deficiency) or how this differs from general order or trade history available in sibling tools like get_sapi_v1_margin_all_orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus other margin history endpoints, nor does it mention prerequisites such as having an active margin account or isolated margin symbol requirements. The only usage hint is 'Response in descending order,' which describes behavior rather than selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_interest_historyAInspect
Get Interest History (USER_DATA) — - Response in descending order - If isolatedSymbol is not sent, crossed margin data will be returned - Set archived to true to query data from 6 months ago - type in response has 4 enums: - PERIODIC interest charged per hour - ON_BORROW first interest charged on borrow - PERIODIC_CONVERTED interest charged per hour converted into BNB - ON_BORROW_CONVERTED first interest charged on borrow converted into BNB Weight(IP): 1 Returns: { rows: { isolatedSymbol: string, asset: string, interest: string, interestAccuredTime: number, interestRate: string
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| archived | No | Default: false. Set to true for archived data from 6 months ago | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
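Because the four `type` enums are documented only in the description, a caller has to transcribe them; the sketch below, again using the hypothetical `signed_get` helper and a path inferred from the tool name, shows one way to label rows. Field names are taken from the partial Returns blob quoted above.

```python
# Enum meanings transcribed from the tool description.
INTEREST_TYPES = {
    "PERIODIC": "interest charged per hour",
    "ON_BORROW": "first interest charged on borrow",
    "PERIODIC_CONVERTED": "hourly interest, converted into BNB",
    "ON_BORROW_CONVERTED": "on-borrow interest, converted into BNB",
}

data = signed_get("/sapi/v1/margin/interestHistory",
                  {"asset": "BNB", "size": 100})
for row in data.get("rows", []):
    print(row["asset"], row["interest"],
          INTEREST_TYPES.get(row["type"], "unknown type"))
```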
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses response ordering (descending), rate limit weight (1), and crucially documents the four response enum types (PERIODIC, ON_BORROW, etc.) which is vital given no output schema exists. It partially documents the return structure but cuts off mid-definition and omits explicit read-only declaration or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the purpose but suffers from poor formatting with run-together bullet points using '—' and dashes. The 'Returns:' section is abruptly cut off mid-string. Information is present but structurally messy, mixing request parameters, response behavior, and partial schema without clear separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 10-parameter complexity and lack of output schema, the description adequately covers the response enum types and key parameter behaviors. However, the incomplete return documentation, lack of pagination guidance despite page/size parameters, and absence of error handling context leave gaps that prevent a higher score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds valuable semantic context beyond the schema: explaining that isolatedSymbol determines crossed vs isolated margin data, and that archived accesses 6-month-old data. It also clarifies the meaning of the 'type' field values in the response, adding significant value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Interest History (USER_DATA)' indicating it retrieves historical interest records for the authenticated user. The mention of specific interest types (PERIODIC, ON_BORROW, etc.) clarifies this tracks interest charges incurred. However, it could better distinguish from sibling get_sapi_v1_margin_interest_rate_history, which retrieves market rates rather than user charges.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance through parameter explanations, noting that omitting isolatedSymbol returns crossed margin data and that archived queries data older than 6 months. However, it lacks explicit when-to-use guidance, alternatives for querying current vs. historical rates, or comparison to the interest rate history endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_interest_rate_historyAInspect
Margin Interest Rate History (USER_DATA) — The max interval between startTime and endTime is 30 days. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) | |
| endTime | No | UTC timestamp in ms | |
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
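The 30-day window constraint is easy to violate silently, so a thin wrapper can enforce it client-side before spending the call. A minimal sketch, reusing the hypothetical `signed_get` helper with a path inferred from the tool name:

```python
from datetime import datetime, timedelta, timezone

MAX_WINDOW = timedelta(days=30)  # constraint stated in the description

def interest_rate_history(asset: str, start: datetime, end: datetime):
    """Reject windows the endpoint would refuse before spending the call."""
    if end - start > MAX_WINDOW:
        raise ValueError("startTime/endTime may span at most 30 days")
    return signed_get("/sapi/v1/margin/interestRateHistory", {
        "asset": asset,
        "startTime": int(start.timestamp() * 1000),
        "endTime": int(end.timestamp() * 1000),
    })

now = datetime.now(timezone.utc)
rates = interest_rate_history("BNB", now - timedelta(days=7), now)
```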
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting (Weight: 1) and the temporal query constraint, but omits safety classification (read-only vs destructive), authentication requirements beyond the USER_DATA label, and return format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with zero redundancy. The single sentence front-loads the purpose, followed by critical constraints. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter data retrieval tool with no output schema or annotations, the description covers the essential query constraint but lacks safety indicators (destructive/read-only hints) and return value documentation that would be necessary for complete confidence in invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds value by specifying the relationship between startTime and endTime parameters (30-day maximum interval), which is not documented in the individual parameter schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as 'Margin Interest Rate History' with an authentication hint '(USER_DATA)', providing specific verb and resource. However, it fails to distinguish from the sibling tool 'get_sapi_v1_margin_interest_history', leaving ambiguity about whether this retrieves rate percentages or accrued interest amounts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a concrete constraint (30-day max interval between startTime and endTime) and rate limit information ('Weight(IP): 1'), but offers no guidance on when to use this versus the similar interest history endpoint or other margin data tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_isolated_accountAInspect
Query Isolated Margin Account Info (USER_DATA) — - If "symbols" is not sent, all isolated assets will be returned. - If "symbols" is sent, only the isolated assets of the sent symbols will be returned. Weight(IP): 10 Returns: { assets: { baseAsset: { asset: string, borrowEnabled: boolean, borrowed: string, free: string, interest: string, locked: string, ... }, quoteAsset: { asset: string, borrowEnabled: boolean, borrowed: string, free: string, interest: string, locked: string, ... }, symbol: string, isolatedCreated: boolean, enabled: boolean, marginLevel: string, ... }[], totalAssetOfBtc: stri
| Name | Required | Description | Default |
|---|---|---|---|
| symbols | No | Max 5 symbols can be sent; separated by ',' | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
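The `symbols` filter (comma-separated, at most 5) and the all-assets default are the two behaviors the description documents. A sketch of both paths, reusing the hypothetical `signed_get` helper; the path is inferred from the tool name and the field names come from the partial Returns blob above:

```python
def isolated_account(symbols=None) -> dict:
    """Omit `symbols` for all isolated assets; pass up to 5 to filter."""
    params = {}
    if symbols:
        if len(symbols) > 5:
            raise ValueError("at most 5 symbols per request")
        params["symbols"] = ",".join(symbols)  # comma-separated per schema
    return signed_get("/sapi/v1/margin/isolated/account", params)

info = isolated_account(["BNBUSDT", "BTCUSDT"])
for entry in info["assets"]:
    print(entry["symbol"], entry["marginLevel"], entry["enabled"])
```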
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting (Weight(IP): 10), authentication type (USER_DATA), and begins documenting return structure. However, it lacks explicit safety disclosures (read-only nature is only implied by 'Query'), error handling, or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from formatting issues (awkward dash usage '— - If'), poor line breaks, and critical truncation in the Returns section ('stri' cutoff). The structure mixes purpose, parameters, weight, and returns without clear separation, and the truncated output documentation wastes space without delivering value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite being a financial data retrieval tool with complex nested return structures (baseAsset/quoteAsset), the description provides truncated return documentation. Without an output schema, the description fails to fully compensate, leaving critical return value semantics incomplete (cut off at totalAssetOfBtc).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable behavioral semantics by explaining the filtering consequences: 'If symbols is not sent, all isolated assets will be returned' versus filtering to specific symbols. This adds meaning beyond the schema's syntax documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Query Isolated Margin Account Info' with the specific resource (isolated margin) and action (query). The inclusion of 'Isolated' effectively distinguishes it from siblings like get_sapi_v1_margin_account (cross margin) and post/delete_sapi_v1_margin_isolated_account.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear filtering logic for the 'symbols' parameter (with vs without), but lacks explicit guidance on when to choose this tool over alternatives like cross-margin account queries or general spot account endpoints. The USER_DATA tag implies authentication requirements but prerequisites are not stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_isolated_account_limitBInspect
Query Enabled Isolated Margin Account Limit (USER_DATA) — Query enabled isolated margin account limit. Weight(IP): 1 Returns: { enabledAccount: number, maxAccount: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the rate limit weight ('Weight(IP): 1') and return structure ('{ enabledAccount: number, maxAccount: number }'). However, it fails to explicitly state the read-only/safe nature of the operation or to describe error scenarios; the USER_DATA classification only implies an authentication requirement.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the action and resource, but wastes space by repeating 'Query enabled isolated margin account limit' twice (once in title case as the name, once in sentence case as the description). The return value and weight information are efficiently appended, but the tautology prevents a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates by explicitly documenting the return format. It also identifies this as a USER_DATA endpoint (implying signed authentication) and provides rate limiting information. For a single-purpose query tool with 100% parameter coverage, this is adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 3 parameters (signature, timestamp, recvWindow). The description adds no additional parameter semantics, but since the schema is fully self-documenting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Query') and resource ('Enabled Isolated Margin Account Limit'), clearly distinguishing it from sibling tools like get_sapi_v1_margin_isolated_account by emphasizing 'Limit' (enabled/max counts) rather than account balances. However, it lacks explicit differentiation explaining when to use this versus the general account endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites (beyond the implicit USER_DATA tag), or conditions that would trigger errors. No explicit 'when-not-to-use' or alternative tools are mentioned despite having numerous sibling margin tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_isolated_all_pairsCInspect
[DISCOVERY] Get All Isolated Margin Symbol(USER_DATA) — Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 10') and hints at authentication needs via '(USER_DATA)', but fails to describe the response format, what data is returned about the symbols, or any error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (single line) and front-loaded with metadata tags ([DISCOVERY], Weight(IP)). While efficient, the structure prioritizes API implementation details over explanatory clarity, though no words are truly wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a USER_DATA endpoint accessing trading pair information, the description lacks critical context. No output schema exists, and the description fails to explain what isolated margin symbols are, what fields are returned, or how this data relates to account management.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 4 parameters. The description adds no additional parameter context, but the baseline score of 3 applies since the schema fully documents the fields including the constraint that recvWindow cannot exceed 60000.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'Isolated Margin Symbol' data, providing a verb (Get) and resource. However, the 'All' qualifier conflicts with the required 'symbol' parameter in the schema, creating ambiguity about whether this retrieves a list or specific symbol details. It does not clearly distinguish from sibling 'get_sapi_v1_margin_all_pairs'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '[DISCOVERY]' tag weakly implies usage for discovering available trading pairs, but there is no explicit guidance on when to use this versus 'get_sapi_v1_margin_all_pairs' or other margin tools, nor are prerequisites like authentication requirements stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_isolated_margin_dataBInspect
Query Isolated Margin Fee Data (USER_DATA) — Get isolated margin fee data collection with any vip level or user's current specific data as https://www.binance.com/en/margin-fee Weight(IP): 1 when a single is specified; 10 when the symbol parameter is omitted
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| vipLevel | No | Defaults to user's vip level | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates USER_DATA scope (authentication required) and discloses the IP weight costs for different call patterns. With no annotations provided, the description carries the full burden; it misses an explicit read-only safety confirmation and return-format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single run-on sentence crams the purpose, URL reference, and rate-limit logic together without breaks. Despite containing relevant information, the structure hinders quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, yet the description fails to characterize the returned data (fee tiers, rates, structures). For a data retrieval tool, it should describe what the data collection contains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the description adds valuable behavioral context: it explains that omitting 'symbol' increases the weight cost from 1 to 10, and clarifies that vipLevel defaults to the user's current level. This helps an agent understand parameter interaction effects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies a specific resource (Isolated Margin Fee Data) and action (Query), distinguishing it from cross-margin sibling tools. However, it is cluttered with a URL reference ('as https://...') that obscures clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides rate-limiting weights (1 vs 10) based on the presence of the symbol parameter, but offers no guidance on when to use this versus cross-margin data or other margin fee endpoints. No prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_isolated_margin_tierBInspect
Query Isolated Margin Tier Data (USER_DATA) — Get isolated margin tier data collection with any tier as https://www.binance.com/en/margin-data Weight(IP): 1
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | All margin tier data will be returned if tier is omitted | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the Weight(IP): 1 (rate limiting) and USER_DATA classification (indicating authenticated access), but fails to describe the return data structure, error behaviors, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief but suffers from a garbled first sentence where the URL appears to be misplaced. The PREREQUISITE label provides good structural organization, but the main description lacks clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, yet the description does not explain what data the tool returns (e.g., leverage brackets, maintenance margin ratios). It compensates partially with the prerequisite and weight information, but gaps remain for a financial data endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'with any tier' which loosely references the tier parameter, but adds no additional semantic context about parameter formats, valid ranges, or interdependencies beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb (Query/Get) and resource (Isolated Margin Tier Data) but contains garbled syntax ('with any tier as https://...') that obscures meaning. It does not differentiate from the similar sibling tool get_sapi_v1_margin_isolated_margin_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section provides useful workflow context (call exchange info first to get valid symbols), but the description lacks explicit guidance on when to use this tool versus similar margin data endpoints or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_leverage_bracketBInspect
Query Liability Coin Leverage Bracket in Cross Margin Pro Mode (MARKET_DATA) — Liability Coin Leverage Bracket in Cross Margin Pro Mode Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It provides rate limit weight 'Weight(IP): 1' which is valuable behavioral context. However, it lacks disclosure of authentication requirements, caching behavior, or what the returned leverage bracket structure contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description redundantly repeats 'Liability Coin Leverage Bracket in Cross Margin Pro Mode' twice with only 'Weight(IP): 1' added in the second clause. While front-loaded with the action, the repetition wastes tokens without adding meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists and no annotations are provided, the description should explain return values but doesn't. It adequately identifies the endpoint's domain but leaves gaps regarding response structure and authentication requirements for a complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% coverage (empty object). Per scoring rules, zero-parameter tools receive baseline score of 4. No additional parameter semantics are needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Query' and identifies the resource 'Liability Coin Leverage Bracket' with context 'Cross Margin Pro Mode'. However, it assumes familiarity with Binance-specific jargon ('Liability Coin') and doesn't differentiate from similar sibling tools like get_sapi_v1_margin_cross_margin_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternative margin data tools (e.g., isolated margin tier endpoints) or prerequisites. The mention of 'Cross Margin Pro Mode' provides implicit scoping but no explicit when-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_max_borrowableAInspect
Query Max Borrow (USER_DATA) — - If isolatedSymbol is not sent, crossed margin data will be sent. - borrowLimit is also available from https://www.binance.com/en/margin-fee Weight(IP): 50 Returns: { amount: string, borrowLimit: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
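The isolatedSymbol switch between crossed and isolated data is the one behavior an agent must get right. A sketch of both call shapes, reusing the hypothetical `signed_get` helper with a path inferred from the tool name:

```python
def max_borrowable(asset: str, isolated_symbol=None) -> dict:
    """Returns { amount, borrowLimit } per the description."""
    params = {"asset": asset}
    if isolated_symbol:
        params["isolatedSymbol"] = isolated_symbol
    return signed_get("/sapi/v1/margin/maxBorrowable", params)

cross = max_borrowable("USDT")                # crossed margin data
isolated = max_borrowable("USDT", "BNBUSDT")  # isolated-pair data
```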
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (50 IP), the return structure ({ amount: string, borrowLimit: string }), and the authentication scope (USER_DATA). It also explains the behavioral difference between isolated and crossed margin modes based on parameter presence.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the purpose, but the formatting is inconsistent (mixing em-dashes and hyphens for bullet points) and densely packed with rate limit and return value information, making it slightly harder to parse than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by explicitly defining the return values. It covers rate limiting, authentication requirements, and the key behavioral distinction between margin types, making it reasonably complete for a read-only query tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds crucial semantic context beyond the schema: it explains the crossed vs isolated margin data logic that occurs based on whether isolatedSymbol is provided, which is essential for correct usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Query Max Borrow' indicating it retrieves maximum borrowable amounts for margin trading. However, it lacks differentiation from similar sibling tools like get_sapi_v1_margin_max_transferable, leaving the agent to infer the specific use case from the name alone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific parameter guidance explaining that crossed margin data is returned when isolatedSymbol is omitted, and notes that borrowLimit data is also available via an alternative URL. However, it lacks high-level guidance on when to use this tool versus other margin query tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_max_transferableAInspect
Query Max Transfer-Out Amount (USER_DATA) — - If isolatedSymbol is not sent, crossed margin data will be sent. Weight(IP): 50 Returns: { amount: string, borrowLimit: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit ('Weight(IP): 50'), authentication type ('USER_DATA'), and return structure ('{ amount: string, borrowLimit: string }'), compensating for the missing output schema. It could explicitly state this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the primary action. Despite minor formatting awkwardness ('— -'), every clause serves a purpose: stating the operation, parameter logic, rate limits, and return format without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by documenting the return structure and rate limits within the text. For a 5-parameter query tool, it covers the necessary behavioral gaps, though an explicit safety declaration would improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% coverage providing baseline documentation, the description adds crucial semantic meaning by explaining the conditional logic for 'isolatedSymbol' (crossed vs isolated margin behavior), which is not inferable from the parameter name or schema description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queries the 'Max Transfer-Out Amount' for margin accounts, using a specific verb and resource. However, it does not distinguish from the sibling tool 'get_sapi_v1_margin_max_borrowable', which has a similar purpose but returns different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important usage context for the 'isolatedSymbol' parameter (defaulting to crossed margin when omitted), but lacks explicit guidance on when to use this tool versus alternatives like max_borrowable, or prerequisites for the USER_DATA endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_my_tradesAInspect
Query Margin Account's Trade List (USER_DATA) — - If fromId is set, it will get orders >= that fromId. Otherwise most recent trades are returned. Weight(IP): 10
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| fromId | No | Trade id to fetch from. Default gets most recent trades. | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
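The fromId pagination rule ('get orders >= that fromId') supports a forward walk over the full trade history. A sketch, assuming the endpoint returns a JSON array of trade objects with an `id` field (as the spot equivalent does), with the path inferred from the tool name and the hypothetical `signed_get` helper reused:

```python
def all_margin_trades(symbol: str, is_isolated: bool = False):
    """Walk trade history oldest-to-newest via fromId, 1000 per page."""
    params = {"symbol": symbol, "limit": 1000, "fromId": 0}
    if is_isolated:
        params["isIsolated"] = "TRUE"
    while True:
        trades = signed_get("/sapi/v1/margin/myTrades", params)
        if not trades:
            return
        yield from trades
        if len(trades) < params["limit"]:
            return  # short page: history exhausted
        params["fromId"] = trades[-1]["id"] + 1  # resume past last trade
```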
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 10') and pagination logic for fromId ('get orders >= that fromId. Otherwise most recent trades'). However, it omits safety classification (read-only vs. mutation), error handling behavior, and response structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The content is front-loaded with the primary purpose, but the formatting is awkward ('— - If') and the prerequisite section feels tacked on rather than integrated. Information density is high with minimal waste, but the structural flow is disrupted by the em-dash/hyphen combination.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with 100% schema coverage and no output schema, the description appropriately focuses on usage prerequisites and pagination logic rather than repeating parameter definitions. It adequately covers the complexity level, though explicit mention of authentication requirements (implied by USER_DATA) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by explaining the conditional behavior between fromId and default ordering ('Otherwise most recent trades are returned'), which clarifies parameter interaction not obvious from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Query') and resource ('Margin Account's Trade List'), distinguishing it from spot trading siblings like get_api_v3_my_trades. However, it fails to differentiate 'trades' from 'orders' (vs. get_sapi_v1_margin_all_orders), and the 'USER_DATA' tag assumes API-specific knowledge without explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite workflow ('First call exchange info...'), which helps with sequencing. However, lacks explicit 'when to use vs. alternatives' guidance (e.g., when to use fromId pagination vs. time-based filtering) and doesn't warn against using without valid symbol discovery.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
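The fromId pagination logic the description hints at can be sketched as follows. `fetch_my_trades` is a hypothetical wrapper around this tool, and the `id` field on each trade is assumed from Binance's spot myTrades response shape, since no output schema is provided.

```python
# Pagination sketch under the stated assumptions: a short page signals the end,
# and the next page starts at the last seen trade id plus one (the endpoint
# returns trades with id >= fromId).
def fetch_all_trades(fetch_my_trades, symbol: str, page_size: int = 1000) -> list[dict]:
    trades: list[dict] = []
    from_id = 0  # 0 starts from the oldest trade; omitting fromId returns the most recent
    while True:
        page = fetch_my_trades(symbol=symbol, fromId=from_id, limit=page_size)
        trades.extend(page)
        if len(page) < page_size:
            return trades
        from_id = page[-1]["id"] + 1
```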
get_sapi_v1_margin_next_hourly_interest_rate (C)
Get a future hourly interest rate (USER_DATA) — Get user the next hourly estimate interest Weight(UID): 100
| Name | Required | Description | Default |
|---|---|---|---|
| assets | No | List of assets, separated by commas, up to 20 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | for isolated margin or not, "TRUE", "FALSE" | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(UID): 100'), which is useful behavioral context. However, it fails to describe the return format, whether the estimate is guaranteed, or any error conditions specific to margin interest calculation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief, the description is poorly structured with repetitive phrasing ('Get... Get user') and mixes technical metadata (weight limit) with the functional description without clear separation. The em-dash usage creates a run-on effect rather than efficient information delivery.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters and no output schema or annotations, the description is insufficient. It omits what the response contains (e.g., interest rate per asset, timestamp of estimate), how the 'next hour' is calculated, and doesn't reference the isolated margin option mentioned in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 5 parameters (assets, isIsolated, timestamp, signature, recvWindow) adequately. The description itself adds no parameter-specific guidance, meeting the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it retrieves a 'future hourly interest rate' and 'next hourly estimate interest', which identifies the core function. However, it does not distinguish this tool from siblings like `get_sapi_v1_margin_interest_rate_history`, and the phrasing is fragmented with an awkward em-dash construction and repetitive 'Get... Get' structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, nor any prerequisites mentioned. The '(USER_DATA)' tag implies authentication requirements but does not explicitly state when this should be called (e.g., before borrowing, for planning purposes).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
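Since the schema caps the assets parameter at 20 comma-separated entries, a caller would flatten a list into a single string before invoking the tool; a minimal sketch:

```python
# Builds the documented comma-separated assets string, enforcing the
# "up to 20" limit from the schema.
def assets_param(assets: list[str]) -> str:
    if len(assets) > 20:
        raise ValueError("the endpoint accepts at most 20 assets per call")
    return ",".join(assets)

print(assets_param(["BTC", "ETH", "BNB"]))  # -> "BTC,ETH,BNB"
```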
get_sapi_v1_margin_open_order_list (B)
[DISCOVERY] Query Margin Account's Open OCO (USER_DATA) — Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Mandatory for isolated margin, not supported for cross margin | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit cost 'Weight(IP): 10' and the permission scope 'USER_DATA', but lacks details on pagination, error responses, or return structure. It does not clarify behavioral differences between isolated and cross margin queries beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence format is efficient and front-loaded. The '[DISCOVERY]' prefix appears to be metadata noise, but the core content (action, resource, permission, weight) is dense and well-structured without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and no output schema, the description provides minimum viable context by noting the rate limit weight and permission type. However, it lacks description of return values, error conditions, or pagination behavior that would be necessary for full operational understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for all 5 parameters (e.g., symbol requirements for isolated vs cross margin). The description adds no supplemental parameter guidance, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Query' and resource 'Margin Account's Open OCO', clearly distinguishing it from sibling tools like get_sapi_v1_margin_open_orders (general) and get_sapi_v1_margin_order_list. However, it assumes familiarity with 'OCO' (One Cancels Other) terminology without expanding the acronym.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus get_sapi_v1_margin_order_list or get_sapi_v1_margin_open_orders. It does not mention prerequisites like authentication requirements, though 'USER_DATA' and signature parameters imply it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_open_orders (B)
Query Margin Account's Open Orders (USER_DATA) — - If the symbol is not sent, orders for all symbols will be returned in an array. - When all symbols are returned, the number of requests counted against the rate limiter is equal to the number of symbols currently trading on the exchange - If isIsolated ="TRUE", symbol must be sent. Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limit weight (10) and special rate-limiting behavior when querying all symbols, which is crucial. However, it fails to explicitly declare the read-only/safe nature of the operation (implied by 'Query' but not stated).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
All information is relevant and necessary (rate limits, constraints), but the formatting is awkward with mixed em-dash and bullet styles. The dense technical constraints are appropriately included but could be structured more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter financial API tool with no output schema, the description covers the critical gotchas (rate limit costs, isolated margin constraints). However, it lacks description of the return structure or explicit safety classification that would be expected given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds valuable semantic information about parameter interactions: the symbol/isIsolated dependency ('If isIsolated="TRUE", symbol must be sent') and the behavior when symbol is omitted. This goes beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Query Margin Account's Open Orders' with specific verb and resource. It distinguishes from spot orders via the 'Margin' specifier and 'sapi' path in the name, though it could explicitly contrast with sibling tools like get_sapi_v1_margin_all_orders (historical vs open).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraints: symbol omission returns all symbols with rate-limit cost implications, and isolated margin requires symbol parameter. However, lacks explicit guidance on when to choose this over get_sapi_v1_margin_all_orders or spot equivalents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
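The two parameter interactions the description documents (isolated margin requiring a symbol, and the rate-limit multiplier when symbol is omitted) translate into a simple pre-flight check. This validator is illustrative, not part of the connector:

```python
# Pre-flight validation sketch for the documented constraints.
def validate_open_orders_args(symbol: str | None, is_isolated: str = "FALSE") -> None:
    if is_isolated == "TRUE" and symbol is None:
        raise ValueError('symbol must be sent when isIsolated="TRUE"')
    if symbol is None:
        # Querying all symbols costs one weighted request per trading symbol
        # on the exchange, so an agent should prefer passing a symbol.
        print("warning: omitting symbol multiplies the rate-limit cost")
```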
get_sapi_v1_margin_order (A)
Query Margin Account's Order (USER_DATA) — - Either orderId or origClientOrderId must be sent. - For some historical orders cummulativeQuoteQty will be < 0, meaning the data is not available at this time. Weight(IP): 10 Returns: { clientOrderId: string, cummulativeQuoteQty: string, executedQty: string, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| orderId | No | Order id | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 | |
| origClientOrderId | No | Order id from client |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description discloses Weight(IP): 10 for rate limiting, the USER_DATA classification implying auth requirements, and the data-availability quirk where cummulativeQuoteQty < 0 for historical orders. It also includes a partial return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is high with no wasted sentences, but formatting is inconsistent (mixing em-dashes and hyphens for bullets) and the PREREQUISITE section break disrupts flow slightly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by documenting return fields (clientOrderId, cummulativeQuoteQty, executedQty). Covers authentication context (USER_DATA), rate limits, and prerequisites necessary for a 7-parameter margin trading query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds the critical semantic constraint that either orderId or origClientOrderId must be sent, an at-least-one requirement the schema (which marks both optional) cannot express, elevating it above schema-only documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Query' with exact resource 'Margin Account's Order' and distinguishes from sibling list operations (e.g., get_sapi_v1_margin_all_orders) by specifying single-order lookup requiring orderId/origClientOrderId.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit PREREQUISITE to call exchange info first for valid symbols, and mandates that either orderId or origClientOrderId must be sent. Could be improved by explicitly contrasting with get_sapi_v1_margin_open_orders for active orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
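The at-least-one constraint on orderId and origClientOrderId, which the schema alone cannot express, can be enforced client-side; a hedged sketch (the helper name is hypothetical):

```python
# Enforces the description's rule that either orderId or origClientOrderId
# must be sent; the schema marks both optional.
def order_lookup_args(symbol: str,
                      order_id: int | None = None,
                      orig_client_order_id: str | None = None) -> dict:
    if order_id is None and orig_client_order_id is None:
        raise ValueError("either orderId or origClientOrderId must be sent")
    args: dict = {"symbol": symbol}
    if order_id is not None:
        args["orderId"] = order_id
    if orig_client_order_id is not None:
        args["origClientOrderId"] = orig_client_order_id
    return args
```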
get_sapi_v1_margin_order_list (A)
[DISCOVERY] Query Margin Account's OCO (USER_DATA) — Retrieves a specific OCO based on provided optional parameters - Either orderListId or origClientOrderId must be provided Weight(IP): 10 Returns: { orderListId: number, contingencyType: string, listStatusType: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Mandatory for isolated margin, not supported for cross margin | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 | |
| orderListId | No | Order list id | |
| origClientOrderId | No | Order id from client |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds valuable behavioral context including the rate limit 'Weight(IP): 10', the authentication scope '(USER_DATA)', and a preview of the return structure. However, it does not explicitly confirm read-only safety, error conditions, or side effects, leaving some behavioral ambiguity despite the 'Query' verb.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but efficiently packed. It front-loads the purpose, includes the critical constraint, rate limit, and return structure in a single sentence. The '[DISCOVERY]' tag and dense formatting slightly hinder readability, but there is minimal waste—every clause provides distinct technical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description compensates by providing a partial return structure ('{ orderListId: number, contingencyType: string, ... }'). It covers the 7 parameters adequately through the schema and adds the critical constraint about identifier requirements. It could benefit from explaining OCO-specific behavior or error states, but it covers the essentials for a query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds significant semantic value by stating the XOR relationship between orderListId and origClientOrderId ('Either... must be provided'), which is not expressed in the schema where both appear optional. This constraint is crucial for correct API usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Retrieves a specific OCO' (One-Cancels-Other order) from a Margin Account, using specific verbs and identifying the resource. However, it does not explicitly name sibling alternatives like get_sapi_v1_margin_all_order_list or get_sapi_v1_margin_open_order_list to clarify when to use this specific tool versus those.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a critical usage constraint: 'Either `orderListId` or `origClientOrderId` must be provided.' This clarifies the parameter logic that the schema alone doesn't capture (both are marked optional in the schema). It lacks explicit 'when not to use' guidance or named alternatives, but the constraint provided is essential for correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_price_index (A)
Query Margin PriceIndex (MARKET_DATA) — Weight(IP): 10 Returns: { calcTime: number, price: string, symbol: string }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limit weight ('Weight(IP): 10') and return structure ('Returns: { calcTime, price, symbol }'), but fails to mention authentication requirements, whether this is public/market data, or the semantic meaning of the price index (mark price vs index price).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently organized with technical metadata (weight, return type) front-loaded. The prerequisite is clearly separated. No extraneous sentences, though embedding the JSON return type inline slightly reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter market data query, the description is adequately complete. It compensates for the missing output schema by documenting the return format, and provides rate limiting context. Missing annotations are partially mitigated, though auth requirements would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the 'symbol' parameter fully documented in the schema (type, example, description). The description text does not mention the parameter, but with complete schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Query Margin PriceIndex' and identifies the resource type. Distinguishes from siblings by specifying 'Margin' PriceIndex, though it could better clarify how this differs from spot price endpoints like get_api_v3_ticker_price or what 'PriceIndex' specifically represents (e.g., index price for margin calculations).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite workflow: 'First call exchange info or symbol listing to discover valid trading pairs, then query market data.' This gives clear sequencing guidance. However, it does not explicitly state when NOT to use this or name specific alternatives for different price data needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
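The PREREQUISITE line prescribes a two-step flow: discover valid pairs, then query. A minimal sketch, assuming hypothetical wrappers `get_exchange_info` and `get_margin_price_index` for the corresponding tools and Binance's usual `symbols` array in the exchange-info response:

```python
# Two-step discovery-then-query flow from the PREREQUISITE note.
def price_index_for(get_exchange_info, get_margin_price_index, symbol: str) -> dict:
    listed = {s["symbol"] for s in get_exchange_info()["symbols"]}
    if symbol not in listed:
        raise ValueError(f"{symbol} is not a listed trading pair")
    return get_margin_price_index(symbol=symbol)  # -> { calcTime, price, symbol }
```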
get_sapi_v1_margin_rate_limit_order (A)
Query Current Margin Order Count Usage (TRADE) — Displays the user's current margin order count usage for all intervals. Weight(IP): 20
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | isolated symbol, mandatory for isolated margin | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| isIsolated | No | * `TRUE` - For isolated margin * `FALSE` - Default, not for isolated margin | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight 'Weight(IP): 20' and scope 'for all intervals'. The '(TRADE)' tag implies authentication requirements. However, it doesn't explicitly confirm this is read-only/safe, though 'Query' implies this.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three distinct information units: operation type, functional explanation, and rate limit weight. There is minor redundancy between 'Query Current Margin Order Count Usage' and 'Displays the user's current margin order count usage', preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and absence of an output schema, the description adequately covers the tool's purpose and critical behavioral constraint (IP weight). It appropriately omits return value documentation since no output schema exists, though it could explicitly state the read-only nature given lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 5 parameters including the conditional logic for 'symbol' being mandatory for isolated margin. The description adds no parameter-specific details, which per guidelines yields a baseline score of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Query' and clearly identifies the resource as 'Current Margin Order Count Usage'. The inclusion of 'Margin' specifically distinguishes this from the sibling spot endpoint 'get_api_v3_rate_limit_order', and the '(TRADE)' tag indicates the required permission level.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description clearly identifies this as a margin-specific endpoint (implying use for margin accounts), it lacks explicit guidance on when to choose this over the spot equivalent 'get_api_v3_rate_limit_order' or other alternatives. No prerequisites or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_margin_trade_coeff (B)
Get Summary of Margin account (USER_DATA) — Get personal margin level information Weight(IP): 10 Returns: { normalBar: string, marginCallBar: string, forceLiquidationBar: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Email Address | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden. It successfully documents the rate limit weight (Weight(IP): 10) and return structure (the three bar fields), but omits safety classification (read-only status), error behaviors, or authentication scope implications beyond the '(USER_DATA)' label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but structurally messy, redundantly using 'Get' twice ('Get Summary... — Get personal...') and mixing concerns (purpose, weight metadata, and return schema) in a single run-on string. Every clause carries information, but the presentation lacks clean separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description adequately compensates by specifying the return object structure. However, it lacks narrative context about liquidation thresholds or risk management workflows that would help an agent understand the significance of the margin level bars.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (email, signature, timestamp, recvWindow all documented), the baseline score applies. The description adds no supplemental parameter guidance (e.g., how to generate the signature or typical recvWindow values), but none is required given the complete schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool retrieves 'personal margin level information' and specifies unique return fields (normalBar, marginCallBar, forceLiquidationBar) that distinguish it from sibling margin account endpoints like get_sapi_v1_margin_account. However, it lacks explicit differentiation logic explaining when to prefer this over general margin account queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance, prerequisites, or alternative selection criteria are provided. The description does not indicate under what margin trading scenarios this tool should be invoked versus other margin-related endpoints in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
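On the recurring signature, timestamp, and recvWindow parameters: assuming this connector follows Binance's standard convention for SIGNED endpoints (HMAC-SHA256 over the urlencoded query string, keyed with the API secret), a caller would compute the signature as below. This is a sketch of that convention, not code taken from the connector.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, secret: str) -> dict:
    # Timestamp must be UTC milliseconds; the signature covers every other
    # parameter exactly as it will appear in the query string.
    signed = dict(params, timestamp=int(time.time() * 1000))
    query = urlencode(signed)
    signed["signature"] = hmac.new(
        secret.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return signed

print(sign_params({"recvWindow": 5000}, secret="example-secret"))
```

A recvWindow of 5000 ms is a commonly used default; the schemas here only cap it at 60000.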
get_sapi_v1_margin_transfer (A)
Get Cross Margin Transfer History (USER_DATA) — - Response in descending order - Returns data for last 7 days by default - Set archived to true to query data from 6 months ago Weight(IP): 1 Returns: { rows: { amount: string, asset: string, status: string, timestamp: number, txId: number, type: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| type | No | query parameter: type ("ROLL_IN" | "ROLL_OUT") | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| isolatedSymbol | No | Isolated symbol |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the response ordering (descending), the default time window (7 days), the rate limit weight (IP: 1), and the full return structure inline. It does not mention auth requirements or error cases, but it covers operational behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense and front-loaded with purpose. Every element serves a function (ordering, time range, rate limit, returns). Minor deduction for awkward dash-bullet formatting within a single string rather than structured sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a query tool: includes inline return structure compensating for lack of formal output schema, covers pagination defaults (though not explicitly documented in schema), and rate limiting. Minor gap regarding explicit authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions an 'archived' parameter that is absent from the provided schema, creating confusion. It adds context for time-range defaults (relating to startTime/endTime) but otherwise doesn't expand beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb 'Get' and resource 'Cross Margin Transfer History', clearly distinguishing from siblings like get_sapi_v1_margin_borrow_repay or get_sapi_v1_margin_account. USER_DATA tag appropriately signals authentication scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance (default 7 days vs archived for 6 months) and ordering behavior. Lacks explicit comparison to similar history tools like get_sapi_v1_margin_borrow_repay, but the time-range guidance effectively indicates when to adjust parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
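Given the documented return shape ({ rows, total }) and the current/size paging parameters, exhaustive retrieval is a simple loop; `fetch_transfers` is a hypothetical wrapper around this tool:

```python
# Page through transfer history until the accumulated rows reach the
# reported total; current is 1-based, size is capped at 100 by the schema.
def fetch_all_transfers(fetch_transfers, size: int = 100) -> list[dict]:
    rows: list[dict] = []
    current = 1
    while True:
        page = fetch_transfers(current=current, size=size)
        rows.extend(page["rows"])
        if len(rows) >= page["total"]:
            return rows
        current += 1
```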
get_sapi_v1_mining_hash_transfer_config_details_list (B)
[DISCOVERY] Hashrate Resale List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { configDetails: { configId: number, poolUsername: string, toPoolUsername: string, algoName: string, hashRate: number, startDay: number, ... }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| pageSize | No | Number of pages, minimum 10, maximum 200 | |
| pageIndex | No | Page number, default is first page, start form 1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and includes the rate limit 'Weight(IP): 5' and a detailed return structure showing pagination fields (totalNum, pageSize) and configDetails array. However, it lacks explicit disclosure of read-only safety, authentication requirements, or idempotency that annotations would typically provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is densely packed with metadata tags ([DISCOVERY], (USER_DATA)), rate limit info, and a JSON return structure. While efficiently condensed into a single string with no wasted words, the formatting is somewhat cryptic and could benefit from clearer separation between metadata and description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates by detailing the nested return structure including configDetails fields. However, with 5 parameters and complex mining domain logic, it could further clarify the relationship between hashrate resale configurations and the broader mining workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (pageSize limits, pageIndex starting at 1, recvWindow max). Since the schema fully documents parameter semantics, the description baseline is 3; it adds no supplemental parameter guidance (e.g., pagination strategy) but doesn't need to compensate for missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as a 'Hashrate Resale List' endpoint with the [DISCOVERY] tag, specifying it retrieves configuration details (evidenced by the returned configDetails array containing configId, poolUsername, etc.). It distinguishes from siblings like get_sapi_v1_mining_hash_transfer_profit_details by focusing on configuration rather than profit data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as get_sapi_v1_mining_hash_transfer_profit_details or post_sapi_v1_mining_hash_transfer_config. The (USER_DATA) tag implies user-specific querying but no explicit prerequisites or exclusion criteria are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_hash_transfer_profit_details (C)
Hashrate Resale Details (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { profitTransferDetails: { poolUsername: string, toPoolUsername: string, algoName: string, hashRate: number, day: number, amount: number, ... }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| configId | Yes | Mining ID | |
| pageSize | No | Number of pages, minimum 10, maximum 200 | |
| userName | Yes | Mining Account | |
| pageIndex | No | Page number, default is first page, start form 1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 5') and authentication class ('USER_DATA'), which is useful. It also embeds the complete return structure, hinting at the data shape. However, it lacks explicit safety information (read-only status, idempotency) that would help an agent understand this is a safe query operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the key concept, but it is densely packed into a single string mixing purpose metadata, rate limits, and return types. While not verbose, the lack of sentence breaks or structured sections reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by including the JSON return structure inline, which helps agents understand the response shape. However, it fails to explain domain concepts (e.g., what 'hashrate resale' means, what 'poolUsername' represents) or provide sample values that would aid in parameter construction.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description adds no additional context about parameter relationships (e.g., that configId identifies a specific resale configuration) or pagination behavior beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'Hashrate Resale Details' which suggests the domain (hashrate resale), but remains vague about what specific information is returned (profit/earnings data). It fails to distinguish from similar sibling tools like 'get_sapi_v1_mining_hash_transfer_config_details_list', leaving ambiguity about whether this retrieves configuration or profit data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are prerequisites mentioned beyond the implicit '(USER_DATA)' tag. The description does not clarify that this retrieves historical profit details as opposed to configuring transfers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_payment_list (C)
[DISCOVERY] Earnings List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { accountProfits: { time: number, type: number, hashTransfer: number, transferAmount: number, dayHashRate: number, profitAmount: number, ... }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| coin | No | Coin name | |
| endDate | No | Search date, millisecond timestamp, while empty query all | |
| pageSize | No | Number of pages, minimum 10, maximum 200 | |
| userName | Yes | Mining Account | |
| pageIndex | No | Page number, default is first page, start form 1 | |
| signature | Yes | Signature | |
| startDate | No | Search date, millisecond timestamp, while empty query all | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully notes the rate limit 'Weight(IP): 5' and describes the return structure (code, msg, data with accountProfits array). However, it omits pagination behavior details, error handling specifics, and cache characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Poorly structured as a single dense string mixing metadata tags ([DISCOVERY]), rate limits, and a JSON return blob. The '[DISCOVERY]' prefix appears to be internal scaffolding rather than helpful context, and the description is not front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters and no output schema, the inline documentation of the return structure (accountProfits array with fields like profitAmount, dayHashRate) provides valuable context. However, it lacks explanation of the mining domain concepts (hash transfer types, algorithm contexts) that would help an agent understand the data semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline score of 3. The description itself focuses entirely on the return structure rather than adding semantic context to parameters like 'algo' (which accepts 'sha256') or the date filtering logic between 'startDate' and 'endDate'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the tool as an 'Earnings List' endpoint for mining data and notes it requires 'USER_DATA' authentication. However, it fails to distinguish this tool from siblings like 'get_sapi_v1_mining_payment_other' or 'get_sapi_v1_mining_payment_uid', leaving ambiguity about which specific earnings/payments each retrieves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternative mining payment endpoints (other, uid). No prerequisites, conditions, or usage scenarios are described beyond the implicit authentication requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
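The startDate/endDate filters want millisecond timestamps, and leaving both empty queries all history; a small sketch of argument construction (the userName value is a placeholder):

```python
from datetime import datetime, timezone

def ms(dt: datetime) -> int:
    return int(dt.timestamp() * 1000)  # millisecond timestamp, per the schema

args = {
    "algo": "sha256",                 # the schema's documented algorithm
    "userName": "my-mining-account",  # hypothetical mining account name
    "startDate": ms(datetime(2024, 1, 1, tzinfo=timezone.utc)),
    "endDate": ms(datetime(2024, 1, 31, tzinfo=timezone.utc)),
}
print(args)
```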
get_sapi_v1_mining_payment_other (B)
Extra Bonus List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { otherProfits: { time: number, coinName: string, type: number, profitAmount: number, status: number }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| coin | No | Coin name | |
| endDate | No | Search date, millisecond timestamp, while empty query all | |
| pageSize | No | Number of pages, minimum 10, maximum 200 | |
| userName | Yes | Mining Account | |
| pageIndex | No | Page number, default is first page, start form 1 | |
| signature | Yes | Signature | |
| startDate | No | Search date, millisecond timestamp, while empty query all | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and provides the rate limit weight ('Weight(IP): 5') and detailed return schema structure. However, it omits explicit mention of authentication requirements, pagination behavior (though implied by pageIndex/pageSize), or data retention policies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with no filler text; front-loaded with resource type and rate limit. The inline JSON return schema is verbose but necessary given the lack of output_schema, making every element serve a functional purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a data retrieval endpoint with comprehensive schema coverage. The inline return schema compensates for the missing output_schema. However, it lacks domain context explaining what constitutes an 'extra bonus' versus standard mining payments.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 10 parameters including constraints like 'minimum 10, maximum 200'. The description adds no parameter-specific semantics beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'Extra Bonus List' and signals private account scope via '(USER_DATA)', distinguishing it from sibling payment endpoints (payment_list, payment_uid) by specifying 'Extra Bonus' content. However, it lacks an explicit action verb (e.g., 'Retrieves'), relying on the 'get_' prefix convention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like get_sapi_v1_mining_payment_list or get_sapi_v1_mining_payment_uid. No mention of prerequisites such as active mining accounts or valid date ranges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_payment_uid (B)
Mining Account Earning (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { accountProfits: { time: number, coinName: string, type: number, puid: number, subName: string, amount: number }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| endDate | No | Search date, millisecond timestamp, while empty query all | |
| pageSize | No | Number of pages, minimum 10, maximum 200 | |
| pageIndex | No | Page number, default is first page, start form 1 | |
| signature | Yes | Signature | |
| startDate | No | Search date, millisecond timestamp, while empty query all | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and successfully discloses the IP weight (5), authentication requirement (USER_DATA), and provides the complete return data structure including nested fields like accountProfits and puid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence combining metadata and a JSON return blob. While compact, the lack of formatting makes the return structure difficult to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the missing output schema by including the return structure inline, but lacks explanation of field semantics (e.g., what 'type' or 'puid' represent) and does not address the tool's relationship to sibling mining endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. The description adds no parameter-specific guidance beyond what the schema already documents for the 8 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool retrieves 'Mining Account Earning' data, but fails to clarify what 'uid' in the tool name signifies or how this differs from sibling tools like get_sapi_v1_mining_payment_list or get_sapi_v1_mining_payment_other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternative mining payment endpoints, prerequisites for the USER_DATA permission, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_pub_algo_listBInspect
[DISCOVERY] Acquiring Algorithm (MARKET_DATA) — Weight(IP): 1 Returns: { code: number, msg: string, data: { algoName: string, algoId: number, poolIndex: number, unit: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
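As a concrete reading of the [DISCOVERY] tag, here is a sketch of the implied workflow: list the supported algorithms, then feed algoName into the signed mining tools that take an algo argument. The path is inferred from the tool name, and the assumption that MARKET_DATA calls need only the API-key header follows the usual Binance convention.

```python
# Sketch of the discovery flow: enumerate algorithms, then reuse algoName
# in subsequent mining calls. Path inferred from the tool name.
import requests

resp = requests.get(
    "https://api.binance.com/sapi/v1/mining/pub/algoList",
    headers={"X-MBX-APIKEY": "your-api-key"},  # assumed: key only, no signature
)
resp.raise_for_status()
for algo in resp.json()["data"]:
    # Each entry: { algoName, algoId, poolIndex, unit }
    print(algo["algoId"], algo["algoName"], algo["unit"])
```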
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It provides the rate limit weight ('Weight(IP): 1') and documents the complete JSON response structure inline. However, it omits authentication requirements, leaving unclear whether the endpoint is public or requires an API key.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence combining tags, weight info, and inline JSON. While not wasteful, the JSON blob reduces readability; front-loading the category and weight is good, but a structured format would improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Since no output schema exists, the inline documentation of the return structure (code, msg, data array) provides essential contractual information. However, for a complete picture, it should specify authentication requirements and whether pagination is supported.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which establishes a baseline of 4. No additional parameter documentation is required or present.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource as 'Algorithm' data with MARKET_DATA categorization, and the inline return structure clarifies it returns algorithm metadata (algoName, algoId, poolIndex). It distinguishes from siblings like get_sapi_v1_mining_pub_coin_list by focusing on algorithms rather than coins.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '[DISCOVERY]' tag provides minimal context about when to use the tool, but there is no explicit guidance on prerequisites, alternatives, or when NOT to use it versus other mining endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_pub_coin_listBInspect
[DISCOVERY] Acquiring CoinName (MARKET_DATA) — Weight(IP): 1 Returns: { code: number, msg: string, data: { coinName: string, coinId: number, poolIndex: number, algoId: number, algoName: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the IP rate limit weight (1) and the complete return structure inline, which is valuable. However, it lacks critical behavioral context: no mention of authentication requirements (public vs private), pagination behavior, caching, or error conditions. The '[DISCOVERY]' tag hints at usage but isn't explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact (single sentence/fragment) but informationally dense and poorly structured. It mixes metadata tags ([DISCOVERY]), implementation details (Weight(IP)), and return schemas into a run-on string without clear separation, reducing readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless endpoint, the description compensates for the lack of output schema by inlining the return structure, which is helpful. However, it lacks domain context (e.g., what defines a 'mining coin', expected list size, static vs dynamic data) and doesn't reference the mining pool context implied by the URL path.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters (100% coverage trivially). Per the baseline rule for zero-parameter tools, this scores a 4. The description appropriately does not discuss parameters since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource (CoinName) and domain (MARKET_DATA/DISCOVERY) but uses the vague verb 'Acquiring' instead of specific actions like 'List' or 'Retrieve'. While it implies a list operation through the plural return type and lack of parameters, it doesn't explicitly state the scope or distinguish clearly from sibling 'get_sapi_v1_mining_pub_algo_list' except by resource name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_sapi_v1_mining_pub_algo_list' or other mining endpoints. No prerequisites, filtering limitations, or workflow context (e.g., 'use this to get coin IDs before calling X') are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_statistics_user_listCInspect
[DISCOVERY] Account List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { type: string, userName: string, list: { time: number, hashrate: string, reject: string }[] }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| userName | Yes | Mining Account | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden but only provides 'Weight(IP): 5' (rate limiting) and 'USER_DATA' (authentication hint). It fails to disclose whether the call is read-only (implied by the 'get' prefix but never stated), how the returned list is paginated, or what data retention limits apply. The inline return schema is helpful but belongs in an output_schema field.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured: the '[DISCOVERY]' tag is metadata noise, the return JSON is inline and unreadable, and critical information (USER_DATA, Weight) is crammed together without formatting. It wastes space on structural brackets while omitting explanatory prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mining statistics tool with 5 parameters and complex domain concepts (hashrate, rejection rates, algorithms), the description is inadequate. It provides no output schema reference (though it inlines return structure), no error scenarios, no time range constraints, and no link to Binance's mining account requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are adequately documented in the schema itself. The description adds no semantic context beyond the schema (e.g., no explanation of what 'recvWindow' does in the mining context or valid 'algo' values), warranting the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Account List', which conflicts with the tool name's emphasis on mining statistics. While it returns account-related data with hashrate/reject metrics, the description fails to specify the verb (retrieve/get) or clarify that this covers mining statistics history. It does not distinguish from sibling tools like get_sapi_v1_mining_statistics_user_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus sibling mining tools (e.g., user_status vs user_list), no mention of required permissions beyond the cryptic 'USER_DATA' tag, and no explanation of the algo parameter's valid values or when to query specific time windows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_statistics_user_statusCInspect
Statistic List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { fifteenMinHashRate: string, dayHashRate: string, validNum: number, invalidNum: number, profitToday: { BTC: string, BSV: string, BCH: string }, profitYesterday: { BTC: string, BSV: string, BCH: string }, ... } }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| userName | Yes | Mining Account | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the Weight(IP) rate limit (5) and provides the complete return structure including nested profit objects, which is valuable behavioral context. However, it lacks explicit statements about safety (read-only), idempotency, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded, consisting of a single line with the category tag, weight information, and return payload. While information-dense, the raw JSON dump format reduces readability without additional explanatory structure or line breaks.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by detailing the return structure with specific fields (fifteenMinHashRate, profitToday, etc.). However, it remains incomplete due to missing usage guidelines, sibling differentiation, and explicit behavioral traits (e.g., confirming read-only nature).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, adequately documenting all five parameters (algo, userName, signature, timestamp, recvWindow). The description itself adds no parameter-specific semantics or usage examples beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Statistic List (USER_DATA)' indicating it retrieves mining statistics, and the embedded return structure clarifies specific metrics (hash rates, profits). However, the phrasing 'Statistic List' is ambiguous regarding whether it returns a single status object or a list, and it fails to distinguish from siblings like get_sapi_v1_mining_statistics_user_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternative mining endpoints such as get_sapi_v1_mining_statistics_user_list or get_sapi_v1_mining_worker_list. It omits prerequisites, usage scenarios, or filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_worker_detailCInspect
Request for Detail Miner List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { workerName: string, type: string, hashrateDatas: { time: number, hashrate: string, reject: number }[] }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| userName | Yes | Mining Account | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| workerName | Yes | Miner’s name |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It successfully communicates the rate limit 'Weight(IP): 5' and the required permission scope '(USER_DATA)', and usefully embeds the return structure JSON inline (compensating for the missing output schema). However, it lacks safety profile disclosure (read-only vs. mutation), error conditions, or data freshness guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense sentence that front-loads the action but buries the return structure as inline JSON. While compact, the lack of formatting makes it harder to parse the return shape, and the 'Weight(IP)' metadata interrupts the flow between purpose and return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter mining endpoint with no output schema, the description partially compensates by documenting the return structure inline. However, it omits context about what hashrateDatas represents (time-series intervals), the authentication mechanism implied by USER_DATA, and whether the workerName parameter supports wildcards or exact matching only.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately met. The description does not add semantic clarification beyond the schema (e.g., explaining the relationship between algo and workerName, or signature generation), but it does not need to given the comprehensive property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it requests 'Detail Miner List' which identifies the domain (mining workers), but the phrasing is ambiguous—it is unclear whether this returns details for a single worker (given the required workerName parameter) or a list of workers. The term 'List' contradicts the singular 'detail' in the tool name, creating confusion about the return cardinality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like get_sapi_v1_mining_worker_list or get_sapi_v1_mining_statistics_user_status. There is no mention of prerequisites, filtering capabilities, or data granularity differences between this and other mining endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_mining_worker_listBInspect
[DISCOVERY] Request for Miner List (USER_DATA) — Weight(IP): 5 Returns: { code: number, msg: string, data: { workerDatas: { workerId: string, workerName: string, status: number, hashRate: number, dayHashRate: number, rejectRate: number, ... }[], totalNum: number, pageSize: number } }.
| Name | Required | Description | Default |
|---|---|---|---|
| algo | Yes | Algorithm(sha256) | |
| sort | No | Sort order (default 0): 0 ascending (positive sequence), 1 descending (negative sequence) | |
| userName | Yes | Mining Account | |
| pageIndex | No | Page number, default is first page, starting from 1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| sortColumn | No | Sort by (default 1): 1 miner name, 2 real-time computing power, 3 daily average computing power, 4 real-time rejection rate, 5 last submission time | |
| workerStatus | No | Miner status (default 0): 0 all, 1 valid, 2 invalid, 3 failure |
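The enum-coded sort and status parameters are easy to misorder, so a sketch that names the documented integer codes may help. It reuses the hypothetical signed_get helper from the earlier signing sketch; the path is inferred from the tool name and userName is a placeholder.

```python
# Illustrative labels for the integer codes documented in the table above.
SORT_COLUMN = {"miner_name": 1, "realtime_hashrate": 2,
               "daily_avg_hashrate": 3, "realtime_reject_rate": 4,
               "last_submission": 5}
WORKER_STATUS = {"all": 0, "valid": 1, "invalid": 2, "failure": 3}

workers = signed_get("/sapi/v1/mining/worker/list", {
    "algo": "sha256",
    "userName": "my_mining_account",                 # placeholder account
    "sortColumn": SORT_COLUMN["realtime_hashrate"],
    "sort": 1,                                       # 1 = descending
    "workerStatus": WORKER_STATUS["valid"],
    "pageIndex": 1,
})
print(workers["data"]["totalNum"], "workers matched")
```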
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant behavioral weight by specifying the rate limit 'Weight(IP): 5' and detailing the complete return structure including nested fields (workerId, hashRate, rejectRate, etc.). This disclosure of rate limits and response shape is valuable for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is packed into one dense string mixing metadata tags ([DISCOVERY]), weight info, and a JSON-like return structure. While not excessively verbose, the absence of sentence structure and the placement of technical metadata before the human-readable purpose hinder readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking a formal output schema, the description compensates by embedding the full response structure showing pagination fields (totalNum, pageSize) and worker data fields. For a 9-parameter query tool, it adequately covers what the agent needs to know about return values and authentication requirements (USER_DATA).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'Mining Account' for userName, 'Algorithm(sha256)' for algo). The description adds no parameter-specific semantics beyond the schema, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a 'Request for Miner List' with 'USER_DATA' scope, specifying it retrieves mining worker information. However, it reads like raw API documentation with technical tags ([DISCOVERY], Weight(IP)) rather than agent-friendly language, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this list endpoint versus sibling alternatives like get_sapi_v1_mining_worker_detail (for specific workers) or get_sapi_v1_mining_statistics_user_list. The description assumes the user already knows which endpoint fits their use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_nft_history_depositAInspect
Get NFT Deposit History(USER_DATA) — - The max interval between startTime and endTime is 90 days. - If startTime and endTime are not sent, the recent 7 days' data will be returned. Weight(UID): 3000 Returns: { total: number, list: { network: string, txID: number, contractAdrress: string, tokenId: string, timestamp: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 50, Max 50 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
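Since the 90-day interval cap is the most likely cause of a failed first call, here is a sketch of constructing the widest valid window (timestamps in milliseconds, per the schema), again using the hypothetical signed_get helper from the earlier signing sketch.

```python
# Sketch: build a startTime/endTime pair that respects the documented
# 90-day maximum interval for the NFT history tools.
import time

DAY_MS = 24 * 60 * 60 * 1000
end_time = int(time.time() * 1000)
start_time = end_time - 90 * DAY_MS   # widest interval the endpoint accepts

deposits = signed_get("/sapi/v1/nft/history/deposit",
                      {"startTime": start_time, "endTime": end_time,
                       "page": 1, "limit": 50})
print(deposits["total"], "NFT deposits in the last 90 days")
```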
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description successfully carries the behavioral disclosure burden by including rate limit information ('Weight(UID): 3000') and documenting the complete return structure with nested object types. This compensates for the missing output schema. However, it does not explicitly confirm the read-only nature of the operation or describe potential error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs essential information including purpose, time constraints, rate limits, and return types without excessive verbosity. However, the formatting is suboptimal with inconsistent dashes ('— -') and dense inline JSON for the return structure that reduces readability. Every sentence earns its place, but the presentation could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of output schema and annotations, the description adequately compensates by documenting the return structure and including rate limit weight information. The 90-day constraint and pagination defaults provide sufficient operational context. For a historical data retrieval tool with complete input schema coverage, this provides adequate contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage establishing baseline understanding, the description adds crucial semantic constraints regarding the 90-day maximum interval and 7-day default behavior for time parameters. These constraints provide essential context for valid usage that raw parameter descriptions ('UTC timestamp in ms') do not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Get NFT Deposit History(USER_DATA)', providing a specific verb (Get) and resource (NFT Deposit History). The inclusion of 'Deposit' clearly distinguishes this from sibling withdrawal/transaction tools like get_sapi_v1_nft_history_withdraw. However, it lacks explicit comparative guidance distinguishing deposits from withdrawals or transactions in the description text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete operational constraints: the 90-day maximum interval between startTime and endTime, and the 7-day default when times are omitted. These constraints help agents construct valid queries. However, it fails to explicitly state when to use this tool versus alternatives like get_sapi_v1_nft_history_withdraw or get_sapi_v1_nft_history_transactions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_nft_history_transactionsAInspect
Get NFT Transaction History (USER_DATA) — - The max interval between startTime and endTime is 90 days. - If startTime and endTime are not sent, the recent 7 days' data will be returned. Weight(UID): 3000 Returns: { total: number, list: { orderNo: string, tokens: { network: string, tokenId: string, contractAddress: string }[], tradeTime: number, tradeAmount: string, tradeCurrency: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 50, Max 50 | |
| endTime | No | UTC timestamp in ms | |
| orderType | Yes | 0: purchase order, 1: sell order, 2: royalty income, 3: primary market order, 4: mint fee | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
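Because orderType is both required and numeric, a sketch that labels the documented codes before querying can prevent silent misuse. The names below are illustrative labels for the table's integer values, and the call reuses the hypothetical signed_get helper.

```python
# Illustrative names for the required orderType codes from the table above.
ORDER_TYPE = {"purchase": 0, "sell": 1, "royalty_income": 2,
              "primary_market": 3, "mint_fee": 4}

sales = signed_get("/sapi/v1/nft/history/transactions",
                   {"orderType": ORDER_TYPE["sell"], "limit": 50})
for order in sales["list"]:
    print(order["orderNo"], order["tradeAmount"], order["tradeCurrency"])
```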
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It documents the API weight cost (3000), time range constraints, default behaviors, and provides a complete JSON structure of the return value, compensating for the lack of structured output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
All content is relevant (constraints, weight, returns), but the formatting is suboptimal with awkward punctuation ('— -') and a dense inline JSON blob for return values that hurts readability. Information density is high but presentation could be improved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by detailing the return structure (total, list items with token details), rate limits, and query constraints. It provides sufficient context for an agent to understand the tool's cost and output shape.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds value by specifying the 90-day interval constraint between startTime and endTime, but does not elaborate on parameter formats beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Get NFT Transaction History') and includes the '(USER_DATA)' scope tag, clearly distinguishing it from sibling tools like get_sapi_v1_nft_history_deposit and get_sapi_v1_nft_history_withdraw. However, it does not explicitly clarify the distinction between 'transactions' (trading activity) versus transfers/deposits in the prose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance through the orderType parameter values (purchase orders, sell orders, royalties) and explicit time window constraints (90-day max interval, 7-day default). However, it lacks explicit 'when to use' guidance comparing this tool against sibling NFT history endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_nft_history_withdrawAInspect
Get NFT Withdraw History (USER_DATA) — - The max interval between startTime and endTime is 90 days. - If startTime and endTime are not sent, the recent 7 days' data will be returned. Weight(UID): 3000 Returns: { total: number, list: { network: string, txID: string, contractAdrress: string, tokenId: string, timestamp: number, fee: number, ... }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 50, Max 50 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, disclosing rate limit costs, the exact return schema structure (total/list with fields), and default behaviors. It implies read-only access via 'Get' and authentication requirements via 'USER_DATA' and signature parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is high with no wasted words, packing rate limits, constraints, and return types efficiently. Minor formatting awkwardness (em-dash followed by hyphens) slightly impedes readability but doesn't obscure meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description comprehensively documents the return structure. It also covers rate limiting and temporal constraints, providing complete context for a historical data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% coverage, the description adds crucial semantic information about parameter interactions: the 90-day constraint between startTime/endTime and the 7-day default behavior when omitted, which individual parameter descriptions cannot express.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('Get'), resource ('NFT Withdraw History'), and permission context ('USER_DATA'), explicitly distinguishing this tool from siblings like get_sapi_v1_nft_history_deposit and get_sapi_v1_nft_history_transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides specific temporal constraints (90-day max interval, 7-day default window) and rate limit information ('Weight(UID): 3000'), though it lacks explicit guidance on when to prefer this over deposit or transaction history endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_nft_user_get_assetBInspect
Get NFT Asset (USER_DATA) — Weight(UID): 3000 Returns: { total: number, list: { network: string, contractAddress: string, tokenId: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 50, Max 50 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
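With the page size hard-capped at 50, retrieving a full inventory means paging until the documented total is reached. A sketch under that assumption, using the hypothetical signed_get helper and a path inferred from the tool name:

```python
def all_nft_assets() -> list:
    """Page through every NFT asset, 50 per request (the documented max)."""
    assets, page = [], 1
    while True:
        batch = signed_get("/sapi/v1/nft/user/getAsset",
                           {"page": page, "limit": 50})
        assets.extend(batch["list"])
        if not batch["list"] or len(assets) >= batch["total"]:
            break
        page += 1
    return assets
```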
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It provides the rate limit weight 'Weight(UID): 3000' and the return structure, but omits safety traits (read-only, idempotent), authentication requirements, and error behaviors that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense and front-loaded single sentence containing action, scope tag, rate limit weight, and complete return structure. Zero waste; every element serves selection or invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Since no output schema exists, the description compensates by fully documenting the return type ({ total, list... }). It covers pagination inputs via the schema. Minor gap: lacks explicit authentication/authorization context despite 'signature' being a required parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting defaults and constraints (e.g., 'Default 50, Max 50'). The description adds no additional parameter semantics beyond the schema, meeting the baseline expectation for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and resource 'NFT Asset' plus the '(USER_DATA)' scope tag. It distinguishes from sibling history endpoints (e.g., get_sapi_v1_nft_history_deposit) by specifying the return structure contains current asset details (network, contractAddress, tokenId) rather than transaction history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus the NFT history endpoints or other asset retrieval tools. While the return schema implies this retrieves current holdings (not history), the description lacks explicit 'use this for...' or 'do not use when...' statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_pay_transactionsAInspect
Get Pay Trade History (USER_DATA) — - If startTime and endTime are not sent, the recent 90 days' data will be returned. - The max interval between startTime and endTime is 90 days. - Support for querying orders within the last 18 months. Weight(UID): 3000 Returns: { code: string, message: string, data: { orderType: string, transactionId: string, transactionTime: number, amount: string, currency: string, walletType: number, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | default 100, max 100 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
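Covering the full 18-month queryable history therefore requires walking backwards in 90-day chunks. A sketch of that loop, approximating a month as 30 days and reusing the hypothetical signed_get helper:

```python
# Sketch: chunk the ~18-month retention window into 90-day queries.
import time

DAY_MS = 24 * 60 * 60 * 1000
window_end = int(time.time() * 1000)
oldest = window_end - 18 * 30 * DAY_MS   # ~18 months, per the stated retention

history = []
while window_end > oldest:
    window_start = max(window_end - 90 * DAY_MS, oldest)
    page = signed_get("/sapi/v1/pay/transactions",
                      {"startTime": window_start, "endTime": window_end,
                       "limit": 100})
    history.extend(page.get("data", []))
    window_end = window_start
print(len(history), "Binance Pay transactions")
```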
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(UID): 3000') and provides an inline return structure schema, but fails to explicitly confirm this is a safe read-only operation or describe pagination behavior beyond the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and uses dashes to separate constraint bullet points, but the inline JSON return structure at the end is dense and could be better formatted. All content is relevant with minimal redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately compensates by including rate limit weights and documenting the nested return structure (code, message, data array with transaction fields). It covers the essential behavioral constraints for a historical data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage documenting individual parameters, the description adds critical semantic context about how startTime and endTime interact (default 90-day lookback when omitted, strict 90-day maximum interval between them) that the structured schema cannot express.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Pay Trade History (USER_DATA)' with a specific verb and resource type. The 'Pay' designation distinguishes it from sibling tools like get_sapi_v1_fiat_payments or get_sapi_v1_margin_all_orders, though it could explicitly clarify this refers to Binance Pay P2P transactions for absolute clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides specific temporal constraints (90-day default window, 90-day max interval, 18-month history limit) that guide usage, but lacks explicit guidance on when to choose this over similar transaction history tools like get_sapi_v1_fiat_orders or get_sapi_v1_c2c_order_match_list_user_order_history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_accountBInspect
Portfolio Margin Account (USER_DATA) — Get the account info 'Weight(IP): 1' Returns: { uniMMR: string, accountEquity: string, actualEquity: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Includes 'Weight(IP): 1' indicating rate limiting costs, and 'USER_DATA' tag implying authentication requirements. Lists sample return fields. However, lacks explicit read-only safety confirmation, cache behavior, or permission requirements beyond the cryptic USER_DATA label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence packs resource type, auth category, rate limit, and return structure. Slightly awkward punctuation (quotes around Weight(IP)) but efficiently front-loaded with key metadata. No redundant filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking output schema, description compensates by listing representative return fields (uniMMR, accountEquity). Specifies 'Portfolio Margin' clearly among 100+ sibling tools. Missing only usage context for when to prefer this over standard margin or spot account endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (signature, timestamp, recvWindow all documented). Description adds no parameter-specific guidance, which is acceptable given the schema completeness. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the resource as 'Portfolio Margin Account' and the action as 'Get the account info'. Distinguishes from siblings (get_api_v3_account, get_sapi_v1_margin_account) by specifying 'Portfolio Margin'. Lists specific return fields (uniMMR, accountEquity, actualEquity) clarifying what 'account info' encompasses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the many sibling account endpoints (get_api_v3_account, get_sapi_v1_margin_account, get_sapi_v1_isolated_account, etc.). Does not mention prerequisites or that this requires Portfolio Margin account activation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_asset_index_priceAInspect
Query Portfolio Margin Asset Index Price (MARKET_DATA) — Query Portfolio Margin Asset Index Price Weight(IP): - 1 if send asset - 50 if not send asset
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | query parameter: asset (string) |
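The 50x weight difference makes the optional asset parameter the main cost lever here. A sketch of both call shapes, assuming the path maps from the tool name and that MARKET_DATA calls send only the API-key header (a placeholder below):

```python
# Sketch: the targeted query costs IP weight 1; the unfiltered one costs 50.
import requests

URL = "https://api.binance.com/sapi/v1/portfolio/asset-index-price"
HEADERS = {"X-MBX-APIKEY": "your-api-key"}  # placeholder key

# Targeted query (weight 1) -- prefer this when the asset is known.
btc_index = requests.get(URL, params={"asset": "BTC"}, headers=HEADERS).json()

# Unfiltered query (weight 50) -- only when every index price is needed.
all_indexes = requests.get(URL, headers=HEADERS).json()
```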
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses critical operational traits: the MARKET_DATA classification implies read-only behavior, and the specific IP weight limits (1 vs 50) disclose rate limiting costs. It does not mention caching or error behaviors, but covers the essential operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but suffers from redundant phrasing ('Query Portfolio Margin Asset Index Price' appears essentially twice). The em-dash structure is acceptable, but the repetition prevents a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter query tool without an output schema, the description adequately covers the core functionality and operational constraints. However, it could improve by briefly describing the expected return value (e.g., price data structure) to compensate for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% coverage with basic descriptions, the description adds significant value by explaining the cost implications of using the asset parameter (IP weight 1 with asset vs 50 without), which is crucial for agent decision-making beyond the schema's type information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the specific resource (Portfolio Margin Asset Index Price) and action (Query), including the MARKET_DATA category. However, it repeats the phrase 'Query Portfolio Margin Asset Index Price' twice, creating minor redundancy without adding sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance through IP weight costs ('- 1 if send asset - 50 if not send asset'), helping agents decide when to include the optional asset parameter. However, it lacks explicit when-to-use scenarios or comparisons to sibling portfolio tools like get_sapi_v1_portfolio_collateral_rate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_collateral_rateBInspect
Portfolio Margin Collateral Rate (MARKET_DATA) — Portfolio Margin Collateral Rate. Weight(IP): 50
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable rate-limiting context ('Weight(IP): 50') and categorization ('MARKET_DATA'), but lacks disclosure of return format, authentication requirements, or whether this represents real-time vs cached data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one sentence), but wastes space with repetition ('Portfolio Margin Collateral Rate...Portfolio Margin Collateral Rate'). The rate limit information is appropriately placed at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally characterize the returned collateral rate data. It provides the IP weight which is useful operational context, but leaves the actual response structure and semantics undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to the scoring baseline, this defaults to a score of 4. The description correctly implies no arguments are needed by not mentioning any.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource (Portfolio Margin Collateral Rate) but tautologically restates the tool name after the em-dash. It fails to distinguish this v1 endpoint from the sibling v2 endpoint (get_sapi_v2_portfolio_collateral_rate) or clarify its specific scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, particularly the v2 version. There are no prerequisites, conditions, or exclusions mentioned that would help an agent select this endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_interest_historyBInspect
Query Classic Portfolio Margin Negative Balance Interest History (USER_DATA) — Query interest history of negative balance for portfolio margin. Weight(IP): 50
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default: 10, Max: 100 | |
| asset | Yes | query parameter: asset (string) | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the rate limit weight (Weight(IP): 50) and authentication type (USER_DATA), but fails to mention safety profile (read-only vs destructive), idempotency, or return data structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the key operation but contains redundancy: 'Query Classic Portfolio Margin Negative Balance Interest History' followed by 'Query interest history of negative balance for portfolio margin.' The weight information is appended efficiently, but the repetition prevents a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter data retrieval tool with no output schema, the description adequately identifies the data domain but fails to describe the return value structure, pagination behavior, or time range constraints implied by startTime/endTime parameters. Minimum viable but notable gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description provides conceptual context that parameters filter 'negative balance' history for a specific 'asset', but does not add syntax details, validation rules, or format examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Query), resource (Interest History), and specific scope (Classic Portfolio Margin Negative Balance). It distinguishes this from regular margin interest history via the 'Classic Portfolio Margin' specifier, though it could more explicitly contrast with sibling tools like get_sapi_v1_margin_interest_history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like get_sapi_v1_margin_interest_history or other portfolio endpoints. No prerequisites mentioned beyond the implicit USER_DATA tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_margin_asset_leverageCInspect
Get Portfolio Margin Asset Leverage (USER_DATA) — Weight(IP): 50
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the rate limit cost ('Weight(IP): 50') and authentication type ('USER_DATA'), but fails to describe what the endpoint returns, error conditions, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no redundant words. However, it is arguably under-specified rather than optimally concise, as it lacks any description of the return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description should explain the response format or data structure. For a financial trading endpoint, the current description is insufficient to understand what 'Asset Leverage' data is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema coverage is trivially complete, warranting the baseline score of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action (Get) and resource (Portfolio Margin Asset Leverage) clearly, but offers no differentiation from sibling portfolio tools like get_sapi_v1_portfolio_account or get_sapi_v1_portfolio_collateral_rate. It essentially restates the tool name with minimal added context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus other portfolio margin endpoints. The '(USER_DATA)' tag implies authentication requirements but does not explicitly state prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_pm_loan (Grade: A)
Portfolio Margin Bankruptcy Loan Amount (USER_DATA) — Query Portfolio Margin Bankruptcy Loan Amount. Weight(UID): 500 Returns: { asset: string, amount: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
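The signature, timestamp, and recvWindow parameters above recur on every USER_DATA tool in this listing. As a minimal sketch, assuming Binance's published HMAC SHA256 signing scheme (the signature is computed over the full query string using the account's API secret), the three fit together like this:

```python
# Minimal signing sketch for Binance USER_DATA endpoints: the signature is
# an HMAC SHA256 of the query string, keyed with the account's API secret.
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, api_secret: str) -> dict:
    """Attach timestamp, recvWindow, and signature to a parameter dict."""
    params = dict(params)
    params["timestamp"] = int(time.time() * 1000)  # UTC timestamp in ms
    params.setdefault("recvWindow", 5000)          # schema caps this at 60000
    query = urlencode(params)                      # sign exactly what is sent
    params["signature"] = hmac.new(
        api_secret.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return params
```

If the timestamp falls outside recvWindow milliseconds of Binance's server time, the request is rejected, which is why the schema caps recvWindow at 60000.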
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant weight. It successfully discloses the rate limit weight ('Weight(UID): 500') and compensates for the missing output schema by documenting the return structure ('Returns: { asset: string, amount: string }'). The '(USER_DATA)' tag signals authentication requirements. It does not explicitly confirm idempotency or safety, though 'Query' implies read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste. It uses a dense, technical format that front-loads the resource name, followed by the action, rate limit weight, and return type. Every segment provides distinct information (resource classification, auth type, behavioral constraint, output shape) in a single sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by providing the return structure and rate limit weight. For a simple query tool with only authentication parameters, this covers the essential behavioral context. It could be improved by clarifying that this applies specifically to Portfolio Margin accounts versus standard margin or spot accounts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters (signature, timestamp, recvWindow). Since the schema fully documents the parameters, the description does not need to add parameter semantics. It meets the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Query') and specific resource ('Portfolio Margin Bankruptcy Loan Amount'). The '(USER_DATA)' tag indicates authentication requirements. However, it does not explicitly differentiate from sibling portfolio margin tools like get_sapi_v1_portfolio_interest_history or explain what constitutes a 'bankruptcy loan' versus other loan types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like the standard loan endpoints (get_sapi_v1_loan_ongoing_orders) or other portfolio margin queries. There are no prerequisites mentioned beyond the implicit USER_DATA tag, and no exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_portfolio_repay_futures_switch (Grade: A)
Get Auto-repay-futures Status (USER_DATA) — Query Auto-repay-futures Status Weight(IP): 30 Returns: { autoRepay: boolean }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
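A hedged sketch of reading the switch, reusing the sign_params helper sketched earlier; the REST path here is inferred from the tool name, not confirmed by this listing:

```python
# Query the auto-repay-futures switch and return the documented boolean.
# Assumes sign_params from the signing sketch above; the path is inferred
# from the tool name get_sapi_v1_portfolio_repay_futures_switch.
import requests

BASE = "https://api.binance.com"

def get_auto_repay_status(api_key: str, api_secret: str) -> bool:
    resp = requests.get(
        f"{BASE}/sapi/v1/portfolio/repay-futures-switch",
        params=sign_params({}, api_secret),
        headers={"X-MBX-APIKEY": api_key},  # the API key travels in a header
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["autoRepay"]  # documented return: { autoRepay: boolean }
```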
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses rate limit ('Weight(IP): 30') and return value structure ('Returns: { autoRepay: boolean }') which would otherwise be unknown. 'USER_DATA' indicates authentication requirement. Missing error handling or caching behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient single-line format with front-loaded key information (purpose, weight, returns). No wasted words. The dense em-dash formatting could be more readable, but every segment earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by documenting the return value ('{ autoRepay: boolean }'). Rate limiting information is included. For a simple status query tool with 3 parameters, this provides sufficient context despite no structured output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear descriptions for timestamp, signature, and recvWindow. The description does not add parameter-specific semantics beyond the schema, which is acceptable given the complete schema coverage. Baseline score appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') and resource ('Auto-repay-futures Status'). The 'USER_DATA' tag indicates authentication scope. The naming clearly distinguishes from sibling 'post_sapi_v1_portfolio_repay_futures_switch' (which likely sets the status) by using GET vs POST prefix.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternative guidance provided. However, usage is implied through the HTTP method convention (GET for querying) and the distinction from the POST sibling is clear from naming. 'USER_DATA' hints at permission requirements but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_rebate_tax_query (Grade: A)
Get Spot Rebate History Records (USER_DATA) — - The max interval between startTime and endTime is 90 days. - If startTime and endTime are not sent, the recent 7 days' data will be returned. - The earliest startTime is supported on June 10, 2020 Weight(UID): 3000 Returns: { status: string, type: string, code: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | default 1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
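The three time constraints quoted in the description (90-day maximum interval, 7-day default, June 10 2020 floor) are easy to violate. A small sketch that clamps a requested window to a valid one before calling the tool:

```python
# Build a startTime/endTime pair that respects the documented limits:
# the span may not exceed 90 days and may not start before June 10, 2020.
from datetime import datetime, timedelta, timezone

EARLIEST = datetime(2020, 6, 10, tzinfo=timezone.utc)
MAX_SPAN = timedelta(days=90)

def rebate_window(start: datetime, end: datetime) -> dict:
    start = max(start, EARLIEST)
    if end - start > MAX_SPAN:
        end = start + MAX_SPAN  # clamp instead of letting the API reject the call
    return {
        "startTime": int(start.timestamp() * 1000),  # UTC timestamp in ms
        "endTime": int(end.timestamp() * 1000),
    }
```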
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers well: it discloses the time window limitations, default behavior when parameters are omitted, rate limit weight (3000), and hints at the return structure with field examples (status, type, code). This gives the agent crucial operational context beyond just 'getting data'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The information density is appropriate, but the formatting is awkward with the em-dash followed by hyphens ('— -') creating visual clutter. The return value description is also truncated with '...', suggesting incomplete information rather than intentional conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a query tool with 6 parameters and no output schema, the description adequately covers operational constraints, rate limits, and provides a partial glimpse of return fields. It successfully communicates the critical time-based limitations necessary for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage, the description adds valuable semantic context about parameter interaction: the 90-day constraint between startTime/endTime and the 7-day default behavior when these are omitted. This goes beyond the schema's simple 'UTC timestamp in ms' definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Spot Rebate History Records' with a specific USER_DATA tag, distinguishing it from other transaction history tools (like dividends, deposits, or trades) in the sibling list. However, it could more clearly define what constitutes a 'rebate' in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important operational constraints (90-day max interval, 7-day default, June 10 2020 earliest date), which imply when the tool can be used, but lacks explicit guidance on when to choose this over similar financial history tools like get_sapi_v1_asset_asset_dividend or get_sapi_v1_asset_transfer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_account (Grade: C)
Simple Account (USER_DATA) — Weight(IP): 150 Returns: { totalAmountInBTC: string, totalAmountInUSDT: string, totalFlexibleAmountInBTC: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully documents the rate limit weight (150) and return structure, but fails to declare safety characteristics (read-only vs destructive), authentication requirements beyond the signature parameter, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single dense line packs endpoint category, auth type, weight, and return structure without redundancy. However, the metadata-heavy format (mixing weight and returns in one line) sacrifices readability for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 3-parameter retrieval tool with no output schema, the description adequately compensates by documenting the return structure inline. It lacks richness regarding Simple Earn product context, but covers the essential technical contract (weight, returns).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all three parameters (signature, timestamp, recvWindow). The description adds no additional parameter context beyond the schema, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource ('Simple Account') and shows it returns aggregate totals (totalAmountInBTC, totalFlexibleAmountInBTC), which distinguishes it from sibling position/history endpoints. However, it lacks an explicit action verb like 'Retrieve' or 'Get', relying on the tool name prefix to imply the operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this aggregate account endpoint versus specific siblings like get_sapi_v1_simple_earn_flexible_position or get_sapi_v1_simple_earn_locked_list. No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_history_collateral_record (Grade: B)
Get Collateral Record (USER_DATA) — Weight(IP): 150 Returns: { rows: { amount: string, productId: string, asset: string, createTime: number, type: string, productName: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| productId | No | query parameter: productId (string) | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
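The current/size pair implements plain offset pagination against the returned total. A minimal sketch of draining all pages, where fetch_page stands in for a hypothetical wrapper around the actual tool call:

```python
# Drain every page of a Simple Earn history endpoint: current starts at 1,
# size caps at 100, and the response reports the overall row count as total.
from typing import Callable, Iterator

def iter_records(fetch_page: Callable[[int, int], dict],
                 size: int = 100) -> Iterator[dict]:
    current = 1
    while True:
        page = fetch_page(current, size)  # returns { rows: [...], total: n }
        yield from page["rows"]
        if current * size >= page["total"]:
            break  # every row has been consumed
        current += 1
```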
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses rate limit weight (150), authentication type (USER_DATA), and return structure with pagination (rows array and total count). This compensates for missing output schema by showing the data shape.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with action and resource. Efficiently packs rate limit, auth context, and return structure. Slightly dense formatting but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists; description partially compensates by documenting return fields (amount, productId, asset, etc.) and pagination. Missing semantic context on what collateral records represent and how they differ from other history types. Adequate but not comprehensive for an 8-parameter financial tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with adequate field descriptions (e.g., 'UTC timestamp in ms', 'Current querying page'). Description provides no additional parameter semantics beyond schema, meeting baseline for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with resource 'Collateral Record' and distinguishes from sibling simple_earn history tools (redemption, rewards, subscription) by specifying the record type. Includes 'USER_DATA' context indicating authenticated access. Could improve by explaining what collateral represents in Simple Earn context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus sibling alternatives like get_sapi_v1_simple_earn_flexible_history_redemption_record or rewards_record. No prerequisites mentioned beyond implicit USER_DATA tag. No exclusion criteria provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_history_rate_history (Grade: B)
Get Rate History (USER_DATA) — Weight(IP): 150 Returns: { rows: { productId: string, asset: string, annualPercentageRate: string, time: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| productId | Yes | query parameter: productId (string) | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
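Rates come back as strings rather than numbers. A small sketch, assuming the row shape quoted in the Returns note, that parses them with Decimal so financial values are not rounded through floats:

```python
# Map each history row's timestamp to its APR, parsed exactly with Decimal.
from decimal import Decimal

def apr_series(rows: list[dict]) -> dict[int, Decimal]:
    return {row["time"]: Decimal(row["annualPercentageRate"]) for row in rows}
```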
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully communicates the rate limit weight (150) and authentication scope (USER_DATA), and documents the return structure inline. However, it lacks details on pagination behavior, error scenarios, or how historical the data is (e.g., max lookback period).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the essential action, but suffers from poor structure—cramming the weight, auth tag, and full return schema into a single run-on sentence. The return type documentation, while useful, is dense and would benefit from formatting or separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 8 parameters and lack of separate output schema, the description adequately compensates by documenting the return structure inline. However, it lacks domain context (e.g., explaining that this tracks APR changes over time for flexible savings products) and doesn't clarify the relationship between productId and the returned asset field.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 8 parameters including pagination controls ('current', 'size') and time filters. Since the schema is self-documenting, the baseline score applies. The description itself adds no parameter guidance, but none is needed given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Rate History') for Simple Earn Flexible products. The inline return schema clarifies that 'Rate' refers to annualPercentageRate. However, it does not differentiate from sibling history endpoints (redemption_record, subscription_record, etc.), leaving it unclear whether this returns transaction history or interest rate history without examining the return fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the other Simple Earn history endpoints (e.g., rewards_record, subscription_record). It mentions 'USER_DATA' indicating authentication is required, but does not specify prerequisites like needing a valid API key or when this data is preferable to other rate queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_history_redemption_record (Grade: B)
Get Flexible Redemption Record (USER_DATA) — Weight(IP): 150 Returns: { rows: { amount: string, asset: string, time: number, projectId: string, redeemId: number, destAccount: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| redeemId | No | query parameter: redeemId (string) | |
| productId | No | query parameter: productId (string) | |
| startTime | No | UTC timestamp in ms |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the rate limit weight (150) and authentication type (USER_DATA), and provides an inline return schema showing fields like amount, asset, redeemId, and destAccount. However, it lacks details on pagination behavior, time range limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely dense but efficient, packing the purpose, auth type, rate limit, and return structure into a single line. While it could benefit from formatting breaks between metadata and return schema, there is no extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by providing the return structure inline (rows array with specific fields, total count). However, it fails to explain domain concepts (what constitutes a 'redemption' in Simple Earn) or provide behavioral context beyond the technical return shape.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (e.g., 'UTC timestamp in ms' for time fields), so the description does not need to compensate. The description adds no additional parameter semantics, meeting the baseline expectation when the schema is fully documented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get Flexible Redemption Record' which clearly identifies the action (Get) and resource (Flexible Redemption Record). It distinguishes from siblings (e.g., locked products, subscription/rewards records) via the specific terms 'Flexible' and 'Redemption', though it assumes the user knows what 'redemption' means in the Simple Earn context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like get_sapi_v1_simple_earn_flexible_history_subscription_record or get_sapi_v1_simple_earn_locked_history_redemption_record. No mention of prerequisites beyond the implicit (USER_DATA) tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_history_rewards_record (Grade: A)
Get Flexible Rewards History (USER_DATA) — Weight(IP): 150 Returns: { rows: { asset: string, rewards: string, projectId: string, type: string, time: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | "BONUS", "REALTIME", "REWARDS" | |
| asset | No | query parameter: asset (string) | |
| endTime | No | UTC timestamp in ms | |
| productId | No | query parameter: productId (string) | |
| startTime | No | UTC timestamp in ms |
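Since type is required and the table quotes exactly three legal values, a small sketch pinning them in an enum so an agent cannot pass anything else:

```python
# The three documented values for the required 'type' parameter.
from enum import Enum

class RewardType(str, Enum):
    BONUS = "BONUS"
    REALTIME = "REALTIME"
    REWARDS = "REWARDS"

params = {"type": RewardType.REALTIME.value, "asset": "USDT"}  # example filter
```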
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully provides the rate limit weight (150) and documents the complete return structure including the 'rows' array schema and 'total' count for pagination. However, it omits explicit confirmation of read-only safety, pagination limits, or data retention windows.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that front-loads the action ('Get Flexible Rewards History'), immediately qualifies the authentication scope ('USER_DATA'), states the rate limit cost, and defines the return structure. Every element serves a necessary function with no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the absence of a structured output schema in the metadata, the description compensates by fully documenting the JSON return shape. It covers the essential behavioral context (rate limit, auth type) for a query tool, though it could be improved by noting pagination mechanics or maximum queryable time ranges.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (including enum values for 'type'), establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., explaining the relationship between productId and asset, or time range granularity), but it does not need to given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and resource 'Flexible Rewards History', clearly identifying the target data. The 'Flexible' qualifier distinguishes it from the 'locked' sibling tool, while 'Rewards' differentiates it from 'subscription', 'redemption', and 'collateral' history variants in the same API family.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the specific resource name and includes 'Weight(IP): 150' for rate-limit planning, but provides no explicit guidance on when to choose this over the 'locked' rewards history or other flexible history types (subscription vs redemption), nor does it mention prerequisites like API key permissions beyond the parenthetical 'USER_DATA' tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_history_subscription_record (Grade: B)
Get Flexible Subscription Record (USER_DATA) — Weight(IP): 150 Returns: { rows: { amount: string, asset: string, time: number, purchaseId: number, productId: string, type: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| productId | No | query parameter: productId (string) | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| purchaseId | No | query parameter: purchaseId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description effectively compensates by disclosing rate limit weight '150', authentication requirement 'USER_DATA', and detailed return structure with row schema and pagination 'total'. Missing explicit read-only safety declaration, though implied by 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense, front-loaded format efficiently packs the action, resource, weight cost, and complete return structure into minimal space. Technical notation is appropriate for financial API domain with no redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a query tool given inline documentation of the return structure (rows array with amount, asset, time fields), but lacks domain context explaining 'Simple Earn Flexible' products and omits explicit pagination behavior guidance despite supporting current/size parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all 10 parameters (timestamp, signature, asset, productId, etc.) fully documented. Description adds no parameter semantics beyond what the schema provides, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Get' and resource 'Flexible Subscription Record', distinguishing from siblings like the 'Locked' variant and other history types (redemption, rewards, collateral). However, it omits 'Simple Earn' domain context present in the tool name and doesn't clarify that 'Subscription Record' refers to historical subscription transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus alternatives such as get_sapi_v1_simple_earn_locked_history_subscription_record for locked products, or get_sapi_v1_simple_earn_flexible_position for current holdings. Agent must infer usage solely from the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_list (Grade: A)
[DISCOVERY] Get Simple Earn Flexible Product List (USER_DATA) — Get available Simple Earn flexible product list Weight(IP): 150 Returns: { rows: { asset: string, latestAnnualPercentageRate: string, tierAnnualPercentageRate: { 0-5BTC: number, 5-10BTC: number }, airDropPercentageRate: string, canPurchase: boolean, canRedeem: boolean, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
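A sketch of the discovery step this tool supports: reducing the returned rows to products that are actually open for purchase, keyed by asset. Field names follow the Returns note above:

```python
# Keep only products an agent could subscribe to right now.
from decimal import Decimal

def purchasable(rows: list[dict]) -> dict[str, Decimal]:
    """Asset -> latest APR for every product open to new subscriptions."""
    return {
        row["asset"]: Decimal(row["latestAnnualPercentageRate"])
        for row in rows
        if row["canPurchase"]
    }
```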
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 150') and provides a detailed inline return structure showing fields like tierAnnualPercentageRate and canPurchase/canRedeem booleans. This compensates well for the lack of output schema. However, it doesn't mention error conditions or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense with zero filler text. Every element serves a purpose: categorization ([DISCOVERY]), authentication level (USER_DATA), rate limit (Weight), and return type. However, it is formatted as a dense single-line string that hinders readability—structuring the Returns block with formatting would improve clarity without reducing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description admirably includes the full return structure with field types and examples (tierAnnualPercentageRate breakdown, etc.). The input schema is fully documented separately. For a product listing endpoint, this provides sufficient context to understand what data will be returned and how to paginate through results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (all 6 parameters have descriptions), the baseline is 3. The description does not add additional semantic context about parameters (e.g., explaining that 'asset' filters by specific cryptocurrency, or that 'current' handles pagination), but it doesn't need to given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Simple Earn Flexible Product List'). It distinguishes from siblings by specifying 'flexible' (vs 'locked' products in get_sapi_v1_simple_earn_locked_list) and 'available' (vs historical records in the history_* siblings). The '[DISCOVERY]' prefix and 'USER_DATA' tag provide additional API context, though these could be clearer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternatives (e.g., when to use flexible vs locked products). No mention of prerequisites, though 'USER_DATA' implies authentication is required. The description lacks workflow context or pointers to related subscription/redemption tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_personal_left_quota (Grade: B)
Get Flexible Personal Left Quota (USER_DATA) — Weight(IP): 150 Returns: { leftPersonalQuota: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| productId | Yes | query parameter: productId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
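A minimal sketch of the check this endpoint exists for: comparing an intended subscription amount against the remaining personal quota, which arrives as a string:

```python
# True if the intended amount still fits inside the remaining quota.
from decimal import Decimal

def can_subscribe(quota_response: dict, amount: Decimal) -> bool:
    return amount <= Decimal(quota_response["leftPersonalQuota"])
```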
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable technical context: the USER_DATA tag implies authentication requirements, Weight(IP): 150 discloses rate limiting costs, and it documents the return structure '{ leftPersonalQuota: string }' which compensates for the missing output schema. However, it omits details about error states or the string format (decimal number).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single-line description with no redundant words. Technical metadata (weight, return type) is efficiently appended. However, the dense packing of '— Weight(IP): 150 Returns:' slightly hinders readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a quota-checking endpoint, the description adequately covers the return value (since no output schema exists) but lacks business context explaining what 'leftPersonalQuota' represents (e.g., maximum remaining subscription amount). Given the complexity of financial APIs and 100% input schema coverage, this is minimally viable but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (productId, signature, timestamp, recvWindow are all documented). Since the schema is complete, the description does not need to add parameter semantics to achieve baseline adequacy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('Flexible Personal Left Quota'), clearly indicating it retrieves remaining subscription limits for flexible earn products. It distinguishes from siblings like the 'locked' version by including 'Flexible' in the description, though it assumes familiarity with 'Simple Earn' terminology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., get_sapi_v1_simple_earn_locked_personal_left_quota) or prerequisites. It does not indicate that this should be called before subscribing to check available limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_position (Grade: A)
Get Flexible Product Position (USER_DATA) — Weight(IP): 150 Returns: { rows: { totalAmount: string, tierAnnualPercentageRate: { 0-5BTC: number, 5-10BTC: number }, latestAnnualPercentageRate: string, yesterdayAirdropPercentageRate: string, asset: string, airDropAsset: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| productId | No | query parameter: productId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
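A sketch of folding the paginated rows into per-asset totals; totalAmount is a string, so Decimal keeps the sums exact:

```python
# Sum flexible positions per asset across however many rows were fetched.
from collections import defaultdict
from decimal import Decimal

def totals_by_asset(rows: list[dict]) -> dict[str, Decimal]:
    totals: dict[str, Decimal] = defaultdict(Decimal)
    for row in rows:
        totals[row["asset"]] += Decimal(row["totalAmount"])
    return dict(totals)
```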
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and succeeds by disclosing the IP rate limit weight (150) and providing an inline return schema showing nested fields like tierAnnualPercentageRate and totalAmount. It does not, however, clarify caching behavior or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence format is information-dense and front-loaded with the action. The inline JSON return structure, while somewhat dense, efficiently compensates for the missing output_schema. No wasted words, though formatting could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately compensates by detailing the return structure including the rows array and total count. It identifies the endpoint as USER_DATA (authenticated). It omits explicit pagination guidance, but the schema covers the mechanics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds no parameter-specific context beyond what the schema already documents (e.g., pagination defaults, timestamp format). It neither enhances nor contradicts the parameter definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' and resource 'Flexible Product Position', distinguishing it from sibling 'locked_position' tools. The '(USER_DATA)' tag clarifies this retrieves private user holdings. However, it could explicitly define 'position' as the user's current subscription balances for clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance is provided versus alternatives like 'simple_earn_flexible_list' (available products) or 'simple_earn_locked_position' (fixed-term products). While the name implies usage, the description lacks guidance on when to query this versus subscribing/redeeming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_flexible_subscription_preview (Grade: C)
Get Flexible Subscription Preview (USER_DATA) — Weight(IP): 150 Returns: { totalAmount: string, rewardAsset: string, airDropAsset: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | query parameter: amount (number) | |
| productId | Yes | query parameter: productId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
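A hedged workflow sketch of preview-before-subscribe; preview_subscription and subscribe are hypothetical wrappers around this GET tool and its POST sibling (post_sapi_v1_simple_earn_flexible_subscribe, named in the review below):

```python
# Preview the projected outcome first; only place the real subscription
# if the projected total clears a caller-supplied threshold.
from decimal import Decimal

def subscribe_if_worthwhile(preview_subscription, subscribe,
                            product_id: str, amount: float,
                            min_total: Decimal) -> bool:
    preview = preview_subscription(productId=product_id, amount=amount)
    if Decimal(preview["totalAmount"]) < min_total:
        return False  # projected outcome too small; skip the subscription
    subscribe(productId=product_id, amount=amount)
    return True
```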
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of disclosing behavioral traits and partially succeeds by specifying the rate limit 'Weight(IP): 150' and the return structure with fields like totalAmount and rewardAsset. However, it fails to clarify whether this is a read-only calculation, if it has side effects, or whether the results are cached or real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs multiple data points—purpose, authentication type, rate weight, and return fields—into a single dense line without redundant phrasing. While highly compact and front-loaded with technical metadata, the formatting using em-dashes and ellipses makes parsing slightly harder than necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the missing output schema by documenting the return object's key fields (totalAmount, rewardAsset, airDropAsset). However, it omits explanation of the preview's business purpose (simulating subscription outcomes) and lacks annotation coverage for destructive hints or idempotency that would help an agent understand the safety profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score applies. The description adds no semantic context beyond the schema's generic 'query parameter' labels for productId and amount, leaving the business meaning of these parameters (e.g., that amount represents the proposed subscription quantity) implied rather than explicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get Flexible Subscription Preview' which identifies the core action and resource, but offers minimal elaboration on what constitutes a 'subscription preview' (e.g., reward calculation simulation). It distinguishes the endpoint as USER_DATA requiring authentication, but does not differentiate from sibling tools like the actual subscription endpoint (post_sapi_v1_simple_earn_flexible_subscribe) beyond the implied 'Get' versus 'Post' semantics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided regarding when to use this tool versus alternatives such as the actual subscription endpoint or other preview tools. The description lacks prerequisites (beyond implicit signature requirement) or use-case scenarios indicating this should be called before committing to a subscription.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_history_redemption_record (Grade: B)
Get Locked Redemption Record (USER_DATA) — Weight(IP): 150 Returns: { rows: { positionId: string, redeemId: number, time: number, asset: string, lockPeriod: string, amount: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| redeemId | No | query parameter: redeemId (string) | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | No | query parameter: positionId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It successfully discloses the rate limit weight ('Weight(IP): 150') and provides the return structure schema inline (rows array with positionId, redeemId, etc., plus total count). However, it lacks information on read-only safety, pagination limits, or data retention policies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense single-line format front-loaded with the action and resource. The technical metadata (weight) and return structure are included efficiently, though the raw JSON-like return format is slightly hard to parse. No redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter authenticated endpoint with no output schema or annotations, the description partially compensates by documenting the return structure and rate limits. However, it omits business context (what constitutes a redemption record), pagination behavior details, and error scenarios expected for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are fully documented in the schema itself (e.g., 'UTC timestamp in ms', 'Default:10 Max:100'). The description adds no additional parameter semantics, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('Locked Redemption Record'), including the '(USER_DATA)' tag indicating authentication scope. The terms 'Locked' and 'Redemption' inherently distinguish this from siblings like flexible redemption or locked subscription records, though it doesn't explain the conceptual differences between them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus related endpoints like the flexible redemption history or locked subscription history. No mention of prerequisites such as requiring active locked earn positions or authentication requirements beyond the implicit '(USER_DATA)' label.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_history_rewards_record (Grade: B)
Get Locked Rewards History (USER_DATA) — Weight(IP): 150 Returns: { rows: { positionId: string, time: number, asset: string, lockPeriod: string, amount: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | No | query parameter: positionId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
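Every tool in this family shares the signature/timestamp/recvWindow trio, so a single worked example is useful here. Below is a minimal sketch of calling this endpoint over plain HTTPS with Binance's standard HMAC-SHA256 query signing. The path is reconstructed from the tool name, the env-var credential names are placeholders, and a real deployment through the MCP gateway would not hand-sign requests like this.

```python
# Minimal sketch: signed GET against the locked rewards history endpoint.
# Assumptions: path reconstructed from the tool name; credentials supplied
# via hypothetical env vars; direct HTTPS rather than the gateway transport.
import hashlib
import hmac
import os
import time
from urllib.parse import urlencode

import requests

API_KEY = os.environ["BINANCE_API_KEY"]        # hypothetical env var names
API_SECRET = os.environ["BINANCE_API_SECRET"]

def signed_get(path: str, params: dict) -> dict:
    """Add timestamp + recvWindow, sign the query string, and issue the GET."""
    query = dict(params, timestamp=int(time.time() * 1000), recvWindow=5000)
    signature = hmac.new(API_SECRET.encode(), urlencode(query).encode(),
                         hashlib.sha256).hexdigest()
    query["signature"] = signature  # appended after signing, so the signed
    resp = requests.get(f"https://api.binance.com{path}",  # payload matches
                        params=query,
                        headers={"X-MBX-APIKEY": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Fetch up to 100 locked rewards records for one asset.
rewards = signed_get("/sapi/v1/simple-earn/locked/history/rewardsRecord",
                     {"asset": "BNB", "size": 100})
for row in rewards["rows"]:
    print(row["time"], row["asset"], row["amount"], row["lockPeriod"])
print("total records:", rewards["total"])
```

Later sketches in this section reuse this hypothetical signed_get helper rather than repeating the signing boilerplate.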
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Documents specific behavioral traits not in annotations: 'Weight(IP): 150' indicates rate limit cost, 'USER_DATA' signals authentication requirement, and the inline JSON structure defines the return format including the 'total' field for pagination. However, it fails to explain pagination mechanics (cursor vs offset) or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense single-sentence format packs endpoint type, weight, and return structure efficiently. While front-loaded with the action, the inline JSON return type reduces readability slightly, though every element serves a necessary documentation purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides return structure documentation (compensating for missing output schema) and rate limit information. However, lacks domain context explaining what 'Locked Simple Earn' products are, and omits guidance on interpreting the 'total' field for result pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across all 8 parameters, the schema itself carries the semantic burden. The description adds no parameter-specific guidance (e.g., valid formats for 'asset' or interaction between 'startTime' and 'endTime'), meeting the baseline expectation for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Locked Rewards History'), distinguishing it from redemption history via the specific term 'Rewards'. However, it does not explicitly differentiate from the 'flexible' equivalent sibling or explain what 'Simple Earn' encompasses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the flexible rewards history or locked redemption history alternatives. The USER_DATA tag implies authentication requirements but lacks explicit permission prerequisites or rate limit warnings beyond the weight value.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_history_subscription_record (Grade B)
Get Locked Subscription Record (USER_DATA) — Weight(IP): 150 Returns: { rows: { positionId: string, purchaseId: number, projectId: string, time: number, asset: string, amount: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| purchaseId | No | query parameter: purchaseId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
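The assessment above notes that pagination mechanics are left unexplained. The sketch below assumes the common offset interpretation of current/size against the returned total (an assumption, since the description never states it), reusing the hypothetical signed_get helper from the rewards-history sketch.

```python
# Sketch: collect every locked subscription record by paging current/size
# until `total` rows have been seen. Assumes the signed_get helper from the
# rewards-history sketch above and offset-style paging (not confirmed by
# the tool description).
def all_locked_subscriptions(asset: str | None = None) -> list[dict]:
    rows: list[dict] = []
    page = 1
    while True:
        params = {"current": page, "size": 100}  # Max:100 per the schema
        if asset:
            params["asset"] = asset
        batch = signed_get(
            "/sapi/v1/simple-earn/locked/history/subscriptionRecord", params)
        rows.extend(batch["rows"])
        if not batch["rows"] or len(rows) >= batch["total"]:
            return rows
        page += 1
```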
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the rate limit weight (150) and authentication level (USER_DATA), and documents the return structure inline since no output schema exists. However, without annotations, it fails to confirm this is read-only or describe error conditions, data freshness, or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely dense single-line format front-loads the action and resource. Information-to-word ratio is high, though the inline JSON return structure slightly reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter tool with no annotations, the description provides the essential return structure and rate limit information. However, it lacks usage context, safety declarations, and detailed explanations of the returned fields beyond their types.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter semantics are adequately handled by the schema itself. The description adds no parameter-specific context beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the action (Get) and resource (Locked Subscription Record), distinguishing it from flexible earn products via the 'Locked' specifier. However, it does not explicitly clarify that this retrieves historical subscription records versus active positions or available products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like get_sapi_v1_simple_earn_locked_list or get_sapi_v1_simple_earn_locked_position. No mention of prerequisites, pagination strategies, or filtering best practices.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_list (Grade B)
[DISCOVERY] Get Simple Earn Locked Product List (USER_DATA) — Weight(IP): 150 Returns: { rows: { projectId: string, detail: { asset: string, rewardAsset: string, duration: number, renewable: boolean, isSoldOut: boolean, apr: string, ... }, quota: { totalPersonalQuota: string, minimum: string } }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
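As a discovery endpoint, this is typically the first call in the locked-earn workflow. A short sketch follows, again assuming the signed_get helper from the rewards-history example; the field names mirror the inline return structure quoted in the description.

```python
# Sketch: list locked BNB products that can still be subscribed to,
# assuming the signed_get helper sketched earlier.
products = signed_get("/sapi/v1/simple-earn/locked/list",
                      {"asset": "BNB", "size": 100})
for row in products["rows"]:
    detail = row["detail"]
    if detail["isSoldOut"]:
        continue  # skip products with no remaining capacity
    print(row["projectId"], f"{detail['duration']}d",
          "APR", detail["apr"], "min", row["quota"]["minimum"])
```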
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It usefully includes the return structure schema showing fields like projectId, asset, and apr, and mentions the rate limit weight. However, it fails to explicitly state authentication requirements, pagination behavior, or that this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense line that efficiently packs the operation type, endpoint category, weight, and return structure. While the inline JSON return format is slightly difficult to parse, there is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 6 parameters with full schema coverage and no output schema, the description compensates by providing the return structure inline. However, it lacks explanation of the 'Simple Earn' domain context, authentication flow requirements, and pagination handling for the 'total' and 'rows' returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (size, asset, current, signature, timestamp, recvWindow all documented). The description does not add parameter-specific semantics, but with full schema coverage, it meets the baseline expectation without penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get') and resource ('Simple Earn Locked Product List'), including a '[DISCOVERY]' tag that hints at its function. It distinguishes from 'Flexible' siblings by specifying 'Locked', though it assumes domain knowledge of what 'Locked' products are without explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description includes '(USER_DATA)' indicating authentication requirements and 'Weight(IP): 150' for rate limiting, it lacks explicit guidance on when to use this versus similar tools like 'get_sapi_v1_simple_earn_locked_position' or the 'flexible' variant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_personal_left_quota (Grade B)
Get Locked Personal Left Quota (USER_DATA) — Weight(IP): 150 Returns: { leftPersonalQuota: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| projectId | Yes | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
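The assessment below notes the missing workflow context (projectId comes from the locked product list). A sketch of that chaining, assuming the signed_get helper from earlier; the projectId value is a hypothetical placeholder.

```python
# Sketch: check remaining personal quota for a project before subscribing,
# assuming the signed_get helper sketched earlier. "BNB*120" is a
# hypothetical projectId taken from a prior locked-list call.
quota = signed_get("/sapi/v1/simple-earn/locked/personalLeftQuota",
                   {"projectId": "BNB*120"})
left = float(quota["leftPersonalQuota"])
if left <= 0:
    print("personal quota exhausted for this project")
else:
    print("can still subscribe up to", left)
```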
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the rate limit weight 'Weight(IP): 150' and indicates authentication requirements via '(USER_DATA)', adding necessary behavioral context absent from annotations. It also documents the return structure '{ leftPersonalQuota: string }', though it omits safety implications, error behaviors, and the semantic meaning of the quota value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs technical metadata—including action, auth type, rate limit, and return type—into a single sentence without redundant filler. Every clause serves a distinct informational purpose, though the density sacrifices some explanatory context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by specifying the return structure, but it fails to explain domain concepts like what 'leftPersonalQuota' represents or how 'projectId' relates to Simple Earn projects. The description meets minimum viability but leaves significant gaps in contextual understanding for an agent unfamiliar with Binance Simple Earn products.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the schema itself, establishing a baseline score. The description does not add additional parameter semantics or usage examples beyond what the schema provides, which is acceptable given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and identifies the resource as 'Locked Personal Left Quota', clearly indicating it retrieves remaining quota for locked Simple Earn products. While it does not explicitly differentiate from the flexible alternative (get_sapi_v1_simple_earn_flexible_personal_left_quota), the inclusion of 'Locked' in both the name and description provides sufficient resource identification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as the flexible quota endpoint, nor does it mention prerequisites like obtaining a valid projectId from the locked product list. It lacks explicit conditions or workflow context for proper invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_position (Grade C)
Get Locked Product Position (USER_DATA) — Weight(IP): 150 Returns: { rows: { positionId: string, parentPositionId: string, projectId: string, asset: string, amount: string, purchaseTime: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Default:10 Max:100 | |
| asset | No | query parameter: asset (string) | |
| current | No | Current querying page. Start from 1. Default:1 | |
| projectId | No | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | No | query parameter: positionId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
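A filtering sketch under the usual assumptions (signed_get helper from earlier, hypothetical projectId). The assessment below observes that the description never says whether the filters are mutually exclusive; this sketch assumes they combine as a logical AND.

```python
# Sketch: fetch the user's locked positions narrowed by asset and projectId,
# assuming the signed_get helper sketched earlier. That the filters combine
# as a logical AND is an assumption the description leaves open.
positions = signed_get("/sapi/v1/simple-earn/locked/position",
                       {"asset": "BNB", "projectId": "BNB*120", "size": 100})
for pos in positions["rows"]:
    print(pos["positionId"], pos["asset"], pos["amount"], pos["purchaseTime"])
print("matching positions:", positions["total"])
```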
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It notes 'USER_DATA', implying authenticated access, and includes 'Weight(IP): 150' to indicate rate-limit cost, but does not disclose whether the operation is read-only, idempotent, or safe. The return structure is documented but not explained (e.g., what 'total' represents).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured, cramming the return-type JSON into the same sentence as the operation name and weight limit. While not verbose, the density makes it harder to parse, and 'Weight(IP): 150' is an implementation detail that does not aid semantic understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter query tool with no annotations, the description provides the basic return structure inline, compensating somewhat for the lack of output schema. However, it lacks guidance on pagination behavior (how to iterate through 'total' rows) or filtering logic (whether asset/projectId filters are mutually exclusive).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema itself documents all 8 parameters (asset, projectId, positionId, current, size, etc.). The description adds no additional semantic context beyond the schema's 'query parameter' labels and default values, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get Locked Product Position' with the specific resource type (USER_DATA), clearly identifying it retrieves user's locked Simple Earn positions. However, it does not explicitly differentiate from sibling 'get_sapi_v1_simple_earn_locked_list' (available products) vs this tool (user holdings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus flexible positions, when to use pagination parameters (current/size), or prerequisites. The 'USER_DATA' tag implies authentication is required, but no explicit guidance on credentials or rate limit implications of 'Weight(IP): 150'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_set_redeem_option (Grade C)
Set Locked Product Redeem Option(USER_DATA) — Set redeem option for Locked product Weight(IP): 50 Returns: { success: boolean }.
| Name | Required | Description | Default |
|---|---|---|---|
| redeemTo | No | SPOT,FLEXIBLE, default FLEXIBLE | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | Yes | query parameter: positionId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
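Despite the get_ prefix in the tool name, this operation mutates account state, so the sketch below assumes a signed POST (the HTTP verb is an assumption based on the 'Set' semantics); the positionId is a hypothetical placeholder.

```python
# Sketch: route a locked position's redemption to the spot wallet instead of
# the default flexible wallet. Mirrors the signed_get helper sketched
# earlier, but as a POST, which the mutating "Set" semantics suggest.
import hashlib
import hmac
import os
import time
from urllib.parse import urlencode

import requests

def signed_post(path: str, params: dict) -> dict:
    query = dict(params, timestamp=int(time.time() * 1000))
    query["signature"] = hmac.new(
        os.environ["BINANCE_API_SECRET"].encode(),
        urlencode(query).encode(), hashlib.sha256).hexdigest()
    resp = requests.post(f"https://api.binance.com{path}", params=query,
                         headers={"X-MBX-APIKEY": os.environ["BINANCE_API_KEY"]},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

# "123456" is a hypothetical positionId from a prior position query.
result = signed_post("/sapi/v1/simple-earn/locked/setRedeemOption",
                     {"positionId": "123456", "redeemTo": "SPOT"})
print("redeem option updated:", result["success"])
```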
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit 'Weight(IP): 50' and return type '{ success: boolean }', but omits side effects, persistence details, or authentication requirements beyond the 'USER_DATA' label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but contains redundancy: 'Set Locked Product Redeem Option... Set redeem option for Locked product' repeats the same idea. However, it efficiently appends rate limit and return value information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and lack of output schema, the description adequately covers the return shape and basic operation. However, it lacks business context about what 'setting redeem option' entails for the user's locked positions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for all 5 parameters (e.g., redeemTo options 'SPOT,FLEXIBLE'). The description adds no parameter-specific context, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Set') and resource ('Locked Product Redeem Option'), but 'Redeem Option' is jargon-heavy without explaining it configures the redemption destination (SPOT vs FLEXIBLE). It fails to differentiate from the sibling redemption tool 'post_sapi_v1_simple_earn_locked_redeem'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this configuration tool versus actually redeeming, nor prerequisites like requiring an existing locked position. The description only states what the tool does, not when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_simple_earn_locked_subscription_preview (Grade C)
Get Locked Subscription Preview (USER_DATA) — Weight(IP): 150
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | query parameter: amount (number) | |
| projectId | Yes | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| autoSubscribe | No | true or false, default true. |
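The assessment below flags the missing preview-then-subscribe workflow; here is a sketch of it under the same assumptions as earlier (hypothetical signed_get/signed_post helpers and projectId). The subscribe path is reconstructed from the sibling tool name post_sapi_v1_simple_earn_locked_subscribe, and the preview's return fields are undocumented, so the sketch only prints the raw payload.

```python
# Sketch: preview a locked subscription before committing funds, assuming
# the signed_get/signed_post helpers sketched earlier and a hypothetical
# projectId discovered via the locked product list.
preview = signed_get("/sapi/v1/simple-earn/locked/subscriptionPreview",
                     {"projectId": "BNB*120", "amount": 1.5,
                      "autoSubscribe": "false"})
print(preview)  # return fields are undocumented; inspect before committing

# If the terms look right, the sibling tool performs the real subscription:
# signed_post("/sapi/v1/simple-earn/locked/subscribe",
#             {"projectId": "BNB*120", "amount": 1.5})
```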
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses authentication type (USER_DATA) and rate limit cost (Weight(IP): 150), but fails to describe what data the preview returns, whether the operation is idempotent, or what the side effects are (though 'preview' implies read-only).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (one sentence fragment), but underspecified for a 6-parameter financial tool. While there is no verbosity, the lack of front-loaded explanatory content makes it insufficiently informative rather than efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, and no annotations, the description should explain return values and preview semantics. It provides neither, leaving the agent unaware of what the preview calculation includes or how to interpret the results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all 6 parameters including 'autoSubscribe' and 'recvWindow'. The description adds no additional semantic context for parameters (e.g., that 'projectId' refers to a Simple Earn product), warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Get') and resource ('Locked Subscription Preview'), and distinguishes from the flexible product sibling via the 'Locked' keyword. However, it fails to explain what the preview actually returns (APY, rewards, terms) or what 'Simple Earn' refers to, leaving functional gaps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this preview tool versus the actual subscription endpoint (post_sapi_v1_simple_earn_locked_subscribe). The USER_DATA tag implies authentication requirements but does not explain the workflow of previewing before subscribing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_spot_delist_schedule (Grade C)
Get symbols delist schedule for spot (MARKET_DATA) — Get symbols delist schedule for spot Weight(IP): 100
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses rate limiting ('Weight(IP): 100'), which is valuable behavioral context not present in the schema. However, it lacks disclosure of safety characteristics (read-only vs destructive), error conditions, or return format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is unnecessarily repetitive: 'Get symbols delist schedule for spot (MARKET_DATA) — Get symbols delist schedule for spot Weight(IP): 100' essentially states the same thing twice with different parentheticals. This wastes space that could clarify auth requirements or return values.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval endpoint with 3 well-documented parameters and no output schema, the description is minimally adequate. The rate limit information helps, but the lack of annotations means the description should have clarified authentication requirements and read-only nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all three parameters (signature, timestamp, recvWindow). The description adds no additional semantic context for parameters beyond what the schema already provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'symbols delist schedule for spot', distinguishing it from the sibling 'get_sapi_v1_margin_delist_schedule' by specifying the market type. However, it loses a point for being repetitive and including metadata tags (MARKET_DATA) without explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus the margin delist schedule endpoint or other alternatives. No mention of prerequisites (though the required 'signature' parameter implies authentication is needed).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_futures_account (Grade B)
Detail on Sub-account's Futures Account (For Master Account) — Weight(IP): 10 Returns: { email: string, asset: string, assets: { asset: string, initialMargin: string, maintenanceMargin: string, marginBalance: string, maxWithdrawAmount: string, openOrderInitialMargin: string, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | query parameter: email (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight ('Weight(IP): 10') and provides a detailed inline output schema. However, it lacks explicit statements about safety (read-only nature), error conditions, or authentication requirements beyond the parameter schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense sentence mixing purpose, rate limits, and return structure. While not wasteful, the concatenation of disparate concerns (weight and return values) makes it slightly awkward and harder to parse than structured multi-sentence descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema annotations, the inclusion of the return structure in the description is valuable. However, the description fails to differentiate from the v2 endpoint or explain the specific use case for 'detail' versus 'summary' data, leaving contextual gaps given the rich sibling tool environment.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing adequate documentation for all four parameters (email, signature, timestamp, recvWindow). The description does not add semantic context beyond the schema (e.g., explaining that 'email' refers to the sub-account identity), warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the operation ('Detail'), resource ('Sub-account's Futures Account'), and authorization context ('For Master Account'). It implicitly distinguishes from the sibling 'summary' endpoint via the 'Detail' label, though it does not clarify differences between v1 and v2 variants.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The only guidance provided is the parenthetical '(For Master Account)', indicating required permissions. There is no explicit guidance on when to use this versus the 'summary' endpoint, the v2 variant, or prerequisites like required API permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_futures_account_summary (Grade A)
Summary of Sub-account's Futures Account (For Master Account) — Weight(IP): 1 Returns: { totalInitialMargin: string, totalMaintenanceMargin: string, totalMarginBalance: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds valuable rate-limiting info ('Weight(IP): 1') and previews the return structure (margin fields), but omits the safety profile (read-only nature), error-handling behavior, and idempotency details expected for a mutation-free tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with purpose, followed by metadata (weight) and return structure. Efficient with no filler, though slightly cramped formatting of the return object JSON.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter GET endpoint with no output schema, the description adequately covers intent, access requirements, rate cost, and return value shape. Could improve by noting authentication requirements or master account privileges explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (signature, timestamp, recvWindow all documented), establishing baseline 3. The description adds no parameter syntax details, but none are needed given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it retrieves a 'Summary of Sub-account's Futures Account' and specifies '(For Master Account)' which clarifies the scope. Uses 'Summary' to distinguish from the sibling 'get_sapi_v1_sub_account_futures_account' (likely detailed view), though it doesn't explicitly contrast with the v2 variant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the constraint '(For Master Account)' indicating required access level, but lacks explicit guidance on when to choose this over 'get_sapi_v2_sub_account_futures_account_summary' or the non-summary variant. No prerequisites or error conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_futures_internal_transfer (Grade B)
Sub-account Futures Asset Transfer History (For Master Account) — Weight(IP): 1 Returns: { success: boolean, futuresType: number, transfers: { from: string, to: string, asset: string, qty: string, tranId: number, time: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| email | Yes | Sub-account email | |
| limit | No | Default value: 50, Max value: 500 | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| futuresType | Yes | 1:USDT-margined Futures, 2: Coin-margined Futures |
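A time-window sketch, assuming the signed_get helper from the first example. The email address is a hypothetical placeholder, and futuresType=1 selects USDT-margined futures per the schema.

```python
# Sketch: pull the last 30 days of USDT-margined internal transfers for one
# sub-account, assuming the signed_get helper sketched earlier.
import time

DAY_MS = 24 * 60 * 60 * 1000
now_ms = int(time.time() * 1000)
history = signed_get("/sapi/v1/sub-account/futures/internalTransfer",
                     {"email": "sub1@example.com", "futuresType": 1,
                      "startTime": now_ms - 30 * DAY_MS, "endTime": now_ms,
                      "limit": 500})
for t in history["transfers"]:
    print(t["time"], t["from"], "->", t["to"], t["qty"], t["asset"])
```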
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries a significant share of the burden, disclosing the rate limit 'Weight(IP): 1' and providing the complete return schema structure inline (success, futuresType, transfers array). It does not mention pagination behavior or error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the resource name, but crams the return schema into a dense JSON blob at the end. While efficient, the structure could be clearer with separation between description and return value documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 9 parameters and lack of formal output schema, the description compensates well by documenting the return structure inline. It covers the Master Account constraint and rate limiting, though it omits guidance distinguishing it from similar transfer endpoints in the sibling list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (email, futuresType values, timestamps, etc.), establishing a baseline of 3. The description adds no additional parameter context beyond what the schema already provides, nor does it explain parameter relationships (e.g., startTime/endTime pairing).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (Sub-account Futures Asset Transfer History) and scope (For Master Account). It implicitly distinguishes from the POST sibling tool by using 'History' to indicate retrieval, though it lacks an explicit verb like 'Retrieve' or 'Get'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes '(For Master Account)' indicating the required perspective, but provides no guidance on when to use this versus the POST execution variant (post_sapi_v1_sub_account_futures_internal_transfer) or versus other transfer history endpoints like get_sapi_v1_futures_transfer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_futures_position_risk (Grade C)
Futures Position-Risk of Sub-account (For Master Account) — Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only discloses the rate limit cost ('Weight(IP): 10') but fails to indicate this is a read-only operation, what risk metrics are returned, or whether the data is real-time versus cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient phrase without redundancy. However, it borders on being underspecified rather than optimally concise—front-loading the IP weight is useful, but the technical term 'position-risk' warrants a brief explanatory clause.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of futures trading risk metrics and the absence of both annotations and an output schema, the description is insufficient. It should explain what position risk data includes (e.g., liquidation price, leverage, margin level) to help the agent interpret results or decide between this and account summary endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, timestamp, signature, recvWindow all documented), establishing a baseline of 3. The description adds no additional parameter context (e.g., explaining signature generation or typical recvWindow values), but does not need to compensate for schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Futures Position-Risk), target entity (Sub-account), and execution context (For Master Account). However, it assumes familiarity with what 'position-risk' entails (liquidation metrics, margin ratios) without clarifying the specific data returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The parenthetical '(For Master Account)' provides minimal context about authorization scope, but there is no guidance on when to use this versus the v2 variant (get_sapi_v2_sub_account_futures_position_risk) or other futures account endpoints. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_list (Grade A)
[DISCOVERY] Query Sub-account List (For Master Account) — Weight(IP): 1 Returns: { subAccounts: { email: string, isFreeze: boolean, createTime: number, isManagedSubAccount: boolean, isAssetManagementSubAccount: boolean }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| email | No | Sub-account email | |
| limit | No | Default 1; max 200 | |
| isFreeze | No | query parameter: isFreeze ("true" \| "false") | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
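A filtering sketch, assuming the signed_get helper from the first example. Note that the schema types isFreeze as the string "true"/"false", not a boolean.

```python
# Sketch: list frozen sub-accounts, assuming the signed_get helper sketched
# earlier; isFreeze is passed as the string "true" per the schema.
frozen = signed_get("/sapi/v1/sub-account/list",
                    {"isFreeze": "true", "limit": 200})
for acct in frozen["subAccounts"]:
    print(acct["email"], "created at", acct["createTime"])
```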
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 1') and the return data structure inline, but fails to mention idempotency, pagination behavior beyond parameter existence, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-line format is dense but efficient, front-loading the [DISCOVERY] tag, action verb, scope, weight, and return type. The inline JSON return structure is slightly hard to parse but serves a necessary function given the lack of output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by providing the complete return structure inline. For a 7-parameter list endpoint, this provides sufficient context for an agent to understand what data will be returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 7 parameters have descriptions), establishing a baseline of 3. The description does not add semantic meaning beyond the schema (e.g., it doesn't explain the relationship between page/limit or the format of the signature), but it doesn't need to given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Query Sub-account List' with scope '(For Master Account)' and specifies the return structure. However, it does not explicitly differentiate from sibling tools like get_sapi_v1_managed_subaccount_info or get_sapi_v1_sub_account_assets, leaving some ambiguity about which sub-account tool to use for specific needs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes '(For Master Account)' which implies authorization scope, but provides no explicit guidance on when to use this versus other sub-account endpoints (like get_sapi_v1_sub_account_futures_account_summary) or prerequisites for the master account.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_margin_account (Grade C)
Detail on Sub-account's Margin Account (For Master Account) — Weight(IP): 10 Returns: { email: string, marginLevel: string, totalAssetOfBtc: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It adds valuable API-specific detail: 'Weight(IP): 10' (rate-limit cost) and a preview of the return fields (email, marginLevel, totalAssetOfBtc). However, it omits the safety-profile disclosure (read-only status) and error conditions that annotations would typically provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence crams purpose, weight metadata, and return structure together using em-dashes. While not verbose, the structure is suboptimal—technical metadata (Weight(IP)) obscures the main purpose and the inline JSON return hint is cluttered.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter read operation with 100% schema coverage but no output schema, the description compensates by listing return fields inline. However, it lacks error handling context, authentication flow explanation beyond 'signature', and explicit read-only assurance given the absence of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., no explanation of 'recvWindow' usage patterns or email format requirements), but the schema adequately documents parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses 'Detail on' as a weak verb phrase but identifies the resource (Sub-account's Margin Account) and adds '(For Master Account)' context. However, it fails to clearly distinguish from the sibling 'get_sapi_v1_sub_account_margin_account_summary' tool—while 'Detail' implies specificity versus 'Summary', this contrast is not explicit enough to guide selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Only provides the constraint '(For Master Account)' implying caller requirements. Missing explicit guidance on when to use this specific endpoint versus the summary endpoint or other sub-account tools, and lacks prerequisite information (e.g., sub-account must exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_margin_account_summary (Grade A)
Summary of Sub-account's Margin Account (For Master Account) — Weight(IP): 10 Returns: { totalAssetOfBtc: string, totalLiabilityOfBtc: string, totalNetAssetOfBtc: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description adds valuable behavioral context by specifying the IP weight/rate limit (10) and detailing the return structure with specific fields (totalAssetOfBtc, etc.), though it doesn't explicitly confirm the read-only nature or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence structure efficiently packs the purpose, rate limit, and return schema without extraneous text, though the dense formatting slightly reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately documents the return format. It covers the master-account context and rate limiting, though it could specify pagination or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow), meeting the baseline expectation without requiring additional parameter semantics from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the resource (Sub-account's Margin Account) and action (Summary), with the parenthetical '(For Master Account)' effectively distinguishing it from regular account queries and indicating the required permission level.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the '(For Master Account)' constraint provides implicit usage guidance, the description lacks explicit comparison to similar endpoints like `get_sapi_v1_sub_account_margin_account` or prerequisites for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_spot_summary (Grade A)
Sub-account Spot Assets Summary (For Master Account) — Get BTC valued asset summary of subaccounts. Weight(IP): 1 Returns: { totalCount: number, masterAccountTotalAsset: string, spotSubUserAssetBtcVoList: { email: string, totalAsset: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| size | No | Default:10 Max:20 | |
| email | No | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
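An aggregation sketch, assuming the signed_get helper from the first example. Offset paging via page/size is an assumption based on the schema defaults; size is capped at 20 by the schema.

```python
# Sketch: sum BTC-valued spot balances across all sub-accounts, assuming the
# signed_get helper sketched earlier and offset-style page/size paging.
page, seen, total_btc = 1, 0, 0.0
while True:
    summary = signed_get("/sapi/v1/sub-account/spotSummary",
                         {"page": page, "size": 20})
    subs = summary["spotSubUserAssetBtcVoList"]
    total_btc += sum(float(s["totalAsset"]) for s in subs)
    seen += len(subs)
    if not subs or seen >= summary["totalCount"]:
        break
    page += 1
print("aggregate sub-account spot value (BTC):", total_btc)
```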
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It successfully discloses rate limiting ('Weight(IP): 1'), access scope (Master Account only), and provides the complete return structure including nested object fields. It does not explicitly state read-only safety or error behaviors, but the return type documentation adds significant value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with high information density: a descriptive title prefix, the core purpose, rate limit weight, and return type signature. Every element serves a distinct purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of a structured output schema, the description compensates by manually documenting the return structure including field types. Combined with 100% input schema coverage, this provides sufficient completeness for invocation, though it could benefit from mentioning pagination behavior or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameters are well-documented in the schema itself (page, size, email, signature, timestamp, recvWindow). The description adds no additional parameter semantics beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Get BTC valued asset summary), resource (subaccounts), and scope (Spot Assets, For Master Account). It effectively distinguishes from sibling tools like get_sapi_v1_sub_account_futures_account_summary and get_sapi_v1_sub_account_margin_account_summary by explicitly specifying 'Spot'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through '(For Master Account)' and the specific 'Spot' asset focus, but lacks explicit when-to-use guidance or comparisons with alternatives like get_sapi_v3_sub_account_assets or get_sapi_v4_sub_account_assets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_status (grade A)
Sub-account's Status on Margin/Futures (For Master Account) — - If no email sent, all sub-accounts' information will be returned. Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
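The optional email filter described above is a presence check, not a value check: omitting the key entirely is what triggers the all-sub-accounts behavior. A sketch of how a caller might branch on it, reusing the hypothetical signed_get helper from the earlier sketch (path inferred from the tool name):

```python
# Sketch: branch on the optional email filter. With no email key at all,
# the endpoint returns every sub-account's status; with one, only that
# sub-account. signed_get is the hypothetical helper sketched earlier.
def sub_account_status(email: str | None = None) -> bytes:
    params = {}
    if email is not None:
        params["email"] = email  # narrow to one sub-account
    return signed_get("/sapi/v1/sub-account/status", params)
```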
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit ('Weight(IP): 10') and authorization scope ('For Master Account'), and explains the bulk retrieval behavior. However, it omits safety characteristics (read-only nature), response structure details, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently front-loaded with the purpose statement, followed by the conditional parameter behavior and rate limit. The em-dash construction is slightly awkward ('— -'), but every component earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, simple object structure) and lack of output schema, the description adequately covers the essential operational context. However, it could improve by specifying what 'Status on Margin/Futures' entails (e.g., enabled/disabled states, positions, or risk metrics).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage (baseline 3), the description adds valuable semantic context beyond the schema by explaining the behavioral consequence of omitting the email parameter (returns all sub-accounts). This compensates for the schema's lack of behavioral semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (Sub-account's Status) and scope (Margin/Futures, For Master Account). It specifies the domain precisely, though it could better differentiate from sibling tools like get_sapi_v1_sub_account_futures_account or get_sapi_v1_sub_account_margin_account that also retrieve sub-account information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit guidance by stating 'If no email sent, all sub-accounts' information will be returned,' which clarifies the optional parameter behavior. However, it lacks explicit when-to-use guidance comparing this to alternative sub-account status tools or prerequisites for Master Account access.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_sub_account_api_ip_restriction (grade B)
Get IP Restriction for a Sub-account API Key (For Master Account) — Weight(UID): 3000 Returns: { ipRestrict: string, ipList: string[], updateTime: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| subAccountApiKey | Yes | query parameter: subAccountApiKey (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by documenting the Weight(UID): 3000 rate limit and the return structure ({ ipRestrict, ipList... }), but fails to explicitly state this is a read-only operation or detail error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence efficiently packs the purpose, rate limit weight, and return structure without filler. However, the density makes it slightly less readable than structured multi-sentence descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter retrieval tool with no output schema, the description compensates adequately by documenting the return structure inline. However, it lacks completeness regarding error handling, pagination, or detailed authorization requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, signature, timestamp, etc. are all documented), establishing a baseline of 3. The description itself does not add additional parameter semantics beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Get), resource (IP Restriction), and target (Sub-account API Key). The '(For Master Account)' phrase helps distinguish authorization scope, though it could be more explicit about what master account access implies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While '(For Master Account)' hints at a permission requirement, there is no explicit guidance on when to use this GET endpoint versus the sibling POST (post_sapi_v2_sub_account_sub_account_api_ip_restriction) or DELETE (delete_sapi_v1_sub_account_sub_account_api_ip_restriction_ip) alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_sub_transfer_history (grade B)
Sub-account Spot Asset Transfer History (For Master Account) — - fromEmail and toEmail cannot be sent at the same time. - Return fromEmail equal master account email by default. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 1 | |
| endTime | No | UTC timestamp in ms | |
| toEmail | No | Sub-account email | |
| fromEmail | No | Sub-account email | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
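Since the schema cannot express the fromEmail/toEmail exclusivity noted in the description, a caller has to enforce it before signing. A sketch under the same assumptions as the earlier helper (hypothetical signed_get, path inferred from the tool name):

```python
# Sketch: enforce the documented fromEmail/toEmail exclusivity client-side
# before signing. signed_get is the hypothetical helper sketched earlier.
def sub_transfer_history(from_email: str | None = None,
                         to_email: str | None = None) -> bytes:
    if from_email is not None and to_email is not None:
        raise ValueError("fromEmail and toEmail cannot be sent at the same time")
    params = {}
    if from_email is not None:
        params["fromEmail"] = from_email
    if to_email is not None:
        params["toEmail"] = to_email
    # with neither set, the API defaults fromEmail to the master account
    return signed_get("/sapi/v1/sub-account/sub/transfer/history", params)
```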
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight ('Weight(IP): 1') and default parameter behavior, but fails to describe the return format, pagination behavior, or error conditions for this 9-parameter query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief but awkwardly structured with em-dashes serving as bullet points. All sentences contain essential information (purpose, constraints, rate limit), but the formatting could be cleaner for better readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description should have explained the return payload structure and pagination behavior. While it covers the Master Account context and parameter relationships, the lack of return value documentation leaves a significant gap for a history-query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage documenting individual fields, the description adds crucial semantic constraints not captured in the schema: the mutual exclusivity of fromEmail/toEmail and the default value logic for fromEmail. This adds meaningful value beyond the structured definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as querying 'Sub-account Spot Asset Transfer History' and specifies the target user as 'Master Account'. The 'Spot Asset' qualifier helps distinguish it from sibling tools handling futures or universal transfers, though it could explicitly contrast with tools like get_sapi_v1_sub_account_universal_transfer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical parameter constraints (fromEmail and toEmail are mutually exclusive) and notes the default fromEmail behavior. However, it lacks explicit guidance on when to choose this tool versus similar siblings like get_sapi_v1_sub_account_transfer_sub_user_history or prerequisites beyond the parenthetical 'For Master Account'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_transaction_statistics (grade C)
Query Sub-account Transaction Statistics (For Master Account) — Query Sub-account Transaction statistics (For Master Account). Weight(UID): 60 Returns: { recent30BtcTotal: string, recent30BtcFuturesTotal: string, recent30BtcMarginTotal: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | query parameter: email (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limit weight ('Weight(UID): 60') and provides a partial return value schema showing 30-day BTC totals. However, it lacks explicit safety profile (read-only status), error behavior, or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description wastefully repeats the same phrase twice ('Query Sub-account Transaction Statistics... — Query Sub-account Transaction statistics...'). While it efficiently packs rate limit and return value information into one line, the repetition significantly degrades structural quality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter query tool with 100% schema coverage and no output schema, the description adequately compensates by listing key return fields (recent30BtcTotal, etc.). However, it fails to explain what these statistics represent (e.g., volume, count, value) or the time period granularity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description adds no parameter semantics beyond the schema (e.g., no explanation of signature generation format, email validation, or timestamp precision requirements).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool queries sub-account transaction statistics and specifies 'For Master Account', which distinguishes it from other sub-account tools. However, it suffers from tautological repetition ('Query... — Query...') that restates the name without adding specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The only usage cue is '(For Master Account)', implying it requires master account privileges. There is no guidance on when to use this versus sibling tools like get_sapi_v1_sub_account_list or get_sapi_v1_sub_account_spot_summary, nor prerequisites for the email parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_transfer_sub_user_history (grade A)
Sub-account Transfer History (For Sub-account) — - If type is not sent, the records of type 2: transfer out will be returned by default. - If startTime and endTime are not sent, the recent 30-day data will be returned. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | * `1` - transfer in * `2` - transfer out | |
| asset | No | query parameter: asset (string) | |
| limit | No | Default 500; max 1000. | |
| endTime | No | UTC timestamp in ms | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
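One way to avoid surprises from the server-side defaults (type 2 when omitted, last 30 days when no time range is sent) is to pass them explicitly. A sketch under the same assumptions as before (hypothetical signed_get helper, path inferred from the tool name):

```python
# Sketch: pass the documented defaults explicitly (type 2 = transfer out,
# last-30-day window) instead of relying on server-side behavior.
import time

DAY_MS = 24 * 60 * 60 * 1000

def transfer_history(transfer_type: int = 2,
                     start_ms: int | None = None,
                     end_ms: int | None = None) -> bytes:
    now_ms = int(time.time() * 1000)
    params = {
        "type": transfer_type,  # 1 = transfer in, 2 = transfer out
        "startTime": start_ms if start_ms is not None else now_ms - 30 * DAY_MS,
        "endTime": end_ms if end_ms is not None else now_ms,
    }
    return signed_get("/sapi/v1/sub-account/transfer/subUserHistory", params)
```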
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents the IP rate limit weight (1) and parameter defaults, but omits response format details, pagination behavior, or authentication requirements beyond the signature parameter. It confirms this is a read operation through context but doesn't explicitly state idempotency or safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the purpose front-loaded, followed by bulleted behavioral constraints. The 'Weight(IP): 1' metric is concisely included. Minor formatting awkwardness with the em-dash and hyphen-bullet structure prevents a perfect score, but every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter query tool with 100% schema coverage and no output schema, the description adequately covers the critical invocation defaults. However, given the lack of annotations and output schema, it should ideally mention the authentication context (sub-account API key requirements) or provide hints about the returned data structure (e.g., list of transfers with tx IDs).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (establishing a baseline of 3), the description adds crucial semantic information not present in the schema: the default value for 'type' (transfer out/2) and the default 30-day lookback window when time parameters are omitted. This prevents common invocation errors.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (Sub-account Transfer History) and scope (For Sub-account), using specific domain terminology. It distinguishes from sibling tools like 'get_sapi_v1_sub_account_sub_transfer_history' by explicitly noting this is for sub-account users rather than master accounts, though it could more explicitly contrast the two use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides valuable default behavior guidance (type defaults to 2/transfer out, 30-day window if no time range specified), which helps agents invoke the tool correctly. However, it lacks explicit guidance on when to choose this tool over the master-account transfer history endpoint or other similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_sub_account_universal_transfer (grade A)
Universal Transfer History (For Master Account) — - fromEmail and toEmail cannot be sent at the same time. - Return fromEmail equal master account email by default. - The query time period must be less then 30 days. - If startTime and endTime not sent, return records of the last 30 days by default. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 500, Max 500 | |
| endTime | No | UTC timestamp in ms | |
| toEmail | No | Sub-account email | |
| fromEmail | No | Sub-account email | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| clientTranId | No | query parameter: clientTranId (string) |
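The sub-30-day window constraint means an audit over a longer range must be walked in slices. A sketch, reusing the hypothetical signed_get helper with a path inferred from the tool name:

```python
# Sketch: walk a long range in windows that respect the cap ("less then
# 30 days" in the source text, so shave a millisecond in case the server
# is strict). signed_get is the hypothetical helper sketched earlier.
DAY_MS = 24 * 60 * 60 * 1000
WINDOW_MS = 30 * DAY_MS - 1  # strictly under 30 days

def universal_transfer_history(start_ms: int, end_ms: int) -> list[bytes]:
    pages = []
    cursor = start_ms
    while cursor < end_ms:
        chunk_end = min(cursor + WINDOW_MS, end_ms)
        pages.append(signed_get("/sapi/v1/sub-account/universalTransfer",
                                {"startTime": cursor, "endTime": chunk_end}))
        cursor = chunk_end
    return pages
```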
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses rate limiting ('Weight(IP): 1') and default return behavior ('Return fromEmail equal master account email by default'), but does not explicitly confirm read-only safety, required permissions, or error response patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with the purpose statement followed by constraint bullets. The formatting is slightly awkward (using '— -' before bullets), but every sentence provides essential constraint or default information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 10 parameters with full schema coverage and no output schema, the description provides appropriate completeness by documenting critical business logic constraints (time limits, email exclusivity) and rate limiting that would not be evident from the schema alone. It adequately supports tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds valuable constraint semantics beyond the schema: the mutual exclusivity of fromEmail/toEmail, the 30-day time window limitation, and default behaviors for omitted time parameters. This compensates adequately for the schema's lack of constraint documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Universal Transfer History (For Master Account)', providing a specific verb (get/history), resource (universal transfers), and scope (master account). This effectively distinguishes it from siblings like get_sapi_v1_sub_account_sub_transfer_history or get_sapi_v1_sub_account_futures_internal_transfer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit constraints defining when the tool can be used: 'fromEmail and toEmail cannot be sent at the same time', 'query time period must be less then 30 days', and explains defaults if time params are omitted. However, it lacks explicit comparison to sibling alternatives (e.g., when to use this vs. sub_transfer_history).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v1_system_status (grade B)
[DISCOVERY] System Status (System) — Fetch system status. Weight(IP): 1 Returns: { status: number, msg: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
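Because the endpoint is unsigned and parameter-less, a health probe reduces to a plain GET. A sketch mapping the documented { status, msg } payload to a boolean; the status semantics (0 normal, 1 maintenance) follow Binance's public docs and should be treated as an assumption here:

```python
# Sketch: plain unsigned GET against the system-status endpoint.
import json
import urllib.request

def system_ok() -> bool:
    url = "https://api.binance.com/sapi/v1/system/status"
    with urllib.request.urlopen(url) as resp:
        body = json.loads(resp.read())
    return body.get("status") == 0  # assumption: 0 = normal, 1 = maintenance
```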
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden. It adds valuable rate-limiting context ('Weight(IP): 1') and documents the return structure ('{ status: number, msg: string }') since no output schema exists. However, it omits authentication requirements, error conditions, and status code semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence packs essential information but is cluttered with metadata tags ('[DISCOVERY]', '(System)', 'Weight(IP): 1') that reduce readability. The return value documentation is useful but cramped at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameter-less diagnostic endpoint, the description adequately compensates for the missing output schema by specifying the return structure. It could be improved by noting authentication requirements or interpreting status code values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema coverage, establishing a baseline of 4. No parameter description is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clear verb ('Fetch') and resource ('system status'), and the '(System)' parenthetical helps distinguish from sibling 'get_sapi_v1_account_status'. However, it doesn't differentiate from similar diagnostic endpoints like 'get_api_v3_ping' or 'get_api_v3_time'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to check system status versus account status, or when to use this instead of the ping/time endpoints. No prerequisites or alternative tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_eth_staking_account (grade C)
ETH Staking account V2(USER_DATA) — Weight(IP): 150 Returns: { holdingInETH: string, holdings: { wbethAmount: string, bethAmount: string }, thirtyDaysProfitInETH: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the IP rate limit weight (150) and the return data structure, but fails to indicate if the operation is read-only, if data is real-time, or potential error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and information-dense, front-loading the version identifier and weight metadata before listing return fields. While slightly cryptic, it wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description partially compensates by documenting key return fields (holdingInETH, holdings structure). However, it omits safety classifications and versioning rationale necessary for a complete picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow). The description adds no parameter-specific context, meeting the baseline expectation when the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource (ETH Staking account V2) and implies retrieval via the return structure, but lacks a specific verb and fails to differentiate from sibling staking tools (e.g., how V2 differs from V1 history endpoints or the staking action endpoint).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like the ETH staking history endpoints or quota endpoint. No prerequisites or conditions are mentioned beyond the implicit USER_DATA tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_borrow_history (grade A)
Borrow - Get Flexible Loan Borrow History (USER_DATA) — - If startTime and endTime are not sent, the recent 90-day data will be returned. - The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { total: number, rows: { loanCoin: string, initialLoanAmount: string, collateralCoin: string, initialCollateralAmount: string, borrowTime: number, status: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
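The current/limit parameters plus the total count in the documented return shape suggest a straightforward drain loop for fetching the full history. A sketch, using the hypothetical signed_get helper and a path inferred from the tool name:

```python
# Sketch: drain the paginated history using current/limit and the `total`
# count from the documented return shape.
import json

def all_borrow_rows(limit: int = 100) -> list[dict]:
    rows, page = [], 1
    while True:
        body = json.loads(signed_get("/sapi/v2/loan/flexible/borrow/history",
                                     {"current": page, "limit": limit}))
        batch = body.get("rows", [])
        rows.extend(batch)
        if not batch or len(rows) >= body.get("total", 0):
            break
        page += 1
    return rows
```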
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (400), the default time range behavior, the maximum interval constraint, and the return object structure with field types. The 'USER_DATA' tag indicates authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information density is good, but the formatting is awkward, with a redundant 'Borrow - Get... Borrow History' prefix and messy '— -' punctuation. Embedding the JSON return structure inline reduces readability, though it provides necessary output schema information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter read-only history endpoint without structured output schema, the description adequately compensates by providing inline return type documentation and critical query constraints (time limits, pagination hints via limit/current in schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context for the startTime/endTime parameters, covering the 90-day default lookback and the 180-day maximum interval, elevating it above the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Flexible Loan Borrow History' with specific resource identification. However, it does not differentiate from sibling tools like 'get_sapi_v1_loan_borrow_history' (v1 vs v2) or 'get_sapi_v2_loan_flexible_ongoing_orders' (history vs active orders), which could cause selection ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit time window constraints (90-day default, 180-day max interval) which guides effective usage. However, lacks guidance on when to use this versus the v1 loan history endpoint or the flexible ongoing orders endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_collateral_data (grade B)
Get Flexible Loan Collateral Assets Data (USER_DATA) — Get LTV information and collateral limit of flexible loan's collateral assets. The collateral limit is shown in USD value. Weight(IP): 400 Returns: { rows: { collateralCoin: string, initialLTV: string, marginCallLTV: string, liquidationLTV: string, maxLimit: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
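All numeric fields in the documented return shape arrive as strings, so precision-sensitive comparisons (e.g., against liquidationLTV) are safer with Decimal than float. A sketch over the row shape quoted above; collateral_headroom is a hypothetical helper, and signed_get is the one sketched earlier:

```python
# Sketch: parse the string-typed LTV fields with Decimal to keep precision
# when ranking collateral assets by distance to liquidation.
import json
from decimal import Decimal

def collateral_headroom(row: dict) -> Decimal:
    # distance between the initial LTV and the liquidation LTV for one asset
    return Decimal(row["liquidationLTV"]) - Decimal(row["initialLTV"])

# rows = json.loads(signed_get("/sapi/v2/loan/flexible/collateral/data", {}))["rows"]
# tightest = min(rows, key=collateral_headroom)
```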
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the IP weight/rate limit (400) and documents the complete return structure including row fields and pagination (total). It does not, however, clarify error handling or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the operation type, but the return structure documentation is crammed into a single JSON blob at the end without formatting, reducing readability. Every sentence contains information, though the structure could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates effectively by detailing the exact return structure including all row fields (collateralCoin, initialLTV, etc.) and the total count. It appropriately covers return values for this 4-parameter data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'collateral assets' which aligns with the collateralCoin parameter, but adds no semantic detail about parameter formats, valid ranges, or interdependencies beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves LTV information and collateral limits for flexible loan collateral assets, distinguishing it from sibling VIP and v1 loan endpoints through the 'flexible loan' qualifier. However, it doesn't explicitly contrast usage scenarios with similar siblings like get_sapi_v1_loan_collateral_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes '(USER_DATA)' indicating authentication requirements, but provides no explicit guidance on when to use this v2 flexible endpoint versus the v1 collateral data endpoint or VIP alternatives. No prerequisites or error conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_loanable_data (grade B)
Get Flexible Loan Assets Data (USER_DATA) — Get interest rate and borrow limit of flexible loanable assets. The borrow limit is shown in USD value. Weight(IP): 400 Returns: { rows: { loanCoin: string, flexibleInterestRate: string, flexibleMinLimit: string, flexibleMaxLimit: string }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses rate limiting (Weight(IP): 400) and return structure via inline JSON schema. However, it omits critical behavioral traits like confirming this is read-only/safe, pagination details despite returning 'total' and 'rows', and error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with purpose, but the inline JSON return schema creates visual clutter and reduces readability. While no sentences are wasted, the structure could better separate behavioral metadata (weight) from return type documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter read operation, the description is reasonably complete. It compensates for the lack of structured output schema by inline-documenting the return type (rows array structure, total count) and specifies USD denomination for borrow limits. Missing only pagination instructions and explicit read-only confirmation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description field adds no parameter-specific context, relying entirely on the schema which has 100% description coverage. Since the schema already documents loanCoin, timestamp, recvWindow constraints, and signature requirements fully, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'interest rate and borrow limit of flexible loanable assets' with specific mention of 'flexible' to distinguish from standard loans (sibling get_sapi_v1_loan_loanable_data). The verb 'Get' and resource 'Flexible Loan Assets Data' are explicit, though it could better contrast with v1 loan tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_sapi_v1_loan_loanable_data or when querying specific coins is appropriate. No prerequisites or exclusions are mentioned beyond the implicit USER_DATA authentication requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_ltv_adjustment_history (grade C)
Adjust LTV - Get Flexible Loan LTV Adjustment History (USER_DATA) — - If startTime and endTime are not sent, the recent 90-day data will be returned. - The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { rows: { loanCoin: string, collateralCoin: string, direction: string, collateralAmount: string, preLTV: string, afterLTV: string, ... }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds reasonably well. It specifies the IP weight (400), indicates USER_DATA scope implying authentication requirements, explains the default 90-day lookback behavior, and documents the return structure including the rows array schema and total count field.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description packs necessary information but uses inconsistent formatting (em-dash, bullet dashes, inline JSON) that reduces readability. The 'Adjust LTV' prefix appears to be copy-paste residue from the POST variant, creating initial confusion without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the missing output schema by inline-documenting the return structure (rows array with loanCoin, collateralCoin, direction fields). However, it lacks domain context explaining what 'flexible loan' or LTV means, and doesn't clarify pagination behavior beyond the parameter definitions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although the input schema has 100% description coverage, the description adds valuable behavioral context about startTime and endTime interactions (90-day default when omitted, 180-day max interval) that pure schema descriptions cannot express. This goes beyond the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'Adjust LTV', which misleadingly implies a mutation tool, contradicting the 'get_' prefix and the 'Get... History' phrase that follows. It fails to distinguish this tool from the sibling POST tool (post_sapi_v2_loan_flexible_adjust_ltv) or the v1 equivalent (get_sapi_v1_loan_ltv_adjustment_history), leaving ambiguity about which flexible loan history endpoint to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description provides time range constraints (90-day default, 180-day max interval), it offers no guidance on when to use this history tool versus other loan history siblings like get_sapi_v2_loan_flexible_borrow_history or get_sapi_v1_loan_ltv_adjustment_history. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_ongoing_orders (grade B)
Borrow - Get Flexible Loan Ongoing Orders (USER_DATA) — Weight(IP): 300 Returns: { total: number, rows: { loanCoin: string, totalDebt: string, collateralCoin: string, collateralAmount: string, currentLTV: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully documents the rate limit ('Weight(IP): 300') and inline return structure (total/rows schema), which is valuable since no output schema exists. However, it fails to disclose whether the operation is read-only, idempotent, or cached, and omits error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the essential operation name and category. It efficiently packs the endpoint classification, rate limit, and return structure into a single sentence without redundant phrasing. However, the dense formatting (using em-dashes and inline JSON) slightly hinders readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and structured output schema, the description compensates partially by documenting the return structure inline. Parameter documentation is complete via the schema. However, the description lacks context on pagination behavior (despite having 'current' and 'limit' params), filtering logic ('loanCoin'/'collateralCoin' interactions), and the meaning of fields like 'currentLTV' (Loan-to-Value).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (all 7 parameters documented with types, defaults, and constraints). The description adds no additional parameter semantics beyond the schema, but the baseline score of 3 is appropriate here since the schema is fully self-documenting. No parameter examples or format guidance are provided in the description text itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Get'), resource ('Flexible Loan Ongoing Orders'), and distinguishes from siblings using specific domain terms ('Flexible' vs VIP/standard, 'Ongoing' vs history). The 'Borrow' category prefix and 'USER_DATA' classification provide additional context. However, it could more explicitly differentiate from the v1 endpoint (get_sapi_v1_loan_ongoing_orders) or the borrow history endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides technical metadata (Weight(IP): 300, USER_DATA) but lacks explicit usage guidance. It does not state when to use this tool versus alternatives like get_sapi_v2_loan_flexible_borrow_history (ongoing vs historical) or get_sapi_v1_loan_vip_ongoing_orders (flexible vs VIP), nor does it mention prerequisites like authentication requirements beyond the implicit 'USER_DATA' label.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_loan_flexible_repay_history (grade A)
Repay - Get Flexible Loan Repayment History (USER_DATA) — - If startTime and endTime are not sent, the recent 90-day data will be returned. - The max interval between startTime and endTime is 180 days. Weight(IP): 400 Returns: { rows: { loanCoin: string, repayAmount: string, collateralCoin: string, collateralReturn: string, repayStatus: string, repayTime: number }[], total: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Default 500; max 1000. | |
| current | No | Current querying page. Start from 1. Default:1 | |
| endTime | No | UTC timestamp in ms | |
| loanCoin | No | Coin loaned | |
| signature | Yes | Signature | |
| startTime | No | UTC timestamp in ms | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| collateralCoin | No | Coin used as collateral |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and excels by disclosing the rate limit (Weight(IP): 400) and providing a complete inline output schema documenting the rows array structure and total count.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
All information is valuable, but formatting is awkward with inconsistent dash usage and the ambiguous 'Repay -' prefix. The single dense paragraph format hinders readability despite containing necessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of structured output schema and annotations, the description comprehensively covers pagination behavior, time range rules, rate limiting, and return data structure—sufficient for an AI agent to invoke and handle responses correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds crucial semantic context about how startTime and endTime interact (90-day default behavior, 180-day maximum range) that is not evident from individual parameter descriptions alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (Flexible Loan Repayment History), with 'Flexible' distinguishing it from regular loan history siblings (e.g., get_sapi_v1_loan_repay_history) and 'Repayment' distinguishing from borrow history siblings (e.g., get_sapi_v2_loan_flexible_borrow_history).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit time window constraints (90-day default when no times sent, 180-day max interval) and notes the USER_DATA permission requirement. Does not explicitly name alternative tools, but the constraints are critical for correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v2_portfolio_collateral_rate (grade C)
Portfolio Margin Pro Tiered Collateral Rate(USER_DATA) — Portfolio Margin PRO Tiered Collateral Rate Weight(IP): 50
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit weight (Weight(IP): 50) and authentication scope (USER_DATA), which is valuable. However, it lacks information about the response structure, pagination, or what the 'tiered' collateral rate calculation entails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive ('Portfolio Margin Pro' vs 'Portfolio Margin PRO'), and the em-dash construction makes it dense. While not verbose, it assumes significant domain knowledge without earning each term's place through explanation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks output schema documentation and doesn't describe return values in the description. For a data retrieval tool with no annotations and no output schema, the description should explain what data structure or collateral rate information is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow all documented). The description adds no parameter-specific context beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Portfolio Margin Pro Tiered Collateral Rate) and scope (USER_DATA), distinguishing it from unrelated sibling tools. However, it uses dense technical jargon without explaining business context, and fails to differentiate from the v1 equivalent (get_sapi_v1_portfolio_collateral_rate) also present in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (such as the v1 version or other portfolio margin endpoints). The USER_DATA tag implies authentication requirements but doesn't explicitly state prerequisites or use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
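Every signed endpoint in this catalogue shares the signature/timestamp/recvWindow trio that the evaluations repeatedly flag as under-explained. As a minimal sketch, assuming the standard Binance HMAC-SHA256 scheme over the url-encoded query string (the MCP wrapper's exact transport may differ):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_params(params: dict, api_secret: str) -> dict:
    """Attach timestamp and signature for a signed endpoint."""
    payload = dict(params)
    payload["timestamp"] = int(time.time() * 1000)  # UTC timestamp in ms
    payload.setdefault("recvWindow", 5000)          # must not exceed 60000
    query = urlencode(payload)
    # HMAC-SHA256 over the url-encoded query string, hex-encoded
    payload["signature"] = hmac.new(
        api_secret.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    return payload

# The collateral-rate call needs only the signed trio:
# sign_params({}, "my-api-secret")
```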
get_sapi_v2_sub_account_futures_accountBInspect
Detail on Sub-account's Futures Account V2 (For Master Account) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| futuresType | Yes | * `1` - USDT Margined Futures * `2` - COIN Margined Futures |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It provides valuable rate-limiting information ('Weight(IP): 1'), but lacks other behavioral context such as caching characteristics, pagination behavior, or whether the response includes sensitive data requiring specific handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the core purpose, though 'Detail on' is grammatically awkward (prefer 'Get details of' or 'Retrieve'). The rate limit weight is appended efficiently with an em-dash.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should indicate what data is returned (balances, positions, margin info, etc.). It provides no hint of the response structure, leaving the agent uncertain about what information this endpoint actually yields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (including futuresType enum mapping and recvWindow constraints), so the baseline is 3. The description adds no additional parameter semantics beyond the schema, but the schema is sufficiently comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Sub-account's Futures Account V2) and required permission level (For Master Account), distinguishing it from V1 endpoints and non-master contexts. However, it uses the noun phrase 'Detail on' rather than an explicit verb like 'Retrieve', which slightly weakens clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'For Master Account' constraint implies usage scope, but the description fails to specify when to use this detailed endpoint versus the sibling summary endpoint (get_sapi_v2_sub_account_futures_account_summary) or when to prefer V2 over V1.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
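The futuresType enum (1 = USDT-margined, 2 = COIN-margined) recurs across the sub-account futures tools. A hypothetical helper, with illustrative names only, building the business parameters for this endpoint (signed trio omitted):

```python
# Enum values from the futuresType row: 1 = USDT-margined, 2 = COIN-margined.
FUTURES_TYPE = {"USDT": 1, "COIN": 2}

def futures_account_params(email: str, margin_asset: str) -> dict:
    """Business parameters for the detail endpoint (signed trio omitted)."""
    return {
        "email": email,                            # sub-account email (required)
        "futuresType": FUTURES_TYPE[margin_asset],
    }

# futures_account_params("sub@example.com", "USDT")
# -> {'email': 'sub@example.com', 'futuresType': 1}
```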
get_sapi_v2_sub_account_futures_account_summaryBInspect
Summary of Sub-account's Futures Account V2 (For Master Account) — Weight(IP): 10
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Default 1 | |
| limit | No | Default 10, Max 20 | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| futuresType | Yes | * `1` - USDT Margined Futures * `2` - COIN Margined Futures |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It includes rate limit information ('Weight(IP): 10') which is valuable behavioral context, but omits safety characteristics (read-only vs mutation), error conditions, or pagination behavior details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse single-line description. While it avoids fluff and front-loads the purpose, it is excessively minimal for the tool's complexity, omitting critical context that would aid agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters, complex domain (futures/sub-accounts), no output schema, and no annotations, the description should explain return values and data scope. It provides none of this, leaving the agent unaware of what summary data to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds no parameter-specific guidance beyond what the schema already documents (e.g., no additional context on futuresType values or signature generation).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the specific resource (Sub-account's Futures Account Summary), API version (V2), and authorization context (For Master Account), distinguishing it from V1 variants and detailed account endpoints. However, it lacks specificity about what data the summary contains (e.g., balances, positions, margin).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides only the authorization context '(For Master Account)' but offers no guidance on when to prefer this V2 endpoint over V1, when to use 'summary' versus the detailed 'account' endpoint, or prerequisites for calling it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
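Since the summary endpoint paginates (page defaults to 1, limit defaults to 10 with a max of 20), an agent would typically loop until an empty page. A sketch, where call_tool stands in for the MCP client invocation and the subAccountList field name is an assumption the definition does not confirm:

```python
def iter_summary_pages(call_tool, futures_type: int, limit: int = 20):
    """Yield rows from every page of the V2 futures account summary."""
    assert 1 <= limit <= 20, "limit is capped at 20 by the schema"
    page = 1
    while True:
        resp = call_tool(page=page, limit=limit, futuresType=futures_type)
        items = resp.get("subAccountList", [])  # field name is an assumption
        if not items:
            break
        yield from items
        page += 1
```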
get_sapi_v2_sub_account_futures_position_riskBInspect
Futures Position-Risk of Sub-account V2 (For Master Account) — Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| futuresType | Yes | * `1` - USDT Margined Futures * `2` - COIN Margined Futures |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Provides rate limit weight ('Weight(IP): 1'), which is useful behavioral metadata. However, with no annotations provided, the description fails to disclose safety profile (read-only vs mutation), return data structure, or specific risk metrics included in the response. The 'get' prefix implies read-only but this should be explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with no redundant sentences. However, 'Weight(IP): 1' is API-internal jargon that provides limited value to an agent selecting tools, and the description could front-load behavioral context instead.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a financial risk tool with 5 parameters and no output schema. Missing: explanation of what risk data is returned (liquidation price, margin ratio, unrealized PnL), error conditions, or how this relates to the broader sub-account management workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear definitions (including futuresType enum values). The description adds no parameter-specific guidance beyond the schema, but this meets the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the specific resource (futures position-risk), target entity (sub-account), API version (V2), and authorization context (For Master Account). Distinguishes from the V1 sibling endpoint. Could improve by explaining what 'position-risk' encompasses (e.g., liquidation risk, margin ratios).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage constraints through '(For Master Account)', indicating it requires master account privileges. However, lacks explicit guidance on when to prefer this over V1 or other sub-account endpoints, and doesn't state prerequisites (e.g., sub-account must exist and have futures enabled).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v3_sub_account_assetsBInspect
[DISCOVERY] Sub-account Assets (For Master Account) — Fetch sub-account assets Weight(IP): 1 Returns: { balances: { asset: string, free: number, locked: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Sub-account email | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit 'Weight(IP): 1' and provides the return structure inline ('balances: { asset: string, free: number, locked: number }[]'), but lacks explicit safety classification (read-only status) or error behavior disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description leads with a '[DISCOVERY]' metadata tag that consumes space without adding functional value. While the single sentence packs multiple data points (purpose, weight, return type), the formatting is cluttered and could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the missing output schema by specifying the return structure. However, given the complexity of sub-account management and the existence of a v4 variant, it lacks sufficient context to guide correct tool selection without external knowledge.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all four parameters (email, timestamp, signature, recvWindow) fully documented in the schema. The description adds no additional parameter semantics, which is acceptable given the complete schema coverage, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch sub-account assets' with specific scope 'For Master Account', identifying the verb and resource. However, it fails to distinguish from the sibling 'get_sapi_v4_sub_account_assets', leaving ambiguity about which version to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, particularly the v4 variant. There are no prerequisites mentioned beyond the implied master account requirement, and no exclusion criteria or workflow context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sapi_v4_sub_account_assetsBInspect
[DISCOVERY] Query Sub-account Assets (For Master Account) — Fetch sub-account assets Weight(UID): 60 Returns: { balances: { asset: string, free: string, locked: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | query parameter: email (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context including rate limit weight ('Weight(UID): 60') and return structure ('Returns: { balances... }'), but omits safety classification (read-only status), authentication requirements, and error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured. The '[DISCOVERY]' metadata tag appears extraneous, and the dense concatenation of purpose, weight, and return type without clear separation reduces readability. Every element is relevant, but formatting could improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by documenting the return structure (balances array with asset/free/locked fields). However, it omits important context for a financial API endpoint such as error scenarios, pagination behavior, or specific permission requirements beyond the 'Master Account' hint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (email, timestamp, signature, recvWindow all documented). The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline expectation for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Query', 'Fetch') and resource ('Sub-account Assets'), and specifies the context '(For Master Account)'. However, it does not explicitly differentiate from the similar sibling tool 'get_sapi_v3_sub_account_assets', leaving ambiguity about version selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context with '(For Master Account)', suggesting master account permissions are required. However, it provides no explicit guidance on when to use this tool versus the v3 alternative or other sub-account asset endpoints, and lacks prerequisites or error condition guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
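One concrete difference the two definitions do surface: v3 types free/locked as numbers while v4 types them as strings. A consumer accepting either version might normalize both to Decimal; a sketch against the documented return shapes only:

```python
from decimal import Decimal

def normalize_balances(resp: dict) -> dict:
    """Map asset -> (free, locked) as Decimals from a v3 or v4 response."""
    out = {}
    for b in resp.get("balances", []):
        # str() bridges v3 numeric and v4 string encodings before Decimal,
        # avoiding float rounding artifacts either way.
        out[b["asset"]] = (Decimal(str(b["free"])), Decimal(str(b["locked"])))
    return out
```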
post_api_v3_orderAInspect
New Order (TRADE) — Send in a new order. - LIMIT_MAKER are LIMIT orders that will be rejected if they would immediately match and trade as a taker. - STOP_LOSS and TAKE_PROFIT will execute a MARKET order when the stopPrice is reached. - Any LIMIT or LIMIT_MAKER type order can be made an iceberg order by sending an icebergQty. - Any order with an icebergQty MUST have timeInForce set to GTC. - MARKET orders using quantity specifies how much a user wants to buy or sell based on the market price. - MARKET orders using quoteOrderQty specifies the amount the user want
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| type | Yes | Order type | |
| price | No | Order price | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | No | Order quantity | |
| signature | Yes | Signature | |
| stopPrice | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| timestamp | Yes | UTC timestamp in ms | |
| icebergQty | No | Used with LIMIT, STOP_LOSS_LIMIT, and TAKE_PROFIT_LIMIT to create an iceberg order. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| strategyId | No | query parameter: strategyId (number) | |
| timeInForce | No | Order time in force | |
| strategyType | No | The value cannot be less than 1000000. | |
| quoteOrderQty | No | Quote quantity | |
| trailingDelta | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| selfTradePreventionMode | No | The allowed enums is dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully explains execution behaviors for specific order types (e.g., LIMIT_MAKER rejection conditions, iceberg order constraints, MARKET order quantity behavior). However, it omits critical safety disclosures for a financial operation—no mention of idempotency, balance requirements, fees, or irrevocability of submitted orders.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but poorly structured with inline markdown bullet points and ends abruptly mid-sentence ('specifies the amount the user want'). The truncation significantly impacts readability and suggests missing critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 18-parameter trading tool with no output schema, the description covers order type mechanics thoroughly but is incomplete due to truncation. It lacks explanation of response values, error conditions, or financial risk disclosures that would be expected for a production trading endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage, the description adds significant value by explaining parameter relationships and constraints not captured in the schema, such as the mandatory relationship between icebergQty and timeInForce (MUST be GTC), and how stopPrice triggers MARKET orders for STOP_LOSS types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Send in a new order' with the '(TRADE)' tag indicating it's a live trading operation. It distinguishes itself from siblings like post_api_v3_order_test by explaining real order execution mechanics (LIMIT_MAKER rejection rules, STOP_LOSS triggering), though it could explicitly mention that this is not for testing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit PREREQUISITE section instructing users to first call exchange info to discover valid trading pairs. This provides clear guidance on prerequisites, though it lacks explicit 'when not to use' guidance (e.g., mentioning the test endpoint alternative for safe testing).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
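The parameter interactions the description does capture (icebergQty requires timeInForce=GTC; MARKET orders take quantity or quoteOrderQty) lend themselves to a pre-flight check. A sketch covering only the rules quoted above, not the exchange's full filter set:

```python
def validate_order(params: dict) -> list:
    """Return human-readable violations of the documented interactions."""
    errors = []
    if "icebergQty" in params and params.get("timeInForce") != "GTC":
        errors.append("icebergQty requires timeInForce=GTC")
    if params.get("type") == "MARKET":
        # MARKET orders specify exactly one of quantity / quoteOrderQty
        if not (("quantity" in params) ^ ("quoteOrderQty" in params)):
            errors.append("MARKET orders take exactly one of quantity/quoteOrderQty")
    elif params.get("type") == "LIMIT" and "price" not in params:
        errors.append("LIMIT orders require price")
    return errors
```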
post_api_v3_order_cancel_replaceAInspect
Cancel an Existing Order and Send a New Order (Trade) — Cancels an existing order and places a new order on the same symbol. Filters and Order Count are evaluated before the processing of the cancellation and order placement occurs. A new order that was not attempted (i.e. when newOrderResult: NOT_ATTEMPTED), will still increase the order count by 1. Weight(IP): 1 Returns: { cancelResult: string, newOrderResult: string, cancelResponse: { symbol: string, origClientOrderId: string, orderId: number, orderListId: number, clientOrderId: string, price: string, ... }, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| type | Yes | Order type | |
| price | No | Order price | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | No | Order quantity | |
| signature | Yes | Signature | |
| stopPrice | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| timestamp | Yes | UTC timestamp in ms | |
| icebergQty | No | Used with LIMIT, STOP_LOSS_LIMIT, and TAKE_PROFIT_LIMIT to create an iceberg order. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| strategyId | No | query parameter: strategyId (number) | |
| timeInForce | No | Order time in force | |
| strategyType | No | The value cannot be less than 1000000. | |
| cancelOrderId | No | Either the cancelOrigClientOrderId or cancelOrderId must be provided. If both are provided, cancelOrderId takes precedence. | |
| quoteOrderQty | No | Quote quantity | |
| trailingDelta | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| cancelReplaceMode | Yes | - `STOP_ON_FAILURE` If the cancel request fails, the new order placement will not be attempted. - `ALLOW_FAILURES` If new order placement will be attempted even if cancel request fails. | |
| cancelRestrictions | No | query parameter: cancelRestrictions ("ONLY_NEW" | "ONLY_PARTIALLY_FILLED") | |
| cancelNewClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| cancelOrigClientOrderId | No | Either the cancelOrigClientOrderId or cancelOrderId must be provided. If both are provided, cancelOrderId takes precedence. | |
| selfTradePreventionMode | No | The allowed enums is dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries a significant burden: it discloses Weight(IP): 1 (rate limiting), the specific side effect that order count increases even when newOrderResult is NOT_ATTEMPTED, and the evaluation order of filters. It is missing explicit atomicity guarantees and failure-mode details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with no wasted sentences. Front-loaded with the core action, followed by behavioral details, return structure, and prerequisites. The inline JSON return structure is slightly verbose but necessary given the lack of output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 23-parameter trading operation with no annotations, the description adequately covers prerequisites, rate limits, return value structure, and key behavioral quirks (order counting), compensating for missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 23 parameters. The description does not add significant semantic meaning beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Cancels an existing order and places a new order on the same symbol,' using specific verbs and clearly differentiating it from sibling tools like post_api_v3_order (place only) and delete_api_v3_order (cancel only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit PREREQUISITE to call exchange info first, and describes behavioral constraints (order count evaluation timing). Could be improved by explicitly stating when to use this atomic operation versus separate cancel+place calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
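The cancelResult/newOrderResult pair in the documented return shape drives the caller's follow-up logic, including the NOT_ATTEMPTED quirk noted above. A handling sketch, assuming the documented field names:

```python
def handle_cancel_replace(resp: dict) -> str:
    """Interpret the documented cancelResult/newOrderResult pair."""
    cancel, place = resp.get("cancelResult"), resp.get("newOrderResult")
    if cancel == "SUCCESS" and place == "SUCCESS":
        return "replaced"
    if place == "NOT_ATTEMPTED":
        # Per the description, this still consumed +1 of the order count.
        return "cancel failed; new order never sent"
    return f"partial outcome: cancel={cancel}, newOrder={place}"
```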
post_api_v3_order_list_ocoAInspect
New Order list - OCO (TRADE) — Send in a one-cancels-the-other (OCO) pair, where activation of one order immediately cancels the other. - An OCO has 2 orders called the above order and below order. - One of the orders must be a LIMIT_MAKER order and the other must be a STOP_LOSS or STOP_LOSS_LIMIT order. - Price restrictions: - If the OCO is on the SELL side: LIMIT_MAKER price > Last Traded Price > stopPrice - If the OCO is on the BUY side: LIMIT_MAKER price < Last Traded Price < stopPrice - OCOs add 2 orders to the unfilled order count, EXCHANGE_MAX_ORDERS filter, and the
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | Yes | query parameter: quantity (number) | |
| aboveType | Yes | Supported values : `STOP_LOSS_LIMIT`, `STOP_LOSS`, `LIMIT_MAKER` | |
| belowType | Yes | Supported values : `STOP_LOSS_LIMIT`, `STOP_LOSS`, `LIMIT_MAKER` | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| abovePrice | No | query parameter: abovePrice (number) | |
| belowPrice | No | Can be used if `belowType` is `STOP_LOSS_LIMIT` or `LIMIT_MAKER` to specify the limit price. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| aboveStopPrice | No | Can be used if `aboveType` is `STOP_LOSS` or `STOP_LOSS_LIMIT`. Either `aboveStopPrice` or `aboveTrailingDelta` or both, must be specified. | |
| belowStopPrice | No | Can be used if `belowType` is `STOP_LOSS` or `STOP_LOSS_LIMIT`. Either `belowStopPrice` or `belowTrailingDelta` or both, must be specified. | |
| aboveIcebergQty | No | Note that this can only be used if `aboveTimeInForce` is `GTC`. | |
| aboveStrategyId | No | Arbitrary numeric value identifying the above order within an order strategy. | |
| belowIcebergQty | No | Note that this can only be used if `belowTimeInForce` is `GTC`. | |
| belowStrategyId | No | Arbitrary numeric value identifying the below order within an order strategy. | |
| aboveTimeInForce | No | Required if the `aboveType` is `STOP_LOSS_LIMIT`. | |
| belowTimeInForce | No | Required if the `belowType` is `STOP_LOSS_LIMIT`. | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| aboveStrategyType | No | Arbitrary numeric value identifying the above order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| belowStrategyType | No | Arbitrary numeric value identifying the below order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| listClientOrderId | No | Arbitrary unique ID among open order lists. Automatically generated if not sent. A new order list with the same `listClientOrderId` is accepted only when the previous one is filled or completely expired. `listClientOrderId` is distinct from the `aboveClientOrderId` and the `belowCLientOrderId`. | |
| aboveClientOrderId | No | Arbitrary unique ID among open orders for the above order. Automatically generated if not sent | |
| aboveTrailingDelta | No | query parameter: aboveTrailingDelta (number) | |
| belowClientOrderId | No | Arbitrary unique ID among open orders for the below order. Automatically generated if not sent | |
| belowTrailingDelta | No | query parameter: belowTrailingDelta (number) | |
| selfTradePreventionMode | No | The allowed enums is dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses OCO cancellation mechanics, price restriction rules for BUY vs SELL sides, and side effects ('adds 2 orders to the unfilled order count'). It omits standard API behavioral notes like rate limits, authentication requirements, and error scenarios, but covers the complex trading logic well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear delineation: title/mechanism, order composition rules, price restrictions, impact notes, and prerequisites. Uses bullet-style dashes effectively. Minor deduction for the truncated final sentence ('and the '), but otherwise efficiently information-dense without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 27-parameter trading tool with no output schema, the description provides substantial context: OCO mechanics, prerequisites, order type constraints, and price restrictions. Covers the business logic necessary for correct invocation. Could be improved by describing expected response structure or common error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage, establishing a baseline of 3. The description adds significant value by explaining parameter relationships: the price hierarchy constraints (LIMIT_MAKER price > Last Traded Price > stopPrice for SELL) and the valid type combinations for aboveType/belowType. This contextualizes how parameters interact beyond the individual schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as creating an OCO (one-cancels-the-other) order pair with specific mechanics: 'activation of one order immediately cancels the other.' It distinguishes from siblings by detailing the unique OCO structure requiring one LIMIT_MAKER and one STOP_LOSS/STOP_LOSS_LIMIT order, clearly differentiating it from OTO or single order tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit prerequisite: 'First call exchange info or symbol listing to discover valid trading pairs, then query market data.' Provides clear constraints on order type combinations (LIMIT_MAKER + STOP_LOSS) that determine when this tool is appropriate. However, lacks explicit comparison to alternatives like post_api_v3_order_list_oto or single orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
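The price hierarchy quoted in the description (for SELL: LIMIT_MAKER price > Last Traded Price > stopPrice, reversed for BUY) is easy to pre-validate. A sketch of just that check:

```python
def oco_prices_valid(side: str, limit_maker_price: float,
                     stop_price: float, last_price: float) -> bool:
    """Check the documented OCO price restrictions for a given side."""
    if side == "SELL":
        return limit_maker_price > last_price > stop_price
    if side == "BUY":
        return limit_maker_price < last_price < stop_price
    raise ValueError("side must be BUY or SELL")
```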
post_api_v3_order_list_otoAInspect
New Order List - OTO (TRADE) — Places an OTO. - An OTO (One-Triggers-the-Other) is an order list comprised of 2 orders. - The first order is called the working order and must be LIMIT or LIMIT_MAKER. Initially, only the working order goes on the order book. - The second order is called the pending order. It can be any order type except for MARKET orders using parameter quoteOrderQty. The pending order is only placed on the order book when the working order gets fully filled. - If either the working order or the pending order is cancelled individually, the other order in the order l
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| pendingSide | Yes | BUY,SELL | |
| pendingType | Yes | Supported values: Order Types Note that MARKET orders using quoteOrderQty are not supported. | |
| workingSide | Yes | BUY,SELL | |
| workingType | Yes | Supported values: LIMIT,LIMIT_MAKER | |
| pendingPrice | No | query parameter: pendingPrice (number) | |
| workingPrice | Yes | query parameter: workingPrice (number) | |
| pendingQuantity | Yes | Sets the quantity for the pending order. | |
| workingQuantity | Yes | Sets the quantity for the working order. | |
| newOrderRespType | No | Set the response JSON. | |
| pendingStopPrice | No | query parameter: pendingStopPrice (number) | |
| listClientOrderId | No | Arbitrary unique ID among open order lists. Automatically generated if not sent. A new order list with the same `listClientOrderId` is accepted only when the previous one is filled or completely expired. `listClientOrderId` is distinct from the `workingClientOrderId` and the `pendingClientOrderId`. | |
| pendingIcebergQty | No | This can only be used if pendingTimeInForce is GTC. | |
| pendingStrategyId | No | Arbitrary numeric value identifying the pending order within an order strategy. | |
| workingIcebergQty | Yes | This can only be used if workingTimeInForce is GTC. | |
| workingStrategyId | No | Arbitrary numeric value identifying the working order within an order strategy. | |
| pendingTimeInForce | No | GTC, IOC, FOK | |
| workingTimeInForce | No | GTC, IOC, FOK | |
| pendingStrategyType | No | Arbitrary numeric value identifying the pending order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| workingStrategyType | No | Arbitrary numeric value identifying the working order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| pendingClientOrderId | No | Arbitrary unique ID among open orders for the pending order. Automatically generated if not sent. | |
| pendingTrailingDelta | No | query parameter: pendingTrailingDelta (number) | |
| workingClientOrderId | No | Arbitrary unique ID among open orders for the working order. Automatically generated if not sent. | |
| selfTradePreventionMode | No | The allowed enums is dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully explains the execution sequence (working order hits book first, pending only placed upon full fill) and order type constraints. However, the description is cut off mid-sentence when explaining cancellation behavior ('If either... is cancelled individually, the other order in the order l'), leaving a critical behavioral gap regarding what happens when one leg is cancelled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The structure is logical with a clear header, bullet-point mechanics, and a separated PREREQUISITE section. However, the incomplete sentence regarding cancellation behavior represents a structural failure, and the heavy use of dashes makes it slightly less readable than it could be. The content is appropriately front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (26 parameters) and lack of annotations or output schema, the description adequately covers the input mechanics and business logic of the OTO relationship. However, the cutoff sentence creates a significant gap in the cancellation logic explanation, and there is no mention of success responses, error states, or rate limiting considerations that would be expected for a trading operation of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with individual parameters like workingType and pendingType already documenting their supported values. The description adds conceptual grouping by explaining the 'working' vs 'pending' relationship, but does not add syntax details, format examples, or validation rules beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Places an OTO' and provides a detailed breakdown of what an OTO (One-Triggers-the-Other) entails, including the two-order structure (working and pending) and their specific constraints. It effectively distinguishes this from simple single orders and other order list types through its specific mechanics explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a clear PREREQUISITE section calling out the need to call exchange info first to discover valid trading pairs. However, it fails to explicitly differentiate when to use OTO versus sibling tools like OCO (post_api_v3_order_list_oco) or OTOCO (post_api_v3_order_list_otoco), which is crucial given the similar naming and overlapping use cases in trading workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
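For orientation, a minimal OTO parameter set consistent with the table above: the working leg restricted to LIMIT/LIMIT_MAKER, the pending leg held until the working leg fully fills. Values are illustrative and the signed trio (signature, timestamp) is omitted:

```python
oto_params = {
    "symbol": "BNBUSDT",
    "workingType": "LIMIT",          # only LIMIT / LIMIT_MAKER allowed
    "workingSide": "BUY",
    "workingPrice": 300.0,
    "workingQuantity": 1.0,
    "workingTimeInForce": "GTC",
    "pendingType": "LIMIT",          # any type except MARKET with quoteOrderQty
    "pendingSide": "SELL",
    "pendingPrice": 330.0,
    "pendingQuantity": 1.0,
    "pendingTimeInForce": "GTC",
}
```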
post_api_v3_order_list_otocoAInspect
New Order List - OTOCO (TRADE) — Place an OTOCO. - An OTOCO (One-Triggers-One-Cancels-the-Other) is an order list comprised of 3 orders. - The first order is called the working order and must be LIMIT or LIMIT_MAKER. Initially, only the working order goes on the order book. - The behavior of the working order is the same as the OTO. - OTOCO has 2 pending orders (pending above and pending below), forming an OCO pair. The pending orders are only placed on the order book when the working order gets fully filled. - The rules of the pending above and pending below follow the same rule
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| pendingSide | Yes | BUY,SELL | |
| workingSide | Yes | BUY,SELL | |
| workingType | Yes | Supported values: LIMIT,LIMIT_MAKER | |
| workingPrice | Yes | query parameter: workingPrice (number) | |
| pendingQuantity | Yes | Sets the quantity for the pending order. | |
| workingQuantity | Yes | Sets the quantity for the working order. | |
| newOrderRespType | No | Set the response JSON. | |
| pendingAboveType | Yes | Supported values: LIMIT_MAKER, STOP_LOSS, and STOP_LOSS_LIMIT | |
| pendingBelowType | No | Supported values: LIMIT_MAKER, STOP_LOSS, and STOP_LOSS_LIMIT | |
| listClientOrderId | No | Arbitrary unique ID among open order lists. Automatically generated if not sent. A new order list with the same `listClientOrderId` is accepted only when the previous one is filled or completely expired. `listClientOrderId` is distinct from the `workingClientOrderId` and the `pendingClientOrderId`. | |
| pendingAbovePrice | No | query parameter: pendingAbovePrice (number) | |
| pendingBelowPrice | No | query parameter: pendingBelowPrice (number) | |
| workingIcebergQty | Yes | This can only be used if workingTimeInForce is GTC. | |
| workingStrategyId | No | Arbitrary numeric value identifying the working order within an order strategy. | |
| workingTimeInForce | No | GTC, IOC, FOK | |
| workingStrategyType | No | Arbitrary numeric value identifying the working order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| workingClientOrderId | No | Arbitrary unique ID among open orders for the working order. Automatically generated if not sent. | |
| pendingAboveStopPrice | No | query parameter: pendingAboveStopPrice (number) | |
| pendingBelowStopPrice | No | query parameter: pendingBelowStopPrice (number) | |
| pendingAboveIcebergQty | No | This can only be used if pendingAboveTimeInForce is GTC. | |
| pendingAboveStrategyId | No | Arbitrary numeric value identifying the pending above order within an order strategy. | |
| pendingBelowIcebergQty | No | This can only be used if pendingBelowTimeInForce is GTC. | |
| pendingBelowStrategyId | No | Arbitrary numeric value identifying the pending below order within an order strategy. | |
| pendingAboveTimeInForce | No | query parameter: pendingAboveTimeInForce ("GTC" | "IOC" | "FOK") | |
| pendingBelowTimeInForce | No | query parameter: pendingBelowTimeInForce ("GTC" | "IOC" | "FOK") | |
| selfTradePreventionMode | No | The allowed enums is dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. | |
| pendingAboveStrategyType | No | Arbitrary numeric value identifying the pending above order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| pendingBelowStrategyType | No | Arbitrary numeric value identifying the pending below order strategy. Values smaller than 1000000 are reserved and cannot be used. | |
| pendingAboveClientOrderId | No | Arbitrary unique ID among open orders for the pending above order. Automatically generated if not sent. | |
| pendingAboveTrailingDelta | No | query parameter: pendingAboveTrailingDelta (number) | |
| pendingBelowClientOrderId | No | Arbitrary unique ID among open orders for the pending below order. Automatically generated if not sent. | |
| pendingBelowTrailingDelta | No | query parameter: pendingBelowTrailingDelta (number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adequately explains execution flow (working order hits book first, pending orders form OCO pair). However, it lacks trading-specific risk disclosures, error handling details, and the final sentence is cut off ('follow the same rule'), leaving behavioral rules for pending orders incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured dashes to separate concepts and front-loads the key functionality. However, the final sentence is an incomplete fragment ('follow the same rule'), which significantly undermines the structure. The prerequisite section is well-placed but the abrupt ending creates a gap.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (36 parameters, trading tool), the description provides necessary conceptual context for the order mechanics but suffers from the incomplete final sentence. No output schema exists, but the description does not address what the API returns (order IDs, statuses, etc.).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing a baseline of 3. The description adds conceptual grouping (working vs pending above/below) that helps organize the 36 parameters logically, but does not add specific validation rules, formats, or inter-parameter constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it places an OTOCO (One-Triggers-One-Cancels-the-Other) order, defines the acronym, and explains the 3-order structure (working order + pending above/below). It explicitly references OTO and OCO mechanics, effectively distinguishing this tool from siblings like post_api_v3_order_list_oto and post_api_v3_order_list_oco.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit PREREQUISITE section requiring exchange info calls first, and explains the triggering condition (pending orders placed only when working order is fully filled). Could be improved by explicitly naming sibling tools (OTO, OCO) as alternatives when different order logic is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
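An OTOCO extends that shape with a pending above/below pair that becomes a live OCO once the working leg fills. Again illustrative values only, signed trio omitted:

```python
otoco_params = {
    "symbol": "BNBUSDT",
    "workingType": "LIMIT_MAKER",
    "workingSide": "BUY",
    "workingPrice": 300.0,
    "workingQuantity": 1.0,
    "pendingSide": "SELL",
    "pendingQuantity": 1.0,
    "pendingAboveType": "LIMIT_MAKER",      # take-profit leg of the OCO pair
    "pendingAbovePrice": 330.0,
    "pendingBelowType": "STOP_LOSS_LIMIT",  # stop leg of the OCO pair
    "pendingBelowStopPrice": 285.0,
    "pendingBelowPrice": 284.0,
    "pendingBelowTimeInForce": "GTC",       # required for STOP_LOSS_LIMIT
}
```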
post_api_v3_order_testAInspect
Test New Order (TRADE) — Test new order creation and signature/recvWindow long. Creates and validates a new order but does not send it into the matching engine. Weight(IP): - Without computeCommissionRates: 1 - With computeCommissionRates: 20
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| type | Yes | Order type | |
| price | No | Order price | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | No | Order quantity | |
| signature | Yes | Signature | |
| stopPrice | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| timestamp | Yes | UTC timestamp in ms | |
| icebergQty | No | Used with LIMIT, STOP_LOSS_LIMIT, and TAKE_PROFIT_LIMIT to create an iceberg order. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| strategyId | No | query parameter: strategyId (number) | |
| timeInForce | No | Order time in force | |
| strategyType | No | The value cannot be less than 1000000. | |
| quoteOrderQty | No | Quote quantity | |
| trailingDelta | No | Used with STOP_LOSS, STOP_LOSS_LIMIT, TAKE_PROFIT, and TAKE_PROFIT_LIMIT orders. | |
| newClientOrderId | No | Used to uniquely identify this cancel. Automatically generated by default | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| computeCommissionRates | No | Default: false |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates that the order won't enter the matching engine (safety-critical for a trading tool) and documents specific rate-limiting weights ('Weight(IP): 1... 20'). However, it omits what validation errors look like or the response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose declaration, behavioral constraint (no matching engine), rate limit details, and prerequisites. Every sentence delivers unique value; there is no redundancy or tautology despite the tool's complexity (18 parameters).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (18 parameters, 5 required, no output schema, no annotations), the description adequately covers the critical safety behavior (non-execution) and operational prerequisites. It lacks description of return values or validation error formats, which would be necessary for a perfect score given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the schema itself. The description adds minimal semantic value beyond the schema, merely noting that signature and recvWindow are validated ('signature/recvWindow long'), meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it 'Tests new order creation' and 'Creates and validates a new order but does not send it into the matching engine,' providing a specific verb (test/validate) and clear scope. It effectively distinguishes itself from the sibling 'post_api_v3_order' by emphasizing the non-execution aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a 'PREREQUISITE' section explicitly instructing users to 'First call exchange info or symbol listing to discover valid trading pairs,' establishing clear workflow sequencing. It implies this should be used before the real order endpoint, though it could explicitly name the sibling 'post_api_v3_order' as the alternative for live trading.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
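The workflow the evaluation hints at, validating against the test endpoint before hitting the live one, might look like this sketch, where call stands in for the MCP client's tool invocation and computeCommissionRates trades weight 1 for weight 20:

```python
def place_with_preflight(call, order: dict, want_fees: bool = False) -> dict:
    """Validate via the test endpoint, then submit the live order."""
    test = dict(order, computeCommissionRates=want_fees)  # weight 20 vs 1
    call("post_api_v3_order_test", test)  # assumed to raise on invalid params
    return call("post_api_v3_order", order)  # only reached if the test passed
```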
post_api_v3_sor_orderAInspect
New order using SOR (TRADE) — Weight(IP): 6 Returns: { symbol: string, orderId: number, orderListId: number, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| type | Yes | Order type | |
| price | No | query parameter: price (number) | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | Yes | query parameter: quantity (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| icebergQty | No | Used with LIMIT, STOP_LOSS_LIMIT, and TAKE_PROFIT_LIMIT to create an iceberg order. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| strategyId | No | query parameter: strategyId (number) | |
| timeInForce | No | Order time in force | |
| strategyType | No | The value cannot be less than 1000000. | |
| newClientOrderId | No | Used to uniquely identify this order. Automatically generated by default | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| selfTradePreventionMode | No | The allowed enums are dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
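As a rough illustration of how these parameters fit together (names taken from the table above; the LIMIT/timeInForce pairing is an assumption based on standard Binance order semantics):

```python
def build_sor_order(symbol: str, side: str, quantity: float,
                    limit_price=None) -> dict:
    """Assemble post_api_v3_sor_order parameters from the table above."""
    params = {"symbol": symbol, "side": side, "quantity": quantity,
              "type": "MARKET" if limit_price is None else "LIMIT"}
    if limit_price is not None:
        params["price"] = limit_price
        params["timeInForce"] = "GTC"  # LIMIT orders generally require a time-in-force
    return params
```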
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable rate limit info (Weight(IP): 6) and partial return schema structure. However, omits safety context (mutation side effects, authentication requirements beyond the signature parameter) and error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear separation between main description and PREREQUISITE section. Front-loaded with action and weight. The truncated return example with '...' is slightly informal but efficiently conveys output shape.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 15-parameter trading tool with no annotations, the description covers the critical prerequisite workflow and return value hints. However, gaps remain regarding authorization scope, idempotency, and error handling essential for financial operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description mentions no specific parameter semantics, relying entirely on the schema. Does not explain the relationship between type/side/quantity or the signature generation requirement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'New order using SOR (TRADE)' which clearly identifies the action and resource type. Distinguishes from sibling post_api_v3_order by specifying SOR (Smart Order Routing), though the '(TRADE)' notation is cryptic and SOR is not explained.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit PREREQUISITE guidance to call exchange info first for valid trading pairs. However, lacks explicit differentiation from post_api_v3_order or guidance on when SOR is preferred over standard orders.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_api_v3_sor_order_test (A)
Test new order using SOR (TRADE) — Test new order creation and signature/recvWindow using smart order routing (SOR). Creates and validates a new order but does not send it into the matching engine. Weight(IP): - Without computeCommissionRates: 1 - With computeCommissionRates: 20
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| type | Yes | Order type | |
| price | No | query parameter: price (number) | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| quantity | Yes | query parameter: quantity (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| icebergQty | No | Used with LIMIT, STOP_LOSS_LIMIT, and TAKE_PROFIT_LIMIT to create an iceberg order. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| strategyId | No | query parameter: strategyId (number) | |
| timeInForce | No | Order time in force | |
| strategyType | No | The value cannot be less than 1000000. | |
| newClientOrderId | No | Used to uniquely identify this order. Automatically generated by default | |
| newOrderRespType | No | Set the response JSON. MARKET and LIMIT order types default to FULL, all other orders default to ACK. | |
| computeCommissionRates | No | Default: false | |
| selfTradePreventionMode | No | The allowed enums are dependent on what is configured on the symbol. The possible supported values are EXPIRE_TAKER, EXPIRE_MAKER, EXPIRE_BOTH, NONE. |
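The only real invocation decision here is the 1-versus-20 weight split. A small sketch of gating that choice (parameter name from the table; weight figures from the description):

```python
def sor_test_params(base: dict, want_commission_rates: bool = False) -> dict:
    """Plain validation costs Weight(IP) 1; commission computation costs 20."""
    return ({**base, "computeCommissionRates": "true"}
            if want_commission_rates else base)
```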
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses critical safety behavior: 'Creates and validates a new order but does not send it into the matching engine'. It also provides rate limit weights. Missing details on error handling or validation specifics, but covers the essential safety and cost characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately sized with clear structure: purpose statement, behavioral constraint (no matching engine), weight specifications, and prerequisites. Front-loaded with key safety information. Minor markdown formatting clutter in the weight section prevents a 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 16 parameters with 100% schema coverage and no output schema, the description provides sufficient context. It covers prerequisites, rate limits, and the critical safety guarantee (test-only, no execution). Adequate for a test validation tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'signature/recvWindow' and 'computeCommissionRates' in context, but adds minimal semantic detail beyond the comprehensive schema. No compensation needed given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Test new order using SOR (TRADE)' and 'Test new order creation... using smart order routing (SOR)', clearly identifying the verb (test/create+validate), resource (SOR orders), and scope. It distinguishes from siblings by specifying 'smart order routing' and emphasizing it 'does not send it into the matching engine' versus live trading endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit PREREQUISITE guidance to call exchange info first for valid trading pairs. Includes rate limit weights (1 vs 20) to inform usage decisions. Could be improved by explicitly contrasting with the non-test SOR endpoint (post_api_v3_sor_order) or regular order test endpoint, though this is implied by the name and SOR mention.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_api_v3_user_data_stream (A)
Create a ListenKey (USER_STREAM) — Start a new user data stream. The stream will close after 60 minutes unless a keepalive is sent. If the account has an active listenKey, that listenKey will be returned and its validity will be extended for 60 minutes. Weight: 2 Returns: { listenKey: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
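The 60-minute lifecycle described above implies a create-then-keepalive loop. A minimal sketch, assuming the documented POST/PUT `/api/v3/userDataStream` endpoints and a hypothetical API-key header value:

```python
import time

import requests

BASE = "https://api.binance.com"
HEADERS = {"X-MBX-APIKEY": "YOUR_API_KEY"}  # hypothetical placeholder key

def open_user_stream() -> str:
    """POST /api/v3/userDataStream returns { listenKey: string }."""
    resp = requests.post(f"{BASE}/api/v3/userDataStream", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["listenKey"]

def keep_stream_alive(listen_key: str, refresh_s: int = 30 * 60) -> None:
    """Send PUT keepalives well inside the 60-minute expiry window."""
    while True:
        time.sleep(refresh_s)
        requests.put(f"{BASE}/api/v3/userDataStream",
                     params={"listenKey": listen_key},
                     headers=HEADERS, timeout=10)
```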
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It documents the timeout behavior (60min), idempotency, weight (2), and exact return structure ({ listenKey: string }). Only minor gaps remain regarding authentication requirements or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent information density and structure. The description front-loads the action, follows with lifecycle constraints, then technical metadata (Weight/Returns). Every sentence conveys unique operational information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without output schema annotations, the description is complete. It compensates for the missing output schema by explicitly documenting the return object structure and listenKey string format, providing sufficient context for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, triggering the baseline score of 4 per specification rules. No parameter semantics are needed or provided, which is appropriate for this initialization-style endpoint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verbs ('Create', 'Start') and clearly identifies the resource (ListenKey/USER_STREAM). It distinguishes itself from sibling tools like 'delete_api_v3_user_data_stream' and 'put_api_v3_user_data_stream' by describing the creation behavior and 60-minute lifecycle, making the selection unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical timing context ('stream will close after 60 minutes unless a keepalive is sent') and idempotency rules (returns existing key if active). However, it does not explicitly name the sibling tool (put_api_v3_user_data_stream) used for keepalives, nor distinguish when to use this vs the sapi_v1 variant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_account_disable_fast_withdraw_switch (B)
Disable Fast Withdraw Switch (USER_DATA) — - This request will disable fastwithdraw switch under your account. - You need to enable "trade" option for the api key which requests this endpoint. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the rate limit ('Weight(IP): 1') and authentication scope ('USER_DATA'), but does not address idempotency, error states, what constitutes 'fast withdraw', or the response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively compact with three distinct information chunks: the action, the prerequisite, and the weight. There is minor redundancy between the header 'Disable Fast Withdraw Switch' and the bullet point 'This request will disable...', but overall it avoids fluff and presents information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple account toggle with no output schema, the description covers the essential operational context (what it does, required permissions, rate cost). However, it lacks cross-references to the enable counterpart or explanation of the feature's implications, leaving minor gaps in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (signature, timestamp, recvWindow are all documented). The description adds no parameter-specific context, but given the schema completeness and the standard nature of these parameters (timestamp/signature), the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool disables the 'fastwithdraw switch' under the account, using specific verb + resource language. It effectively distinguishes from its sibling 'post_sapi_v1_account_enable_fast_withdraw_switch' by virtue of the explicit 'disable' action, though it assumes familiarity with what 'fast withdraw' means without defining it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear prerequisite ('You need to enable "trade" option for the api key'), which helps the agent understand authentication requirements. However, it lacks explicit guidance on when to use this versus the enable counterpart, or under what conditions disabling fast withdraw is recommended.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_account_enable_fast_withdraw_switch (A)
Enable Fast Withdraw Switch (USER_DATA) — - This request will enable fastwithdraw switch under your account. You need to enable "trade" option for the api key which requests this endpoint. - When Fast Withdraw Switch is on, transferring funds to a Binance account will be done instantly. There is no on-chain transaction, no transaction ID and no withdrawal fee. Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It specifies authentication requirements (USER_DATA, trade option), side effects (instant transfers, no transaction ID, no withdrawal fee), and rate limiting (Weight(IP): 1). It could improve by noting idempotency behavior or error cases when already enabled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The content is appropriately informative but suffers from awkward formatting artifacts ('— -' separator) and inconsistent punctuation. The 'Weight(IP): 1' information is valuable but tacked onto the end without clear separation. Every sentence earns its place, but structural cleanup would improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description should ideally disclose the return value or response structure (e.g., confirmation of enabled state). It comprehensively covers the input requirements and functional behavior, but the omission of output documentation leaves a gap for an agent expecting to handle the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (timestamp, signature, recvWindow all documented), establishing a baseline of 3. The description adds no additional parameter context (e.g., that timestamp is UTC in ms), but given the complete schema coverage, no additional compensation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb 'enable' and the resource 'fastwithdraw switch' under the account. It explicitly targets the 'Fast Withdraw Switch' functionality and distinguishes itself from the sibling tool 'post_sapi_v1_account_disable_fast_withdraw_switch' by specifying the enable action and describing the 'on' state behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear prerequisites (requires 'trade' option on the API key) and explains the operational context (instant transfers to Binance accounts without fees or on-chain transactions). It lacks only an explicit reference to the disable counterpart as an alternative, though this is implied by the enable-specific language.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_algo_futures_new_order_twap (A)
Time-Weighted Average Price(Twap) New Order (TRADE) — Send in a Twap new order. Only support on USDⓈ-M Contracts. You need to enable Futures Trading Permission for the api key which requests this endpoint. Base URL: https://api.binance.com - Total Algo open orders max allowed: 10 orders. - Leverage of symbols and position mode will be the same as your futures account settings. You can set up through the trading page or fapi. - Receiving "success": true does not mean that your order will be executed. Please use the query order endpoints(GET sapi/v1/algo/futures/openOrders or GET sapi/v1/algo/fu
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| duration | Yes | Duration for TWAP orders in seconds, within [300, 86400]; less than 5 min defaults to 5 min; greater than 24 h defaults to 24 h | |
| quantity | Yes | Quantity of base asset; The notional (quantity * mark price(base asset)) must be more than the equivalent of 10,000 USDT and less than the equivalent of 1,000,000 USDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| limitPrice | No | Limit price of the order; If it is not sent, will place order by market price by default | |
| recvWindow | No | The value cannot be greater than 60000 | |
| reduceOnly | No | 'true' or 'false'. Default 'false'; Cannot be sent in Hedge Mode; Cannot be sent when you open a position | |
| clientAlgoId | No | A unique id among Algo orders (length should be 32 characters). If it is not sent, a default value is assigned | |
| positionSide | No | Default BOTH for One-way Mode; LONG or SHORT for Hedge Mode. It must be sent in Hedge Mode. |
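The duration and notional bounds in the table are easy to pre-validate client-side. A sketch of those checks (bounds copied from the parameter descriptions; `mark_price` is assumed to come from a prior market-data call, per the PREREQUISITE note):

```python
def validate_twap(duration_s: int, quantity: float, mark_price: float) -> None:
    """Pre-flight checks mirroring the duration and notional constraints above."""
    if not 300 <= duration_s <= 86_400:
        # The server clamps to 5 min / 24 h, but failing loudly keeps intent explicit.
        raise ValueError("duration must fall within [300, 86400] seconds")
    notional = quantity * mark_price
    if not 10_000 <= notional <= 1_000_000:
        raise ValueError(f"notional {notional:,.0f} USDT is outside [10k, 1M]")
```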
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers substantial behavioral context: max 10 algo orders limit, leverage/position mode inheritance from futures account settings, and the critical async nature of order acceptance vs execution. Missing specific error condition details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but structurally flawed: it truncates mid-sentence ('GET sapi/v1/algo/fu'), mixes formatting styles (em-dash, bullets, ALL CAPS sections), and includes potentially redundant base URL information. The truncation significantly impacts readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the truncation, the description covers critical operational constraints (order limits, permissions, prerequisite calls) for this complex 11-parameter trading tool. However, it lacks output schema and cuts off the query endpoint recommendation, leaving the guidance incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so the baseline is 3. The description adds minimal parameter-specific semantics beyond the schema, though it contextually supports the 'symbol' parameter by restricting to USDⓈ-M Contracts and the 'positionSide' parameter by explaining position mode inheritance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Send in a Twap new order') and resource (Time-Weighted Average Price order for futures). It distinguishes from spot algo orders by specifying 'USDⓈ-M Contracts', though it does not differentiate from the sibling VP (Volume Participation) algo order type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent guidance provided: explicit prerequisite to call exchange info first, required Futures Trading Permission for the API key, warning that 'success': true is asynchronous and does not guarantee execution, and specific follow-up endpoints to query order status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_algo_futures_new_order_vp (A)
Volume Participation(VP) New Order (TRADE) — Send in a VP new order. Only support on USDⓈ-M Contracts. - You need to enable Futures Trading Permission for the api key which requests this endpoint. - Base URL: https://api.binance.com - Total Algo open orders max allowed: 10 orders. - Leverage of symbols and position mode will be the same as your futures account settings. You can set up through the trading page or fapi. - Receiving "success": true does not mean that your order will be executed. Please use the query order endpoints(GET sapi/v1/algo/futures/openOrders or GET sapi/v1/algo/futures
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| urgency | Yes | Represents the relative speed of the current execution; ENUM: LOW, MEDIUM, HIGH | |
| quantity | Yes | Quantity of base asset; The notional (quantity * mark price(base asset)) must be more than the equivalent of 10,000 USDT and less than the equivalent of 1,000,000 USDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| limitPrice | No | Limit price of the order; If it is not sent, will place order by market price by default | |
| recvWindow | No | The value cannot be greater than 60000 | |
| reduceOnly | No | 'true' or 'false'. Default 'false'; Cannot be sent in Hedge Mode; Cannot be sent when you open a position | |
| clientAlgoId | No | A unique id among Algo orders (length should be 32 characters). If it is not sent, a default value is assigned | |
| positionSide | No | Default BOTH for One-way Mode; LONG or SHORT for Hedge Mode. It must be sent in Hedge Mode. |
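Because `'success': true` only signals acceptance, an agent should poll the query endpoint named in the description. A sketch of that loop; `sign` stands in for a hypothetical helper that produces freshly signed query parameters on each call, and the `orders` field is an assumption about the open-orders payload shape:

```python
import time

import requests

def wait_until_algo_done(sign, api_key: str, poll_s: int = 15) -> None:
    """Poll open algo orders until the submitted order is no longer listed."""
    url = "https://api.binance.com/sapi/v1/algo/futures/openOrders"
    while True:
        resp = requests.get(url, params=sign({}),
                            headers={"X-MBX-APIKEY": api_key}, timeout=10)
        resp.raise_for_status()
        if not resp.json().get("orders"):  # nothing still open: done or cancelled
            return
        time.sleep(poll_s)
```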
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant burden. It discloses critical behavioral traits: the 10-order limit for algo orders, that 'success': true does not guarantee execution (asynchronous processing), and that leverage/position mode inherit from account settings. Missing guidance on idempotency or retry behavior prevents a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While information-dense and front-loaded with the order type, the description suffers from poor structure: bullet points use inconsistent formatting (dashes), and the text is cut off mid-URL at the end ('GET sapi/v1/algo/futures'). The PREREQUISITE section is usefully separated but the overall flow is choppy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex trading tool with 11 parameters and no output schema, the description provides substantial operational context including base URL, permission requirements, rate limits, and execution semantics. It appropriately defers status checking to query endpoints rather than describing a return value, which is sufficient given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed documentation for all 11 parameters including valid values for urgency (LOW, MEDIUM, HIGH) and quantity constraints. The description adds context that this is for USDⓈ-M Contracts only, indirectly constraining the symbol parameter, but does not elaborate on parameter interactions or provide examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly identifies the tool as creating a 'Volume Participation(VP) New Order' with the specific domain 'USDⓈ-M Contracts', clearly distinguishing it from spot or coin-margined futures. The verb 'Send' and resource 'VP new order' are specific, and the VP designation differentiates it from sibling TWAP orders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites including enabling 'Futures Trading Permission' and calling exchange info first. It directs users to query endpoints (GET sapi/v1/algo/futures/openOrders) to verify execution. However, it lacks explicit differentiation from the sibling TWAP order type (post_sapi_v1_algo_futures_new_order_twap) regarding when to choose VP versus time-weighted strategies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_algo_spot_new_order_twap (A)
Time-Weighted Average Price (Twap) New Order — Place a new spot TWAP order with Algo service. Weight(UID): 3000 Returns: { clientAlgoId: string, success: boolean, code: number, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" | "BUY") | |
| symbol | Yes | Trading symbol, e.g. BNBUSDT | |
| duration | Yes | query parameter: duration (number) | |
| quantity | Yes | query parameter: quantity (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| limitPrice | No | query parameter: limitPrice (number) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| clientAlgoId | No | query parameter: clientAlgoId (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context including the rate limit weight (3000) and return object structure, but lacks explicit disclosure of mutation side effects, authentication requirements (implied by 'signature' parameter), or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded in the first sentence, followed by technical constraints (weight, returns) and prerequisites. No redundant sentences, though the return object documentation slightly overlaps with what an output schema would provide.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 9-parameter trading operation with no output schema or annotations, the description provides adequate coverage through prerequisites and return shape documentation. However, it lacks explicit safety warnings about financial risk, idempotency guarantees, or detailed error scenarios expected for algorithmic order placement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal parameter-specific semantics beyond the schema, though the TWAP context implicitly clarifies the purpose of the 'duration' parameter for time-weighted execution.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Place a new spot TWAP order') and resource type (Time-Weighted Average Price order via Algo service). It effectively distinguishes from the sibling futures TWAP tool by explicitly specifying 'spot' in both the description and name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite workflow ('First call exchange info or symbol listing...'), giving clear temporal context for when to invoke the tool. However, it does not explicitly contrast with the futures TWAP sibling tool or state when to prefer alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_asset_convert_transfer (A)
Convert Transfer (USER_DATA) — Convert transfer, convert between BUSD and stablecoins. If the clientId has been used before, will not do the convert transfer, the original transfer will be returned. Weight(UID): 5 Returns: { tranId: number, status: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | query parameter: asset (string) | |
| amount | Yes | query parameter: amount (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| targetAsset | Yes | Target asset you want to convert | |
| clientTranId | Yes | The unique flag, the min length is 20 |
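The clientTranId-based idempotency described above is worth exploiting deliberately: generate the flag once per intended transfer and reuse it on retries. A minimal sketch (the 20-character minimum comes from the schema; the naming scheme is illustrative):

```python
import uuid

def make_client_tran_id(intent: str) -> str:
    """Build the unique flag; the schema requires a minimum length of 20."""
    tran_id = f"{intent}-{uuid.uuid4().hex}"  # e.g. 'busd2usdt-9f1c...' (~42 chars)
    assert len(tran_id) >= 20
    return tran_id

# Reusing the same clientTranId on a retry is safe: per the description, the
# endpoint returns the original transfer instead of converting twice.
```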
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully documents idempotency behavior, rate limit cost, and return structure ('Returns: { tranId: number, status: string }'). It implies mutation via 'Convert transfer' but could explicitly state that this creates/executes a conversion transaction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is densely packed and front-loaded with the core purpose. The single sentence covers purpose, idempotency logic, rate limits, and return values with minimal waste. The structure is utilitarian but effective for parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter mutation tool lacking annotations or formal output schema, the description provides sufficient context: operation scope, idempotency rules, rate limit cost, and return payload structure. Missing explicit error handling or permission requirements, but adequately complete for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds critical functional semantics for the clientTranId parameter by explaining its role in idempotency ('If the clientId has been used before...'), which is not evident from the schema's description ('The unique flag, the min length is 20').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Convert transfer') and scope ('between BUSD and stablecoins'), distinguishing it from generic transfers or spot trading found in siblings. However, it does not clarify when to use this specific convert endpoint versus sibling convert tools like post_sapi_v1_convert_accept_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific idempotency guidance ('If the clientId has been used before...') and rate limit weight ('Weight(UID): 5'), which informs usage constraints. Lacks explicit guidance on when to choose this tool over alternative convert methods (e.g., quote-based conversion vs. direct transfer).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_asset_dust (B)
Dust Transfer (USER_DATA) — Convert dust assets to BNB. Weight(UID): 10 Returns: { totalServiceCharge: string, totalTransfered: string, transferResult: { amount: string, fromAsset: string, operateTime: number, serviceChargeAmount: string, tranId: number, transferedAmount: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | Yes | The asset being converted. For example, asset=BTC&asset=USDT | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| accountType | No | SPOT or MARGIN, default SPOT |
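Since the return shape is only documented inline, a small helper that flattens it shows what an agent actually receives (field names copied from the Returns clause above, including its 'transfered' spelling):

```python
def summarize_dust_transfer(payload: dict) -> str:
    """Flatten the documented return shape into a readable summary."""
    lines = [
        f"{r['fromAsset']}: {r['amount']} -> {r['transferedAmount']} BNB "
        f"(fee {r['serviceChargeAmount']})"
        for r in payload["transferResult"]
    ]
    lines.append(f"total: {payload['totalTransfered']} BNB, "
                 f"fees: {payload['totalServiceCharge']}")
    return "\n".join(lines)
```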
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting (Weight(UID): 10), authentication requirements (USER_DATA), and the complete return schema structure including fees and transfer details. However, it fails to mention that the conversion is irreversible, that fees are deducted from the conversion, or what constitutes eligible dust amounts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the action ('Dust Transfer') and purpose, but the inline JSON return schema is verbose and cluttered. While the return structure is valuable information, its presentation as a dense JSON blob reduces readability. The metadata (Weight, USER_DATA) is efficiently packed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and lack of annotations, the description adequately covers the return value structure and rate limits. However, it is incomplete regarding sibling differentiation (no mention of post_sapi_v1_asset_dust_btc) and lacks domain context explaining dust thresholds or irreversibility, which are important for an agent to use this financial tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The main description text does not add additional parameter semantics beyond what the schema already provides (e.g., no clarification on the asset array format, timestamp requirements, or accountType behavior beyond the schema descriptions).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Convert dust assets to BNB' with a specific verb, resource, and destination asset. The mention of BNB helps distinguish it from the sibling tool post_sapi_v1_asset_dust_btc. However, it lacks explanation of what 'dust' assets are (typically small balances below trading minimums), which would help the agent recognize when to use this tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus the alternative post_sapi_v1_asset_dust_btc, nor any mention of prerequisites like minimum dust thresholds or account requirements. The description states what the tool does but not when to choose it over other asset conversion or transfer options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_asset_dust_btc (A)
Get Assets That Can Be Converted Into BNB (USER_DATA) — Weight(IP): 1 Returns: { details: { asset: string, assetFullName: string, amountFree: string, toBTC: string, toBNB: string, toBNBOffExchange: string, ... }[], totalTransferBtc: string, totalTransferBNB: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| accountType | No | SPOT or MARGIN, default SPOT |
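Paired with the execution sibling above, this query endpoint supports a preview-then-convert workflow. A sketch of selecting assets from the documented payload (field names from the Returns clause; the threshold is illustrative):

```python
def pick_dust_assets(preview: dict, min_bnb: float = 0.0) -> list:
    """From the post_sapi_v1_asset_dust_btc payload, choose assets worth converting."""
    return [d["asset"] for d in preview["details"] if float(d["toBNB"]) > min_bnb]

# Workflow: call post_sapi_v1_asset_dust_btc first (query-only), then pass the
# selected assets to post_sapi_v1_asset_dust to execute the conversion.
```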
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight (Weight(IP): 1) and provides the complete return structure inline (critical since no output schema exists). However, it fails to explain the unusual POST-for-retrieval pattern or authentication requirements beyond the (USER_DATA) label.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but efficiently front-loaded with purpose and weight information. The JSON return structure, while lengthy, is necessary given the absence of an output schema and provides essential value. No obvious waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description compensates effectively by documenting the return structure and rate limiting. For a 4-parameter query tool, this provides sufficient context for correct invocation despite missing behavioral edge case documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 4 parameters documented). The description adds no additional parameter semantics beyond the schema, so it meets the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves assets convertible to BNB with specific fields returned. It distinguishes from sibling `post_sapi_v1_asset_dust` (likely the conversion execution endpoint) by specifying this only gets/queries assets. However, the use of 'Get' verb conflicts slightly with the POST method implied by the tool name prefix.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling `post_sapi_v1_asset_dust` endpoint. Given both relate to dust conversion, explicit clarification that this is a query-only endpoint (not the conversion execution) is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_asset_get_funding_asset (B)
Funding Wallet (USER_DATA) — - Currently supports querying the following business assets:Binance Pay, Binance Card, Binance Gift Card, Stock Token Weight(IP): 1
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | query parameter: asset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| needBtcValuation | No | query parameter: needBtcValuation ("true" | "false") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant burden. It successfully discloses USER_DATA (authentication required), Weight(IP): 1 (rate limiting), and read-only nature via 'querying'. It also clarifies scope limitations (only specific business assets). Missing only error behavior and return structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded with 'Funding Wallet' first, but formatting is inconsistent with '— -' mixed punctuation and 'Weight(IP): 1' appearing to be tacked onto the asset list rather than as separate metadata. Every sentence earns its place, but structure is poor.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description should ideally explain return values (balances, asset details), but it doesn't. It compensates partially by specifying which business assets are queryable, but remains incomplete regarding the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description provides context that parameters relate to 'Funding Wallet' but adds no specific semantics about timestamp/signature generation or needBtcValuation behavior beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool queries 'Funding Wallet' assets with specific verb 'querying' and distinguishes scope by listing supported business assets (Binance Pay, Card, Gift Card, Stock Token). However, the messy formatting with mixed dashes and lack of explicit return value description prevents a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus sibling tools like 'get_sapi_v1_asset_wallet_balance' or 'get_sapi_v1_capital_config_getall'. While it lists supported asset types, it doesn't indicate prerequisites or alternatives for unsupported assets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_asset_transfer (B)
User Universal Transfer (USER_DATA) — You need to enable Permits Universal Transfer option for the api key which requests this endpoint. - fromSymbol must be sent when type are ISOLATEDMARGIN_MARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN - toSymbol must be sent when type are MARGIN_ISOLATEDMARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN ENUM of transfer types: - MAIN_UMFUTURE Spot account transfer to USDⓈ-M Futures account - MAIN_CMFUTURE Spot account transfer to COIN-M Futures account - MAIN_MARGIN Spot account transfer to Margin(cross)account - UMFUTURE_MAIN USDⓈ-M Futures account transfer to Spot
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Universal transfer type | |
| asset | Yes | query parameter: asset (string) | |
| amount | Yes | query parameter: amount (number) | |
| toSymbol | No | Must be sent when type are MARGIN_ISOLATEDMARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| fromSymbol | No | Must be sent when type are ISOLATEDMARGIN_MARGIN and ISOLATEDMARGIN_ISOLATEDMARGIN | |
| recvWindow | No | The value cannot be greater than 60000 |
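The conditional fromSymbol/toSymbol rules are the main foot-gun here, so they are worth enforcing before the call. A sketch based on the conditions quoted in the table (only the two documented type families are handled; the full transfer-type enum is truncated above):

```python
NEEDS_FROM = {"ISOLATEDMARGIN_MARGIN", "ISOLATEDMARGIN_ISOLATEDMARGIN"}
NEEDS_TO = {"MARGIN_ISOLATEDMARGIN", "ISOLATEDMARGIN_ISOLATEDMARGIN"}

def build_universal_transfer(type_: str, asset: str, amount: float,
                             from_symbol=None, to_symbol=None) -> dict:
    """Enforce the conditional symbol requirements before sending."""
    if type_ in NEEDS_FROM and not from_symbol:
        raise ValueError(f"fromSymbol is required for type {type_}")
    if type_ in NEEDS_TO and not to_symbol:
        raise ValueError(f"toSymbol is required for type {type_}")
    params = {"type": type_, "asset": asset, "amount": amount}
    if from_symbol:
        params["fromSymbol"] = from_symbol
    if to_symbol:
        params["toSymbol"] = to_symbol
    return params
```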
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses authentication requirement (USER_DATA) and specific API key permission needed. Explains conditional parameter logic. Missing: idempotency guarantees, error scenarios, whether transfers are reversible, or confirmation timing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Poorly structured text block with apparent formatting artifacts (dashes treated as bullets but rendered inline). The enum list appears truncated mid-documentation. Information density is high but presentation is messy, requiring parsing effort. The front-loaded title prefix is helpful, but the abrupt ending reduces clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given complexity (8 parameters, conditional logic, financial operation) and lack of output schema, description should provide complete enum list and response expectations. The enum list is clearly incomplete (ends suddenly), and there is no description of success/failure responses or transfer limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical value-add: schema has 100% coverage but lacks enum definitions for the 'type' parameter. Description provides explicit enum mappings (MAIN_UMFUTURE, MAIN_CMFUTURE, etc.) with human-readable explanations of what each transfer direction means. This is essential information not captured in the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a 'User Universal Transfer' operation between Binance account types (Spot, Futures, Margin). It distinguishes from sibling get_sapi_v1_asset_transfer by focusing on the transfer action. However, the incomplete enum list (truncated at UMFUTURE_MAIN) and poor formatting prevent a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides essential prerequisite: 'You need to enable Permits Universal Transfer option for the api key.' Documents conditional parameter requirements (fromSymbol/toSymbol dependencies). Lacks explicit guidance on when to use this versus sub-account transfers or other transfer endpoints in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_blvt_redeem (B)
Redeem BLVT (USER_DATA) — Weight(IP): 1 Returns: { id: number, status: string, tokenName: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | query parameter: amount (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| tokenName | Yes | BTCDOWN, BTCUP | |
| recvWindow | No | The value cannot be greater than 60000 |
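A sketch of the redeem call and the follow-up status check the evaluation below asks for; the endpoint path is inferred from the tool name, and `sign` is the same hypothetical signing helper sketched earlier:

```python
import requests

def redeem_blvt(sign, api_key: str, token_name: str, amount: float) -> dict:
    """Submit a redemption; returns the documented { id, status, tokenName, ... }."""
    resp = requests.post(
        "https://api.binance.com/sapi/v1/blvt/redeem",
        params=sign({"tokenName": token_name, "amount": amount}),
        headers={"X-MBX-APIKEY": api_key}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # confirm later via get_sapi_v1_blvt_redeem_record
```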
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and provides rate limiting (Weight(IP): 1), authentication type (USER_DATA), and return structure (id, status, tokenName). However, it omits critical behavioral traits: that this is a destructive mutation consuming tokens, whether it's reversible, and the financial side effects (receiving underlying assets).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely compact single-sentence format front-loaded with the action. Information density is high (action, auth type, weight, returns) with minimal filler. However, the dense parenthetical style slightly hinders readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with 5 parameters including cryptographic signature requirements, the description provides minimal business context. It mentions return fields but not their semantics (e.g., what 'status' values mean), omits error scenarios, and doesn't explain the redemption mechanism despite the complexity of leveraged token operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, documenting all 5 parameters including examples for tokenName (BTCDOWN, BTCUP) and recvWindow constraints. The description adds no parameter-specific guidance beyond the schema, meeting the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Redeem' and resource 'BLVT', distinguishing it from sibling 'post_sapi_v1_blvt_subscribe'. However, it fails to explain what BLVT stands for (Binance Leveraged Token) or what redeeming entails (exchanging tokens for underlying asset), leaving some domain ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus subscribing, when to use record-checking siblings like 'get_sapi_v1_blvt_redeem_record', or prerequisites such as needing sufficient BLVT balance. The USER_DATA label hints at authentication but doesn't specify signature requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_blvt_subscribe (C)
Subscribe BLVT (USER_DATA) — Weight(IP): 1 Returns: { id: number, status: string, tokenName: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| cost | Yes | Spot balance | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| tokenName | Yes | BTCDOWN, BTCUP | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description includes API-specific behavioral cues: 'USER_DATA' implies authentication requirement, 'Weight(IP): 1' indicates rate limiting cost, and return structure is outlined. However, lacks disclosure of side effects (spot balance deduction), error conditions, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse single-sentence format with high information density. Technical metadata (weight, return types) is front-loaded. While efficient, the compression sacrifices clarity for users unfamiliar with Binance API conventions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a financial transaction tool with 5 parameters and no annotations. Missing critical context: relationship between cost parameter and spot balance, distinction from redemption, error scenarios, and business logic explanation. The return value mention partially compensates for lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (tokenName, cost, timestamp, signature, recvWindow all documented). The description adds no supplemental parameter guidance (e.g., explaining that 'cost' represents the spot balance amount to spend), meeting the baseline for fully documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the core action 'Subscribe BLVT' but provides no explanation of what BLVT (Binance Leveraged Tokens) are or what 'subscribe' means in this context (purchasing tokens). Fails to distinguish from sibling tool 'post_sapi_v1_blvt_redeem'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains no guidance on when to use this tool versus alternatives, prerequisites (beyond implied USER_DATA), or conditions for successful invocation. No mention of spot balance requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_bnb_burn (A)
Toggle BNB Burn On Spot Trade And Margin Interest (USER_DATA) — - "spotBNBBurn" and "interestBNBBurn" should be sent at least one. Weight(IP): 1 Returns: { spotBNBBurn: boolean, interestBNBBurn: boolean }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| spotBNBBurn | No | Determines whether to use BNB to pay for trading fees on SPOT | |
| interestBNBBurn | No | Determines whether to use BNB to pay for margin loan's interest |
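The description's "should be sent at least one" rule is the one constraint the schema cannot express, so it is worth enforcing client-side before signing. A sketch under that assumption, reusing the hypothetical `sign_params` helper and credentials from the earlier example; the endpoint path is inferred from the tool name:

```python
def bnb_burn_params(spot_bnb_burn: bool | None = None,
                    interest_bnb_burn: bool | None = None) -> dict:
    """Build params for POST /sapi/v1/bnb-burn, rejecting calls that
    set neither toggle (the API requires at least one)."""
    if spot_bnb_burn is None and interest_bnb_burn is None:
        raise ValueError("Send at least one of spotBNBBurn or interestBNBBurn")
    params = {}
    if spot_bnb_burn is not None:
        params["spotBNBBurn"] = str(spot_bnb_burn).lower()        # "true"/"false"
    if interest_bnb_burn is not None:
        params["interestBNBBurn"] = str(interest_bnb_burn).lower()
    return sign_params(params, API_SECRET)  # helper from the first sketch
```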
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the rate limit weight ('Weight(IP): 1') and return structure ('Returns: { ... }'), which is valuable. However, it omits details about side effects, idempotency, or error conditions beyond the parameter constraint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but suffers from formatting artifacts ('— -') and run-on structure that merges the purpose, constraints, weight, and return value into a single block. Every sentence earns its place, but the presentation is messy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by documenting the return object structure and rate limit weight. For a 5-parameter toggle operation with full schema coverage, the description provides sufficient context for correct use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage (baseline 3), the description adds crucial semantic information not captured in the schema: the validation rule that at least one of 'spotBNBBurn' or 'interestBNBBurn' must be provided. This constraint prevents invalid invocations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Toggle'), resource ('BNB Burn'), and specific contexts ('Spot Trade And Margin Interest'). It distinguishes itself from the sibling GET tool 'get_sapi_v1_bnb_burn' by using 'Toggle' (implying mutation) versus the read operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a specific parameter constraint ('spotBNBBurn' and 'interestBNBBurn' should be sent at least one), which aids correct invocation. However, it lacks broader guidance on when to use this POST endpoint versus the GET alternative or other account configuration tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_capital_contract_convertible_coins (B)
[DISCOVERY] Switch on/off BUSD and stable coins conversion (USER_DATA) (USER_DATA) — User can use it to turn on or turn off the BUSD auto-conversion from/to a specific stable coin. Weight(UID): 600
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Must be USDC, USDP or TUSD | |
| enable | Yes | true: turn on the auto-conversion. false: turn off the auto-conversion |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the rate limit cost ('Weight(UID): 600') and classification ('USER_DATA'), and clearly indicates state mutation ('Switch', 'turn on/off'). However, it omits details on permission requirements, idempotency, or side effects of the toggle.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the action but suffers from structural noise: the '[DISCOVERY]' prefix and duplicated '(USER_DATA)' tags are wasteful, and the weight information is appended awkwardly. While the core content is efficient, the meta-tags reduce clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple parameters, no nested objects, no output schema) and high schema coverage, the description provides sufficient context. It explains the conversion mechanism, rate limits, and user data classification adequately for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The schema fully documents the 'coin' (valid values) and 'enable' (boolean semantics) parameters. The description adds conceptual context that the coin represents the target stable coin in BUSD auto-conversion, but does not elaborate beyond the schema's existing documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the core action ('Switch on/off') and resource ('BUSD and stable coins conversion'), distinguishing it from the likely GET sibling 'get_sapi_v1_capital_contract_convertible_coins'. However, redundant metadata tags ([DISCOVERY], duplicated USER_DATA) slightly obscure the opening.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by the explanation ('User can use it to turn on or turn off...'), providing clear context for when to toggle auto-conversion. However, it lacks explicit contrast with related tools or prerequisites (e.g., not mentioning that the GET version should be used to check current status first).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_capital_deposit_credit_apply (B)
One click arrival deposit apply (USER_DATA) — Apply deposit credit for expired address (One click arrival) Weight(IP): 1 Returns: { code: string, message: string, data: boolean, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| txId | No | Deposit txId, used when depositId is not specified | |
| depositId | No | Deposit record Id; takes priority over txId when both are specified | |
| signature | Yes | Signature | |
| subUserId | No | query parameter: subUserId (number) | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| subAccountId | No | query parameter: subAccountId (number) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses USER_DATA (authentication required), rate limit Weight(IP): 1, and return structure {code, message, data: boolean}, but lacks details on idempotency, error conditions, or what the boolean 'data' field signifies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description packs multiple information types (purpose, auth type, weight, returns) into one dense sentence. It suffers from slight redundancy ('One click arrival' appears twice) and would benefit from clearer separation of concerns, though it remains readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 7 parameters, sub-account support, and specialized domain (expired address credit), the description adequately covers the basic operation but lacks business context about the deposit expiration flow, error scenarios, or sub-account delegation patterns that would help an agent use this correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal parameter context beyond the schema, though it implies that txId/depositId refer to expired deposit transactions. It does not clarify the distinction between subUserId and subAccountId or provide parameter usage examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool applies deposit credit for expired addresses using the 'one click arrival' feature, distinguishing it from general deposit or withdrawal operations. However, it assumes familiarity with what 'expired address' means in this context without defining the business scenario.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the use case ('for expired address') but provides no explicit guidance on when to use this versus querying deposit history or generating new addresses. No prerequisites or alternatives are mentioned despite having multiple sibling deposit tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_capital_withdraw_apply (B)
Withdraw (USER_DATA) — Submit a withdraw request. - If network not send, return with default network of the coin. - You can get network and isDefault in networkList of a coin in the response of Get /sapi/v1/capital/config/getall (HMAC SHA256). Weight(IP): 1 Returns: { id: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Coin name | |
| name | No | query parameter: name (string) | |
| amount | Yes | query parameter: amount (number) | |
| address | Yes | query parameter: address (string) | |
| network | No | query parameter: network (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| addressTag | No | Secondary address identifier for coins like XRP,XMR etc. | |
| recvWindow | No | The value cannot be greater than 60000 | |
| walletType | No | The wallet type for withdraw: 0 - Spot wallet, 1 - Funding wallet. Default is Spot wallet | |
| withdrawOrderId | No | Client id for withdraw | |
| transactionFeeFlag | No | When making an internal transfer: `true` returns the fee to the destination account; `false` returns the fee to the departure account. |
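The description's pointer to Get /sapi/v1/capital/config/getall suggests a natural two-step flow: discover the coin's default network, then submit the withdraw. A sketch under the same assumptions as the signing example (`sign_params`, `API_KEY`, `API_SECRET`); the response field names follow the description's own `networkList`/`isDefault` reference, and everything else is illustrative rather than a definitive implementation:

```python
import requests

BASE_URL = "https://api.binance.com"  # assumed production host

def default_network(coin: str) -> str:
    """Find the default withdrawal network for a coin via config/getall."""
    resp = requests.get(
        f"{BASE_URL}/sapi/v1/capital/config/getall",
        params=sign_params({}, API_SECRET),
        headers={"X-MBX-APIKEY": API_KEY},
    )
    resp.raise_for_status()
    coin_info = next(c for c in resp.json() if c["coin"] == coin)
    return next(n["network"] for n in coin_info["networkList"] if n["isDefault"])

def withdraw(coin: str, address: str, amount: float) -> str:
    """Submit the withdraw request and return the withdrawal id."""
    resp = requests.post(
        f"{BASE_URL}/sapi/v1/capital/withdraw/apply",
        params=sign_params(
            {"coin": coin, "address": address, "amount": amount,
             "network": default_network(coin)},
            API_SECRET,
        ),
        headers={"X-MBX-APIKEY": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()["id"]
```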
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting (Weight(IP): 1), return format ({ id: string }), and default network behavior. However, it fails to mention critical safety aspects like irreversibility, fees, or that this triggers an actual blockchain transaction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise but structurally fragmented, mixing bullet points, technical metadata (Weight), and return values in a single block. While all information is relevant, better structuring would improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial operation with 12 parameters and no output schema, the description provides basic operational context (return shape, rate limits, prerequisites) but lacks safety warnings and detailed behavioral explanation expected for high-stakes mutation operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds marginal value by explaining the default network behavior when the `network` parameter is omitted, but does not significantly elaborate on other parameters like `transactionFeeFlag` or `walletType` beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Submit a withdraw request' with the specific resource (USER_DATA), and distinguishes itself from sibling GET tools like get_sapi_v1_capital_withdraw_history. It could be improved by specifying this is a cryptocurrency withdrawal from the exchange account.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a prerequisite workflow by referencing `Get /sapi/v1/capital/config/getall` to retrieve network information before calling this endpoint. However, it lacks explicit when-to-use/when-not-to-use guidance or warnings about irreversible financial transactions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_convert_accept_quote (B)
Accept Quote (TRADE) — Accept the offered quote by quote ID. Weight(UID): 500 Returns: { orderId: string, createTime: number, orderStatus: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| quoteId | Yes | query parameter: quoteId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully adds the rate limit weight ('Weight(UID): 500') and the return object structure since no output schema exists. However, it fails to disclose critical behavioral traits like whether the action is reversible, that it creates an order, or specific error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the core action. Information density is high without being verbose, though the return value documentation is somewhat crammed at the end. Every phrase earns its place, including the weight hint and return structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates by documenting the return fields. However, for a 4-parameter trading operation with no annotations, it should explain the two-step workflow (get quote → accept quote) and side effects (order creation) to be considered complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'by quote ID' which aligns with the quoteId parameter but adds no additional semantic context (e.g., format constraints, where to obtain the ID) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Accept Quote'), the domain ('TRADE'), and the key resource ('quote ID'). It effectively distinguishes this tool from sibling tools like 'post_sapi_v1_convert_get_quote' by specifying this is the acceptance step, though it could explicitly mention this is for cryptocurrency conversion quotes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention the prerequisite workflow (e.g., that a quote must first be obtained via post_sapi_v1_convert_get_quote before acceptance). There are no exclusions or prerequisites stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_convert_get_quote (B)
Send quote request (USER_DATA) — Request a quote for the requested token pairs Weight(UID): 200 Returns: { quoteId: string, ratio: string, inverseRatio: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| toAsset | Yes | query parameter: toAsset (string) | |
| toAmount | No | When specified, it is the amount you will be credited after the conversion | |
| fromAsset | Yes | query parameter: fromAsset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| validTime | No | 10s, 30s, 1m, 2m, default 10s | |
| fromAmount | No | When specified, it is the amount you will be debited after the conversion | |
| recvWindow | No | The value cannot be greater than 60000 | |
| walletType | No | SPOT or FUNDING. Default is SPOT |
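Taken together with post_sapi_v1_convert_accept_quote above, this is a two-step workflow: request a quote here, then pass the returned quoteId to the accept endpoint before validTime lapses. A sketch of that sequence under the same assumptions as the earlier examples; the endpoint paths are inferred from the tool names:

```python
import requests

def convert_asset(from_asset: str, to_asset: str, from_amount: float) -> dict:
    """Request a convert quote, then immediately accept it."""
    quote = requests.post(
        f"{BASE_URL}/sapi/v1/convert/getQuote",
        params=sign_params(
            {"fromAsset": from_asset, "toAsset": to_asset,
             "fromAmount": from_amount, "validTime": "10s"},
            API_SECRET,
        ),
        headers={"X-MBX-APIKEY": API_KEY},
    ).json()
    # Accept within the quote's validity window; the response carries the
    # orderId/orderStatus fields noted in the accept-quote description.
    return requests.post(
        f"{BASE_URL}/sapi/v1/convert/acceptQuote",
        params=sign_params({"quoteId": quote["quoteId"]}, API_SECRET),
        headers={"X-MBX-APIKEY": API_KEY},
    ).json()
```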
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden. It adds valuable context: 'USER_DATA' tag indicates authentication requirements, 'Weight(UID): 200' discloses rate limiting costs, and it documents the return structure (quoteId, ratio). Missing: whether quotes are binding, if this locks funds, or expiration behavior beyond the validTime parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently packs purpose, authentication type, rate limit, and return value. Slightly dense with parenthetical technical details (Weight(UID)), but no wasted words. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters and complex financial logic, the description covers the return structure (compensating for missing output schema) and rate limits. However, it lacks workflow context (quote lifecycle) and does not explain that validTime determines quote expiration, leaving operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., explaining the relationship between fromAmount/toAmount mutual exclusivity or signature generation), but the schema is self-explanatory enough.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Send quote request') and resource ('requested token pairs'), with sufficient specificity to distinguish this as a quote retrieval operation. However, it does not explicitly differentiate from sibling 'post_sapi_v1_convert_accept_quote' (execution) or clarify the two-step workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus executing a conversion directly, or that this is a prerequisite to 'post_sapi_v1_convert_accept_quote'. Lacks prerequisites (e.g., requiring funds in specified walletType) or conditions for valid usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_convert_limit_cancel_order (B)
Cancel limit order (USER_DATA) — Enable users to cancel a limit order Weight(UID): 200 Returns: { orderId: number, status: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| orderId | Yes | query parameter: orderId (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It documents the return format ('Returns: { orderId: number, status: string }') and rate limit weight, but omits mutation safety details (idempotency, what happens if the order is already cancelled or filled) that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and information-dense, front-loading the operation type and including rate limit and return value data efficiently. Minor redundancy exists ('Cancel limit order... Enable users to cancel a limit order'), but every sentence contributes functional information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple flat parameter structure (4 primitives, no nesting) and absence of a structured output schema, the description adequately compensates by documenting the return object shape. It sufficiently covers the essentials for a single-purpose cancel operation, though the 'convert' service context remains implicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 4 parameters (orderId, timestamp, signature, recvWindow). The description doesn't add semantic details beyond the schema (e.g., parameter relationships or formatting), warranting the baseline score for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Cancel) and resource (limit order), and the 'USER_DATA' tag indicates authenticated access. However, it doesn't explicitly clarify what distinguishes this 'convert' limit order cancellation from sibling cancel tools like delete_api_v3_order, leaving the 'convert' context implicit from the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides rate limit information ('Weight(UID): 200') which is useful for usage planning, but offers no guidance on when to select this endpoint versus the numerous alternative cancel operations (delete_api_v3_order, delete_sapi_v1_margin_order, etc.) available on this server.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_convert_limit_place_order (A)
Place limit order (USER_DATA) — Enable users to place a limit order - baseAsset or quoteAsset can be determined via exchangeInfo endpoint. - Limit price is defined from baseAsset to quoteAsset. - Either baseAmount or quoteAmount is used. Weight(UID): 500 Returns: { orderId: number, status: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| side | Yes | query parameter: side ("SELL" or "BUY") | |
| baseAsset | Yes | query parameter: baseAsset (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| baseAmount | No | Base asset amount. (One of baseAmount or quoteAmount is required) | |
| limitPrice | Yes | Symbol limit price (from baseAsset to quoteAsset) | |
| quoteAsset | Yes | query parameter: quoteAsset (string) | |
| recvWindow | No | The value cannot be greater than 60000 | |
| walletType | No | SPOT, FUNDING, or SPOT_FUNDING; determines which type of assets to use. Default is SPOT. | |
| expiredType | No | 1_D, 3_D, 7_D, 30_D (D means day) | |
| quoteAmount | No | Quote asset amount. (One of baseAmount or quoteAmount is required) |
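The description's "Either baseAmount or quoteAmount is used" reads as mutual exclusivity: supply exactly one. A guard sketch under that assumption (relax the check if the API in fact tolerates both), with the hypothetical `sign_params` helper from the first example:

```python
def convert_limit_params(side: str, base_asset: str, quote_asset: str,
                         limit_price: float,
                         base_amount: float | None = None,
                         quote_amount: float | None = None) -> dict:
    """Build params for the convert limit order; exactly one of
    baseAmount/quoteAmount may be set (assumed interpretation)."""
    if (base_amount is None) == (quote_amount is None):
        raise ValueError("Provide exactly one of baseAmount or quoteAmount")
    params = {"side": side, "baseAsset": base_asset,
              "quoteAsset": quote_asset, "limitPrice": limit_price}
    if base_amount is not None:
        params["baseAmount"] = base_amount
    else:
        params["quoteAmount"] = quote_amount
    return sign_params(params, API_SECRET)
```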
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the rate limit ('Weight(UID): 500') and return structure ('Returns: { orderId: number, status: string }'). It marks the endpoint as USER_DATA, implying authentication requirements. It does not disclose error conditions or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but structurally inefficient. The opening sentence restates the tool name, and the bullet points use inconsistent formatting. While the technical details (rate limit, return value) are appropriately included, the redundancy and formatting issues prevent a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 11 parameters and complex trading domain, the description provides reasonable completeness: it covers purpose, parameter interdependencies, rate limits, and return values. With 100% schema coverage and no output schema field, the description successfully compensates by documenting the return structure inline.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context: clarifying that assets can be determined via exchangeInfo, explaining the price direction (baseAsset to quoteAsset), and emphasizing the XOR relationship between baseAmount and quoteAmount parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it places a 'limit order' but fails to mention this is specifically for the 'convert' feature (evident only in the function name), which distinguishes it from the generic spot trading endpoint (post_api_v3_order) and instant convert endpoints. The opening 'Place limit order (USER_DATA) — Enable users to place a limit order' is tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides parameter constraints (either baseAmount or quoteAmount, price direction from base to quote) and references the exchangeInfo endpoint for asset discovery. However, it lacks explicit guidance on when to use this convert limit order versus standard spot orders or instant convert quotes (post_sapi_v1_convert_get_quote).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_dci_product_auto_compound_edit_status (A)
Change Auto-Compound status(USER_DATA) — Change Auto-Compound status - 15:31 ~ 16:00 UTC+8 This function is disabled Weight(IP): 1 Rate Limit: Maximum 1 time/s per account Returns: { positionId: string, autoCompoundPlan: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | Yes | Get positionId from /sapi/v1/dci/product/positions | |
| recvWindow | No | The value cannot be greater than 60000 | |
| autoCompoundPlan | Yes | NONE: switch off the plan, STANDARD: standard plan, ADVANCED: advanced plan; |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description adequately carries the burden by disclosing rate limits (Weight 1, 1 time/s), return structure ({ positionId, autoCompoundPlan }), and operational time constraints, though the 'disabled' status could be clearer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from repetition ('Change Auto-Compound status' appears twice) and packs multiple concepts (purpose, time window, disabled status, rate limits, returns) into a dense run-on sentence without clear separation, though all information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, it covers operational constraints and return structure adequately, but leaves ambiguity around the 'disabled' status and doesn't explain error conditions or prerequisites beyond the time window.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions the return fields but doesn't add semantic meaning to input parameters beyond what the schema already provides (e.g., no guidance on obtaining positionId beyond the schema's reference to the positions endpoint).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (Change Auto-Compound status) and target resource (DCI product), though it redundantly repeats the phrase twice and includes noisy metadata (USER_DATA) without distinguishing from sibling tools like post_sapi_v1_dci_product_subscribe.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides a specific time window constraint (15:31 ~ 16:00 UTC+8) and mentions the function is disabled, but doesn't clarify whether this means it's only available during that window or disabled during it, nor does it suggest alternatives for operations outside this window.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_dci_product_subscribe (A)
Subscribe Dual Investment products(USER_DATA) — Subscribe Dual Investment products - Products are not available. means that the APR changes to lower value, or the orders are not available. - Failed is a system or network errors. Weight(IP): 1 Returns: { positionId: number, investCoin: string, exercisedCoin: string, ... }.
PREREQUISITE: First call exchange info or symbol listing to discover valid trading pairs, then query market data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | get id from /sapi/v1/dci/product/list | |
| orderId | Yes | get orderId from /sapi/v1/dci/product/list | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| depositAmount | Yes | query parameter: depositAmount (number) | |
| autoCompoundPlan | Yes | NONE: switch off the plan, STANDARD: standard plan, ADVANCED: advanced plan; |
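Since both id and orderId come from /sapi/v1/dci/product/list, subscription is inherently a two-call flow. A sketch under the same assumptions as the earlier examples; the listing's query fields and response shape (a `list` array) are assumptions, not documented in this card:

```python
import requests

def subscribe_first_dci_product(deposit_amount: float) -> dict:
    """List Dual Investment products, then subscribe to the first one."""
    listing = requests.get(
        f"{BASE_URL}/sapi/v1/dci/product/list",
        params=sign_params(
            {"optionType": "CALL", "exercisedCoin": "USDT", "investCoin": "BNB"},
            API_SECRET,
        ),
        headers={"X-MBX-APIKEY": API_KEY},
    ).json()
    product = listing["list"][0]  # assumed response shape
    return requests.post(
        f"{BASE_URL}/sapi/v1/dci/product/subscribe",
        params=sign_params(
            {"id": product["id"], "orderId": product["orderId"],
             "depositAmount": deposit_amount, "autoCompoundPlan": "NONE"},
            API_SECRET,
        ),
        headers={"X-MBX-APIKEY": API_KEY},
    ).json()
```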
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and documents error states ('Products are not available' vs 'Failed'), rate limiting ('Weight(IP): 1'), and return value structure ('positionId: number, investCoin: string...'). This is comprehensive behavioral disclosure, though it lacks explicit statements about idempotency or destructive side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from redundancy in the opening sentence. However, the subsequent structure with error documentation and the explicit prerequisite is efficient. Every sentence after the first earns its place, but the repetition prevents a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter mutation tool with no output schema or annotations, the description is quite complete. It covers the purpose, prerequisites for valid usage, error interpretations, and return value hints. It successfully compensates for missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by referencing the prerequisite endpoint ('/sapi/v1/dci/product/list') which contextualizes the 'id' and 'orderId' parameters, but doesn't add semantic details beyond the comprehensive schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Subscribe') and specific resource ('Dual Investment products'), distinguishing it from sibling subscription tools like 'post_sapi_v1_blvt_subscribe'. However, the opening sentence is redundant ('Subscribe Dual Investment products... — Subscribe Dual Investment products'), which slightly reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes an explicit 'PREREQUISITE' section stating that exchange info or symbol listing must be called first to discover valid trading pairs. This provides clear guidance on when and how to use the tool, though it doesn't explicitly mention alternative tools for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_eth_staking_eth_redeem (A)
Redeem ETH (TRADE) — Redeem WBETH or BETH and get ETH - You need to open Enable Spot & Margin Trading permission for the API Key which requests this endpoint. Weight(IP): 150 Returns: { success: boolean, arrivalTime: number, ethAmount: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| asset | No | WBETH or BETH, defaults to BETH | |
| amount | Yes | Amount in BETH, limit 8 decimals | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates authentication requirements (Spot & Margin Trading permission), rate limiting (Weight(IP): 150), and return value structure (success, arrivalTime, ethAmount). It omits explicit mention of the destructive nature (consumption of WBETH/BETH tokens) and error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs multiple critical details (purpose, permissions, rate limits, return shape) into a single sentence structure. While the dash-separated formatting is slightly dense, every clause provides distinct value with no redundant filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation operation with no output schema, the description provides comprehensive operational context including permission requirements and response structure. It could be improved by explicitly stating the irreversible destruction of source tokens and typical error scenarios, but covers the essential execution context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters (asset options, amount precision, timestamp format, etc.). The description confirms the 'Amount in BETH' unit but adds no additional semantic context beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Redeem WBETH or BETH and get ETH'), identifying the specific input assets and output. It effectively distinguishes from sibling staking tools (e.g., post_sapi_v2_eth_staking_eth_stake) by specifying the redemption direction (wrapped assets to ETH).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a critical prerequisite ('You need to open Enable Spot & Margin Trading permission'), which constrains when the tool can be used. However, it lacks explicit guidance on when to choose this over staking alternatives or redemption history queries, and omits error condition guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_eth_staking_wbeth_wrap (B)
Wrap BETH(TRADE) — - You need to open Enable Spot & Margin Trading permission for the API Key which requests this endpoint. Weight(IP): 150 Returns: { success: boolean, wbethAmount: string, exchangeRate: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount in BETH, limit 4 decimals | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses rate limiting ('Weight(IP): 150') and return structure ('Returns: { ... }'), plus authentication requirements. However, it omits mutation semantics, reversibility, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is densely packed but formatting is awkward ('— -' punctuation) and sentences run together. While it efficiently combines purpose, permissions, rate limits, and returns, the structure hinders readability and could be better segmented.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with 4 parameters, the description covers return values (inline) and rate limits, but lacks domain context explaining what wrapping entails, its implications, or how it fits into the ETH staking workflow. Adequate but missing conceptual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (amount, signature, timestamp, recvWindow all documented). The description adds no parameter-specific semantics, but since schema coverage is complete, it meets the baseline expectation without needing compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the action 'Wrap' and resource 'BETH', but uses unexplained jargon '(TRADE)' and does not distinguish from sibling operations like staking or redeeming (e.g., post_sapi_v1_eth_staking_eth_redeem). The return field 'wbethAmount' hints at the output, but the description doesn't explicitly clarify that BETH is converted to WBETH.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite guidance regarding API key permissions ('Enable Spot & Margin Trading'), but lacks guidance on when to use wrapping versus other ETH staking operations (e.g., when to stake vs. wrap) and does not mention the existence of the unwrap counterpart tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_futures_transfer (A)
New Future Account Transfer (USER_DATA) — Execute transfer between spot account and futures account. Weight(IP): 1 Returns: { tranId: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | 1: transfer from spot account to USDT-Ⓜ futures account. 2: transfer from USDT-Ⓜ futures account to spot account. 3: transfer from spot account to COIN-Ⓜ futures account. 4: transfer from COIN-Ⓜ futures account to spot account. | |
| asset | Yes | query parameter: asset (string) | |
| amount | Yes | query parameter: amount (number) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
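The numeric type codes are the one place an agent can silently transfer in the wrong direction, so naming them helps. A sketch mapping the table's enum to constants, again assuming the `sign_params` helper from the first example:

```python
from enum import IntEnum

class FuturesTransferType(IntEnum):
    """Direction codes from the `type` row of the table above."""
    SPOT_TO_USDT_M = 1   # spot -> USDT-M futures
    USDT_M_TO_SPOT = 2   # USDT-M futures -> spot
    SPOT_TO_COIN_M = 3   # spot -> COIN-M futures
    COIN_M_TO_SPOT = 4   # COIN-M futures -> spot

params = sign_params(
    {"asset": "USDT", "amount": 25.0,
     "type": int(FuturesTransferType.SPOT_TO_USDT_M)},
    API_SECRET,
)
# POST these to /sapi/v1/futures/transfer; the response carries tranId.
```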
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses: (1) authentication level via 'USER_DATA', (2) rate limiting via 'Weight(IP): 1', and (3) return structure '{ tranId: number }'. It does not disclose idempotency, reversibility, or error conditions, but covers the critical behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every element earns its place: API endpoint classification ('New Future Account Transfer'), security context ('USER_DATA'), action ('Execute transfer'), rate limit ('Weight(IP): 1'), and return value ('Returns: { tranId: number }'). No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description compensates by specifying the return format. It addresses the lack of annotations by including USER_DATA classification and rate limit weight. With 100% schema parameter coverage and these textual additions, the definition is sufficiently complete for a financial transfer operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'type' parameter fully enumerated (1-4) and other parameters adequately described. The description adds no parameter-specific semantics, but the baseline score of 3 is appropriate when schema coverage is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Execute transfer between spot account and futures account' with specific verb and resources. It distinguishes from siblings like get_sapi_v1_futures_transfer (query) and post_sapi_v1_sub_account_futures_transfer (sub-account scope) by specifying this is for the user's own account transfer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the phrase 'Execute transfer between spot account and futures account' and the directional enum documentation in the schema, but lacks explicit when-to-use guidance or comparison to alternatives like the sub-account transfer variant.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_giftcard_buy_code (A)
Buy a Binance Code (TRADE) — This API is for buying a fixed-value Binance Code, which means your Binance Code will be redeemable to a token that is different to the token that you are paying in. If the token you’re paying and the redeemable token are the same, please use the Create Binance Code endpoint. You can use supported crypto currency or fiat token as baseToken to buy Binance Code that is redeemable to your chosen faceToken. Once successfully purchased, the amount of baseToken would be deducted from your funding wallet. To get started with, please make sure: - You have a Binance account
| Name | Required | Description | Default |
|---|---|---|---|
| baseToken | Yes | The token you want to pay, example BUSD | |
| faceToken | Yes | The token you want to buy, example BNB. If faceToken = baseToken, it's the same as createCode endpoint. | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| baseTokenAmount | Yes | The base token asset quantity, example 1.002 |
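The buy/create split the description spells out is mechanical: the same token on both sides means the create endpoint, different tokens mean buy. A dispatch sketch (paths inferred from the tool names; `sign_params` and `API_SECRET` as in the first example):

```python
def gift_code_request(base_token: str, face_token: str, amount: float):
    """Pick the right gift-code endpoint for the token pair and build params."""
    if base_token == face_token:
        # Same pay/redeem token: the description defers to the create endpoint.
        return "/sapi/v1/giftcard/createCode", sign_params(
            {"token": base_token, "amount": amount}, API_SECRET)
    return "/sapi/v1/giftcard/buyCode", sign_params(
        {"baseToken": base_token, "faceToken": face_token,
         "baseTokenAmount": amount}, API_SECRET)
```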
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses key financial behavior ('amount of baseToken would be deducted from your funding wallet') and identifies this as a TRADE operation. However, lacks mention of idempotency, retry safety, or rate limits which would be important for a financial mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Contains necessary information but structure could be improved. The prerequisite statement ('To get started with, please make sure: - You have a Binance account') awkwardly mixes prose with bullet formatting. Information is front-loaded with the main purpose, but the alternative endpoint mention is buried mid-text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists and this is a purchase operation, the description should ideally explain what is returned (the code value, a reference ID, etc.) which is missing. Prerequisites and financial impact are covered, but return value documentation is a significant gap for a buy operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds conceptual context explaining the relationship between baseToken and faceToken (payment vs redeemable currencies) and the logic of cross-token purchasing that pure parameter descriptions don't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Buy') and resource ('Binance Code'). Explicitly distinguishes from sibling tool 'post_sapi_v1_giftcard_create_code' by stating when to use the alternative ('If the token you're paying and the redeemable token are the same, please use the Create Binance Code endpoint').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance by naming the alternative endpoint for same-token scenarios. Also includes prerequisite guidance ('You have a Binance account') and explains the core condition for using this tool (different baseToken vs faceToken).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_giftcard_create_code (Grade: A)
Create a Binance Code (USER_DATA) — This API is for creating a Binance Code. To get started with, please make sure: - You have a Binance account - You have passed kyc - You have a sufficient balance in your Binance funding wallet - You need Enable Withdrawals for the API Key which requests this endpoint. Daily creation volume: 2 BTC / 24H Daily creation times: 200 Codes / 24H Weight(IP): 1 Returns: { code: string, message: string, data: { referenceNo: string, code: string, expiredTime: number }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | The coin type contained in the Binance Code | |
| amount | Yes | The amount of the coin | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
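Since the return shape is documented only inline, a defensive parse is worth sketching. The field names are taken verbatim from the description above; the sample values and the '000000' success code (Binance's usual gift-card convention) are illustrative assumptions:

```python
# Decoded JSON of a hypothetical createCode response; all values illustrative.
response = {
    "code": "000000",
    "message": "success",
    "data": {
        "referenceNo": "0033002144060553",
        "code": "AOGANK3NB4GIT3C6",
        "expiredTime": 1727417154000,
    },
}

if response.get("code") == "000000":  # assumed gift-card success convention
    card = response["data"]
    print(f"created code {card['code']} (ref {card['referenceNo']}), "
          f"expires at epoch-ms {card['expiredTime']}")
else:
    raise RuntimeError(f"creation failed: {response.get('message')}")
```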
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively. It discloses authentication requirements (Enable Withdrawals), rate limiting (Weight(IP): 1, daily limits), and return structure (JSON format with referenceNo, code, expiredTime). Missing explicit mention that this operation debits the user's funding wallet.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the essential action but contains redundancy ('Create... — This API is for creating...'). The bullet-point prerequisites and inline JSON return structure are informative but create a dense, somewhat messy text block that could be better structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial operation, the description adequately covers prerequisites, rate limits, and return values (compensating for the lack of structured output schema). It appropriately omits exhaustive error codes but could improve by mentioning the side effect of balance deduction from the funding wallet.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds contextual value by mentioning the 'funding wallet' requirement, which helps agents understand the source of 'token' and 'amount' parameters, but doesn't elaborate on parameter formats beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a Binance Code with specific verb ('Create') and resource ('Binance Code'). However, it doesn't explicitly define what a 'Binance Code' is (a crypto gift card/voucher) or clearly distinguish from the sibling 'buy_code' operation, though the funding wallet prerequisite hints at the distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides strong prerequisites (KYC, funding wallet balance, API withdrawal permissions) and rate limits (2 BTC/24H, 200 codes/24H) that effectively constrain when to use the tool. Lacks explicit comparison to sibling giftcard operations (e.g., 'use buy_code instead to purchase with fiat').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_giftcard_redeem_code (Grade: A)
Redeem a Binance Code (USER_DATA) — This API is for redeeming the Binance Code. Once redeemed, the coins will be deposited in your funding wallet. Please note that if you enter the wrong code 5 times within 24 hours, you will no longer be able to redeem any Binance Code that day. Weight(IP): 1 Returns: { code: string, message: string, data: { token: string, amount: string, referenceNo: string, identityNo: string }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Binance Code | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| externalUid | No | Each external unique ID represents a unique user on the partner platform. The function helps you to identify the redemption behavior of different users, such as redemption frequency and amount. It also helps risk and limit control of a single account, such as daily limit on redemption volume, frequency, and incorrect number of entries. This will also prevent a single user account reach the partner's daily redemption limits. We strongly recommend you to use this feature and transfer us the User ID of your users if you have different users redeeming Binance codes on your platform. To protect user data privacy, you may choose to transfer the user id in any desired format (max. 400 characters). |
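The five-failures-in-24h lockout called out in the description is exactly the kind of constraint an agent should enforce client-side. A hypothetical wrapper sketch; redeem_code stands in for the actual tool invocation and is not part of this server's API:

```python
from datetime import datetime, timedelta, timezone

failed_attempts: list[datetime] = []

def safe_redeem(redeem_code, code: str, external_uid: str | None = None):
    """Refuse to call the endpoint when one more failure would trip the
    documented 5-failures-per-24h account lockout."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
    recent = [t for t in failed_attempts if t > cutoff]
    if len(recent) >= 4:  # stop one short of the 5-strike limit
        raise RuntimeError("one more failed redemption would lock the account for 24h")
    try:
        return redeem_code(code=code, externalUid=external_uid)
    except Exception:
        # Conservative: treats any failure as a wrong-code attempt.
        failed_attempts.append(datetime.now(timezone.utc))
        raise
```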
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers: rate limit ('Weight(IP): 1'), account lockout behavior, destination wallet, and complete return schema structure. Missing explicit idempotency or reversibility statements, but covers critical operational risks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement followed by behavioral constraints and return format. Information-dense with minimal waste, though the inline JSON return structure is slightly cluttered. All sentences earn their place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with no annotations, description adequately covers: authentication hint (USER_DATA), side effects (funding wallet deposit), rate limits, error handling (5-strike lockout), and response format. No significant gaps given the schema completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (all 5 parameters documented). Description mentions 'wrong code' referencing the code parameter but adds no semantic detail beyond the schema. Baseline 3 appropriate since schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Redeem' and resource 'Binance Code', clearly distinguishing from sibling tools like post_sapi_v1_giftcard_create_code and post_sapi_v1_giftcard_buy_code. The '(USER_DATA)' tag immediately signals authentication requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit operational constraints ('if you enter the wrong code 5 times within 24 hours, you will no longer be able to redeem') and destination context ('deposited in your funding wallet'). Lacks explicit comparison to alternatives like verify or create, but the lockout warning provides concrete 'when-not' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_auto_invest_one_off (Grade: C)
One Time Transaction(TRADE) — One time transaction Weight(IP): 1 Returns: { transactionId: number, waitSecond: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| planId | No | query parameter: planId (number) | |
| details | No | query parameter: details ({ targetAsset: string, percentage: number }[]) | |
| indexId | No | query parameter: indexId (number) | |
| requestId | No | query parameter: requestId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| sourceType | Yes | query parameter: sourceType (string) | |
| sourceAsset | Yes | query parameter: sourceAsset (string) | |
| subscriptionAmount | Yes | query parameter: subscriptionAmount (number) | |
| flexibleAllowedToUse | No | query parameter: flexibleAllowedToUse (boolean) |
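The details array is the one structurally complex parameter here. Treating "allocation percentages must sum to 100" as the validation rule is an assumption (the schema is silent on it), but it is a cheap pre-flight check:

```python
details = [
    {"targetAsset": "BTC", "percentage": 60},
    {"targetAsset": "ETH", "percentage": 40},
]

total = sum(d["percentage"] for d in details)
assert total == 100, f"allocation percentages should sum to 100, got {total}"

# Assembled request parameters (signature/timestamp omitted; values illustrative).
params = {
    "sourceType": "MAIN_SITE",
    "sourceAsset": "USDT",
    "subscriptionAmount": 100,
    "details": details,
}
print(params)
```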
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description partially discloses behavioral traits by specifying the rate limit 'Weight(IP): 1' and return structure '{ transactionId: number, waitSecond: number }'. However, it critically fails to describe mutation side effects (fund deduction timing, asset allocation logic, irreversibility) or error conditions for this POST operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief, the description suffers from redundant phrasing ('One Time Transaction' immediately followed by 'One time transaction') and poor structure that jumbles purpose, technical weight metadata, and return types into a single run-on sentence without logical separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 11 parameters and no formal output schema (which the description partially compensates for by documenting returns), the description lacks critical business logic context. It does not explain how the one-time investment executes, the relationship between planId/indexId and the transaction, or fund flow mechanics for the sourceAsset.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 11 parameters including complex fields like 'details' (targetAsset/percentage array) and required authentication fields. The description adds no parameter-specific guidance, meeting the baseline threshold where schema documentation is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'One Time Transaction(TRADE)' indicating a single transactional operation, which distinguishes it from sibling plan-management tools like post_sapi_v1_lending_auto_invest_plan_add. However, the '(TRADE)' parenthetical is ambiguous, and the description fails to clearly articulate the auto-investment domain or explain what 'transaction' means in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus recurring plan investments (post_sapi_v1_lending_auto_invest_plan_add) or redemptions (post_sapi_v1_lending_auto_invest_redeem). It omits prerequisites such as existing plan requirements, minimum balance checks, or asset availability constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_auto_invest_plan_add (Grade: C)
Investment plan creation (USER_DATA) — Post an investment plan creation Weight(IP): 1 Returns: { planId: number, nextExecutionDateTime: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| IndexId | No | query parameter: IndexId (number) | |
| details | Yes | query parameter: details ({ targetAsset: string, percentage: number }[]) | |
| planType | Yes | query parameter: planType ("SINGLE" | "PORTFOLIO" | "INDEX") | |
| requestId | No | query parameter: requestId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| sourceType | Yes | query parameter: sourceType ("MAIN_SITE" | "TR") | |
| sourceAsset | Yes | query parameter: sourceAsset (string) | |
| subscriptionCycle | Yes | query parameter: subscriptionCycle ("H1" | "H4" | "H8" | "H12" | "WEEKLY" | "DAILY" | "MONTHLY" | "BI_WEEKLY") | |
| subscriptionAmount | Yes | query parameter: subscriptionAmount (number) | |
| flexibleAllowedToUse | No | query parameter: flexibleAllowedToUse (boolean) | |
| subscriptionStartDay | No | query parameter: subscriptionStartDay (number) | |
| subscriptionStartTime | Yes | query parameter: subscriptionStartTime (number) | |
| subscriptionStartWeekday | No | query parameter: subscriptionStartWeekday ("MON" | "TUE" | "WED" | "THU" | "FRI" | "SAT" | "SUN") |
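The schema never states which subscriptionStart* field pairs with which subscriptionCycle. One plausible reading, offered as an assumption rather than documented behavior: MONTHLY needs subscriptionStartDay, WEEKLY and BI_WEEKLY need subscriptionStartWeekday, and every cycle needs subscriptionStartTime:

```python
def start_fields(cycle: str, hour: int, day: int = 1, weekday: str = "MON") -> dict:
    """Build the subscriptionStart* fields a given cycle plausibly requires."""
    fields = {"subscriptionCycle": cycle, "subscriptionStartTime": hour}
    if cycle == "MONTHLY":
        fields["subscriptionStartDay"] = day          # day of month
    elif cycle in ("WEEKLY", "BI_WEEKLY"):
        fields["subscriptionStartWeekday"] = weekday  # MON..SUN
    return fields

print(start_fields("MONTHLY", hour=8, day=15))
print(start_fields("WEEKLY", hour=9, weekday="FRI"))
```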
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses rate limiting ('Weight(IP): 1') and return values ('planId', 'nextExecutionDateTime'), which is helpful. However, it omits critical behavioral details such as whether the plan begins executing immediately, idempotency concerns, or error conditions for insufficient funds.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description packs relevant information (purpose, auth type, weight, returns) into a single sentence, but suffers from redundant phrasing ('Investment plan creation' ... 'Post an investment plan creation') and lacks clear sentence boundaries or structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial tool with 15 parameters creating recurring investment plans, the description is insufficient. It lacks explanation of plan types (SINGLE vs PORTFOLIO), subscription cycle mechanics, or the relationship between sourceAsset and details parameters. The return value mention partially compensates for missing output schema, but business logic context is absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all 15 parameters including enums and types. The description adds no parameter-specific context beyond what the schema already provides, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Investment plan creation' which identifies the verb and resource, but the phrase 'Post an investment plan creation' is tautological and awkwardly repetitive. It fails to distinguish this tool from siblings like post_sapi_v1_lending_auto_invest_one_off or post_sapi_v1_lending_auto_invest_plan_edit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as one-off investments or plan editing. The only implicit guidance is '(USER_DATA)' indicating authentication is required, but no explicit prerequisites, conditions, or workflow context is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_auto_invest_plan_edit (Grade: C)
Investment plan adjustment — Query Source Asset to be used for investment Weight(IP): 1 Returns: { planId: number, nextExecutionDateTime: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| planId | Yes | query parameter: planId (number) | |
| details | No | query parameter: details ({ targetAsset: string, percentage: number }[]) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| sourceAsset | Yes | query parameter: sourceAsset (string) | |
| subscriptionCycle | Yes | query parameter: subscriptionCycle ("H1" | "H4" | "H8" | "H12" | "WEEKLY" | "DAILY" | "MONTHLY" | "BI_WEEKLY") | |
| subscriptionAmount | Yes | query parameter: subscriptionAmount (number) | |
| flexibleAllowedToUse | No | query parameter: flexibleAllowedToUse (boolean) | |
| subscriptionStartDay | No | query parameter: subscriptionStartDay (number) | |
| subscriptionStartTime | Yes | query parameter: subscriptionStartTime (number) | |
| subscriptionStartWeekday | No | query parameter: subscriptionStartWeekday ("MON" | "TUE" | "WED" | "THU" | "FRI" | "SAT" | "SUN") |
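Because sourceAsset, subscriptionCycle, subscriptionAmount, and subscriptionStartTime are all required, an edit apparently has to resend the full current configuration even to change a single value. A hypothetical helper; current_plan stands in for values fetched from a plan-query tool:

```python
current_plan = {
    "planId": 12345,  # illustrative
    "sourceAsset": "USDT",
    "subscriptionCycle": "WEEKLY",
    "subscriptionAmount": 50,
    "subscriptionStartTime": 9,
    "subscriptionStartWeekday": "MON",
}

def edit_plan_params(plan: dict, **changes) -> dict:
    """Overlay only the edited fields on the existing configuration so every
    required parameter is present in the request."""
    return {**plan, **changes}

print(edit_plan_params(current_plan, subscriptionAmount=75))
```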
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions return values (planId, nextExecutionDateTime) but fails to disclose mutation side effects, idempotency, required permissions, or what happens to existing subscriptions when edited.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is compact but poorly structured: fragmented metadata ('Weight(IP): 1') and return-type syntax crowd the text, and front-loading is weak because the ambiguous 'Query' wording appears early.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a complex financial operation with 12 parameters including cryptographic signature requirements. Lacks explanation of the editing scope (which fields are mutable), error scenarios, or the impact on scheduled executions beyond the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline adequacy. Description adds minimal context beyond schema, only noting 'Source Asset to be used for investment' which maps to the sourceAsset parameter. Does not explain the relationship between subscriptionCycle, subscriptionStartTime, and subscriptionStartWeekday.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Investment plan adjustment' identifying the core action and resource, but introduces confusion with 'Query Source Asset' phrasing that contradicts the POST/edit nature of the endpoint. Contains irrelevant metadata 'Weight(IP): 1' that obscures the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus siblings like 'post_sapi_v1_lending_auto_invest_plan_add' or 'post_sapi_v1_lending_auto_invest_plan_edit_status'. Does not clarify prerequisites such as existing plan requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_auto_invest_plan_edit_status (Grade: C)
Change Plan Status — Change Plan Status Weight(IP): 1 Returns: { planId: number, nextExecutionDateTime: number, status: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| planId | Yes | query parameter: planId (number) | |
| status | Yes | query parameter: status ("ONGOING" | "PAUSED" | "REMOVED") | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
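As the review below notes, the description is silent on legal state transitions. A conservative client-side guard, assuming REMOVED is terminal (an inference, not documented behavior):

```python
VALID_TRANSITIONS = {
    "ONGOING": {"PAUSED", "REMOVED"},
    "PAUSED": {"ONGOING", "REMOVED"},
    "REMOVED": set(),  # assumed terminal; re-activation is likely impossible
}

def check_transition(current: str, target: str) -> None:
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move plan from {current} to {target}")

check_transition("ONGOING", "PAUSED")  # fine
check_transition("PAUSED", "ONGOING")  # fine
```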
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the return structure ({planId, nextExecutionDateTime, status}) and rate limit weight (IP: 1), but fails to clarify mutation side effects, reversibility, or authentication requirements implied by the signature/timestamp parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The repetition of 'Change Plan Status' is wasteful. While brief overall, the redundant phrasing and the awkward insertion of 'Weight(IP): 1' and return values into the same sentence reduce clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter mutation tool with no annotations and no output schema, the description compensates partially by documenting the return structure inline. However, it lacks critical operational context such as error conditions, state machine constraints, or the authentication pattern required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage including enum values for status ('ONGOING' | 'PAUSED' | 'REMOVED'). The description adds no parameter-specific context, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the core purpose ('Change Plan Status') but repeats it redundantly ('Change Plan Status — Change Plan Status'). Does not distinguish from sibling tool 'post_sapi_v1_lending_auto_invest_plan_edit' which likely modifies other plan attributes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this status-specific endpoint versus the general plan edit endpoint, nor any prerequisites like existing plan requirements or valid state transitions between ONGOING/PAUSED/REMOVED.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_auto_invest_redeem (Grade: B)
Index Linked Plan Redemption (TRADE) — To redeem index-Linked plan holdings Weight(IP): 1 Returns: { redemptionId: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| indexId | Yes | PORTFOLIO plan's Id | |
| requestId | No | sourceType + unique, transactionId and requestId cannot be empty at the same time | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 | |
| redemptionPercentage | Yes | user redeem percentage,10/20/100. |
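redemptionPercentage is documented only by example ('10/20/100'), so whether intermediate values are legal is unclear. This sketch validates against the safest reading, multiples of 10 up to 100, which is an assumption:

```python
def build_redeem_params(index_id: int, percentage: int,
                        request_id: str | None = None) -> dict:
    if percentage not in range(10, 101, 10):
        raise ValueError("redemptionPercentage is assumed to be a multiple of "
                         "10 between 10 and 100")
    params = {"indexId": index_id, "redemptionPercentage": percentage}
    if request_id is not None:  # per the schema, used to deduplicate requests
        params["requestId"] = request_id
    return params

print(build_redeem_params(1, 100))  # full redemption of the index-linked plan
```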
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It partially succeeds by labeling the operation as '(TRADE)' and disclosing the return value '{ redemptionId: number }' (compensating for the lack of output schema). However, it fails to disclose whether the operation is reversible or idempotent, or what specific side effects it has on the underlying holdings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the operation type and purpose, which is good. However, it includes 'Weight(IP): 1' which appears to be internal metadata noise rather than helpful documentation, slightly detracting from an otherwise efficient structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial mutation tool with six parameters and no annotations, the description is minimally viable. It compensates for the missing output schema by documenting the return type, but lacks critical context about the redemption mechanics, error conditions, or confirmation requirements typical for portfolio-modifying operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline score applies. The description mentions 'Index Linked Plan' which contextually maps to the indexId parameter, but does not add syntax details, validation rules, or semantic meaning beyond what the schema already provides for the six parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool redeems 'index-Linked plan holdings' with a specific verb and resource. It distinguishes itself from sibling redemption tools (e.g., post_sapi_v1_blvt_redeem, post_sapi_v1_simple_earn_flexible_redeem) by specifying 'Index Linked Plan'. However, the inclusion of 'Weight(IP): 1' appears to be implementation metadata leakage that doesn't aid comprehension.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternative redemption endpoints (such as the various other post_sapi_v1_*_redeem tools). There are no prerequisites, conditions, or 'when-not-to-use' indicators provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_customized_fixed_purchase (Grade: B)
Purchase Fixed/Activity Project (USER_DATA) — Weight(IP): 1 Returns: { purchaseId: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| lot | Yes | query parameter: lot (string) | |
| projectId | Yes | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
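The review below flags that the description never connects a purchase to any verification step. A hypothetical agent flow; purchase_fixed and list_positions stand in for this tool and a position-query tool, and matching on purchaseId in the position list is assumed rather than documented:

```python
def buy_and_verify(purchase_fixed, list_positions, project_id: str, lot: str) -> str:
    result = purchase_fixed(projectId=project_id, lot=lot)
    purchase_id = result["purchaseId"]  # return shape documented in the description
    positions = list_positions()
    # Confirm the purchase is visible before treating the operation as settled.
    if not any(p.get("purchaseId") == purchase_id for p in positions):
        raise RuntimeError(f"purchase {purchase_id} not yet visible in positions")
    return purchase_id
```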
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Lacking annotations, the description carries full burden. It adds rate limit ('Weight(IP): 1') and return structure ('{ purchaseId: string }'), which is valuable since no output schema exists. However, it omits critical behavioral details for a financial mutation: fund locking, project duration implications, or idempotency guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with action. Efficiently packs purpose, auth type, rate limit, and return value, though the em-dash separation makes it slightly cramped rather than ideally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With full schema coverage and no annotations, the description adequately covers the return value gap. However, for a purchasing tool, it lacks operational context (e.g., linking to get_sapi_v1_lending_project_position_list for verification) or business logic explanation of fixed-term commitments.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all 5 parameters including their types and locations (query vs body). The description adds no supplemental semantics for 'lot' (units/amount) or 'projectId', relying entirely on the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Purchase' and resource 'Fixed/Activity Project', distinguishing it from sibling auto-invest or simple-earn tools. The '(USER_DATA)' tag indicates authentication scope. Minor clarity gap: 'Activity Project' uses domain jargon without explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like post_sapi_v1_lending_auto_invest_one_off or post_sapi_v1_simple_earn_locked_subscribe. No prerequisites (e.g., fund availability) or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_lending_position_changed (Grade: B)
Change Fixed/Activity Position to Daily Position (USER_DATA) — - PositionId is mandatory parameter for fixed position. Weight(IP): 1 Returns: { dailyPurchaseId: number, success: boolean, time: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| lot | Yes | query parameter: lot (string) | |
| projectId | Yes | query parameter: projectId (string) | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| positionId | No | query parameter: positionId (string) | |
| recvWindow | No | The value cannot be greater than 60000 |
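The one rule the description adds beyond the schema, that positionId is mandatory for fixed positions, is easy to encode as a guard. In this sketch, position_type is a caller-side label rather than an API parameter, and the example values are illustrative:

```python
def build_change_params(project_id: str, lot: str, position_type: str,
                        position_id: str | None = None) -> dict:
    params = {"projectId": project_id, "lot": lot}
    if position_type == "fixed":
        if position_id is None:
            raise ValueError("positionId is mandatory when changing a fixed position")
        params["positionId"] = position_id
    return params

print(build_change_params("CUSDT90DAYSS001", "1", "fixed", position_id="756356"))
```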
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Includes rate limit weight (Weight(IP): 1) and return structure shape, but misses safety profile (destructive vs. safe), error conditions, or idempotency characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with three distinct sections (action, constraint, metadata), but awkward punctuation ('— -') hinders readability. All sentences earn their place, though formatting could be cleaner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides return type documentation inline since no output schema exists, covering the 6-parameter operation adequately. However, lacks explanation of what 'Daily Position' entails or side effects on existing positions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3), but description adds critical semantic context: positionId is mandatory specifically for fixed positions. This business logic is not evident from schema descriptions alone, which only list parameter types mechanically.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Change) and resource types (Fixed/Activity Position to Daily Position) with USER_DATA tag indicating authentication scope. Slightly awkward phrasing ('Position to Daily Position') prevents a 5, but distinguishes from sibling lending tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific constraint that 'PositionId is mandatory parameter for fixed position,' which guides usage. However, lacks explicit when-not-to-use guidance or alternative tools for other position types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_loan_adjust_ltv (Grade: B)
Crypto Loan Adjust LTV (TRADE) — Weight(UID): 6000 Returns: { loanCoin: string, collateralCoin: string, direction: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount | |
| orderId | Yes | Order ID | |
| direction | Yes | 'ADDITIONAL', 'REDUCED' | |
| signature | Yes | Signature | |
| timestamp | Yes | UTC timestamp in ms | |
| recvWindow | No | The value cannot be greater than 60000 |
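Neither the description nor the schema explains what ADDITIONAL and REDUCED do. Reading them as "add collateral" and "withdraw collateral" (an assumption consistent with the review below), the standard LTV arithmetic, loan value divided by collateral value, makes the effect concrete:

```python
def projected_ltv(loan_value: float, collateral_value: float,
                  amount: float, direction: str) -> float:
    """Estimate the post-adjustment loan-to-value ratio."""
    if direction == "ADDITIONAL":
        collateral_value += amount  # more collateral -> lower LTV
    elif direction == "REDUCED":
        collateral_value -= amount  # less collateral -> higher LTV, riskier
    else:
        raise ValueError("direction must be 'ADDITIONAL' or 'REDUCED'")
    return loan_value / collateral_value

print(f"{projected_ltv(1000.0, 2000.0, 500.0, 'ADDITIONAL'):.2%}")  # 40.00%
```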
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the Weight(UID): 6000 (rate limit cost) and partial return structure, which is valuable since no output schema exists. However, it omits mutation confirmation details, error conditions (e.g., insufficient collateral), or side effects like potential liquidation triggers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence packs the operation type, weight, and return structure without fluff. However, it could benefit from structure (e.g., separating the return value documentation from the purpose statement) for better readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter financial mutation tool with no annotations, the description provides the return shape and rate limit weight, but lacks operational context. It should mention that orderId refers to an existing loan from ongoing orders, and clarify the authentication requirements implied by the signature parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 6 parameters (amount, orderId, direction, etc.). The description adds the return structure (loanCoin, collateralCoin, direction) which compensates somewhat for the missing output schema, meeting the baseline for comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific action (Adjust LTV) and resource (Crypto Loan) clearly. The '(TRADE)' tag indicates it's a trading operation. However, it doesn't distinguish from the similar 'post_sapi_v2_loan_flexible_adjust_ltv' tool for flexible loans, nor explain what LTV adjustment means in practice (adding/removing collateral).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus alternatives (e.g., the flexible loan variant). Doesn't mention prerequisites like obtaining the orderId from ongoing orders first, or explain the business logic of when to use ADDITIONAL vs REDUCED direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_sapi_v1_loan_borrow (Grade: B)
Crypto Loan Borrow (TRADE) — Weight(UID): 6000 Returns: { loanCoin: string, loanAmount: string, collateralCoin: string, ... }.