
MAX Exchange MCP Server

by nicky512500

Server Quality Checklist

58%
Profile completion: A complete profile improves this server's visibility in search results.
  • Disambiguation: 4/5

    Tools are generally well-distinguished by resource type and action (e.g., singular vs. plural, spot vs. M-wallet prefixes). Minor potential confusion exists between get_order and get_open_orders/get_closed_orders, but parameter differences (ID lookup vs. state filtering) clarify intent.

    Naming Consistency: 4/5

    Excellent snake_case consistency with clear verb_noun patterns (get_*, submit_*, cancel_*). All margin wallet tools use the m_ prefix predictably. Minor deduction for get_k being cryptic (should be get_klines or get_candles) and get_m_ad_ratio using unexplained abbreviation.

    Tool Count: 2/5

    With 45 tools, the surface far exceeds the rubric's 25+ threshold for 'too many.' The count is inflated by numerous singular/plural pairs (get_order/get_orders, get_deposit/get_deposits) that could have been consolidated with optional parameters, creating unnecessary bloat for an agent to navigate.

    Completeness: 4/5

    Extremely comprehensive coverage of exchange operations including spot trading, margin (M-wallet) lifecycle, deposits/withdrawals, and market data. Minor gaps exist (e.g., no submit_internal_transfer to match get_internal_transfers, no cancel_withdrawal), but core workflows are fully supported.

  • Average 3.4/5 across 45 of 45 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 45 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.
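The singular/plural consolidation flagged under Tool Count could look like the sketch below: one tool whose optional ID parameter switches between single-record and list retrieval. The function, the in-memory store, and the field names are hypothetical stand-ins, not the server's actual implementation:

```python
from typing import Optional

# Hypothetical in-memory order store standing in for the exchange API.
ORDERS = {
    101: {"id": 101, "market": "btcusdt", "state": "wait"},
    102: {"id": 102, "market": "btcusdt", "state": "done"},
}

def get_orders(market: str, order_id: Optional[int] = None):
    """Query orders. With order_id, return that single order;
    without it, return every order for the market."""
    if order_id is not None:
        # The singular lookup replaces a separate get_order tool.
        return ORDERS[order_id]
    return [o for o in ORDERS.values() if o["market"] == market]
```

Applied to each of the pairs noted above (get_order/get_orders, get_deposit/get_deposits), this pattern alone would bring the tool count well under the rubric's threshold.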

Tool Scores

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While '查詢' (query) implies a read-only operation, the description does not confirm idempotency, safety, error handling (e.g., invalid UUID), or authentication requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise with no wasted words. The main purpose appears first, followed by parameter documentation. However, the use of Python-style docstring markup (:param) is slightly unconventional for MCP tool descriptions.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has only one parameter and an output schema exists (reducing the need to describe return values), the description is minimally adequate. However, with zero schema descriptions and no annotations, it lacks context on error states, UUID format expectations, and differentiation from the batch retrieval sibling.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage. The description compensates by including ':param uuid: 提現 UUID' which identifies the parameter as the withdrawal UUID, but uses docstring syntax rather than natural language. It does not provide format validation rules, examples, or constraints beyond the basic identifier.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool queries a single withdrawal's details (查詢單筆提現詳情), distinguishing it from the plural 'get_withdrawals' via the word '單筆' (single). However, it does not explicitly reference sibling tools to clarify when to use this specific endpoint versus the list endpoint.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance is provided on when to use this tool versus alternatives like 'get_withdrawals' (for listing) or 'submit_withdrawal' (for creating). There are no prerequisites mentioned, such as needing a valid UUID from a previous withdrawal.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
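    A rewritten definition for this single-withdrawal tool could fold the :param docstring into natural language and move parameter docs into the schema. The following is a sketch only; the wording and schema fields are illustrative, not the server's actual definition:

    ```python
    # Hypothetical improved tool metadata for the single-withdrawal lookup.
    # Description wording and schema text are illustrative suggestions.
    GET_WITHDRAWAL_TOOL = {
        "name": "get_withdrawal",
        "description": (
            "Query the details of a single withdrawal by its UUID. "
            "Read-only; requires an authenticated API key. "
            "Use get_withdrawals instead to list withdrawal history."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "uuid": {
                    "type": "string",
                    "description": "Withdrawal UUID returned by submit_withdrawal "
                                   "or listed by get_withdrawals.",
                }
            },
            "required": ["uuid"],
        },
    }
    ```

    A description in this shape addresses three of the deductions above at once: it states the read-only behavior, names the sibling tool, and gives the UUID a provenance.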

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical financial operation details: whether this creates an immediate debt position, requires pre-existing collateral, interest accrual behavior, or idempotency concerns for the submit action.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely compact with zero filler text. The :param docstring format is functional but slightly unconventional for MCP descriptions; front-loading the param examples directly into the main description text rather than using structured docstrings might improve parseability.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a high-risk financial mutation tool. Despite having an output schema (reducing the need to describe returns), the description omits critical context: relationship to 'get_m_limits' or 'get_m_ad_ratio' siblings for pre-validation, error conditions (insufficient collateral), and the irreversible nature of loan initiation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Compensates effectively for 0% schema coverage by providing concrete examples for both parameters ('usdt', 'btc' for currency; '1000' for amount) and clarifying that amount expects a string type. Does not document valid currency enumerations, precision constraints, or minimum/maximum thresholds.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool applies for a loan from 'M Wallet' (向 M 錢包申請借貸), providing a specific verb and resource. However, it does not explicitly distinguish this creation action from sibling 'get_m_loans' (retrieval) or 'submit_m_repayment' (repayment), though the verb 'apply' implies creation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives like 'get_m_loans' for checking existing positions. No mention of prerequisites such as collateral requirements, account verification, or available credit limits before invoking.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
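    The pre-validation relationship the Completeness note asks for could be sketched as below. Both functions are stand-ins for the real get_m_limits and loan-submission tools, with an invented stub response, purely to show the suggested check-before-commit flow:

    ```python
    # Sketch of checking available credit before initiating a loan.
    # get_m_limits here returns a hard-coded stub; the real tool would
    # call the exchange API.
    def get_m_limits(currency: str) -> dict:
        return {"currency": currency, "available": 500.0}  # stub response

    def submit_m_loan(currency: str, amount: str) -> dict:
        limits = get_m_limits(currency)
        if float(amount) > limits["available"]:
            # Fail before creating an irreversible debt position.
            raise ValueError(
                f"amount exceeds available credit {limits['available']}"
            )
        return {"currency": currency, "amount": amount, "state": "confirmed"}
    ```

    Documenting this prerequisite in the tool description ("check get_m_limits first") would let an agent follow the same flow without guessing.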

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'cancel' implies mutation, the description lacks details about side effects (e.g., whether cancellations are immediate or batched), error handling (what if no orders exist), idempotency, or permission requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is compact and uses structured Sphinx-style parameter documentation. Every sentence serves a purpose—defining the operation or explaining parameter semantics. The Chinese text is appropriately concise for the domain.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (per context signals), the description appropriately omits return value details. All three parameters are documented, compensating for the schema's lack of descriptions. However, it misses the opportunity to clarify the relationship with the singular 'cancel_order' sibling, leaving potential ambiguity for the agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates effectively via inline :param documentation. It specifies valid enum values for 'wallet_type' ('spot' or 'm') and 'side' ('buy' or 'sell'), notes defaults ('spot', null), and provides an example for 'market' ('btcusdt'), adding substantial meaning absent from the raw schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states it performs '批次取消' (batch cancellation) of orders ('掛單') for a specific market, using a specific verb-resource combination. The term '批次' (batch) implicitly distinguishes it from the sibling 'cancel_order' (singular), though it could explicitly clarify this distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus the sibling 'cancel_order', nor does it mention prerequisites (e.g., requiring open orders) or exclusions. It only documents parameters without contextual usage rules.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
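    The enum values the Parameters note credits ('spot'/'m' wallets, 'buy'/'sell' sides) could also be enforced client-side before dispatch. A minimal sketch, assuming the documented defaults; the function name and error wording are hypothetical:

    ```python
    # Hypothetical pre-dispatch check mirroring the enums documented in
    # the batch-cancel description.
    VALID_WALLET_TYPES = {"spot", "m"}
    VALID_SIDES = {"buy", "sell", None}  # None = cancel both sides

    def validate_cancel_orders(market: str, wallet_type: str = "spot", side=None):
        if wallet_type not in VALID_WALLET_TYPES:
            raise ValueError(f"wallet_type must be one of {sorted(VALID_WALLET_TYPES)}")
        if side not in VALID_SIDES:
            raise ValueError("side must be 'buy', 'sell', or omitted")
        return {"market": market, "wallet_type": wallet_type, "side": side}
    ```

    Putting the same constraints into the input schema as `enum` values would let clients reject bad arguments before the request leaves the agent.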

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full disclosure burden. It fails to mention that this is a read-only operation, whether data is real-time or cached, or any authorization requirements. The description only states what data is retrieved, not behavioral characteristics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise with zero filler. Uses standard docstring format (:param) efficiently. Purpose statement is front-loaded in the first line, followed by parameter specifics. Slightly informal embedding of param docs prevents a perfect 5.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the existence of an output schema and only two simple parameters (both documented), the description is minimally adequate. However, it lacks any mention of the return structure or what constitutes a 'complete' result set, which would help given the 'various currencies' scope.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description effectively compensates by documenting both parameters: wallet_type explains the 'spot' vs 'm' options and default value, while currency clarifies the specific-vs-all filtering behavior with an example ('btc'). Minor gap: 'm' wallet type is not explained (though siblings suggest margin wallet).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description clearly states the tool retrieves account balances for various currencies (取得帳戶各幣種餘額). The term '餘額' (balance) effectively distinguishes it from transaction-oriented siblings like get_deposits or get_withdrawals, though it doesn't explicitly contrast with get_user_info.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this tool versus alternatives. While the parameter descriptions imply wallet selection logic (spot vs M wallet), there is no mention of prerequisites, rate limits, or when to prefer this over other account-related tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. While '查詢' (query) implies a read-only operation, the description lacks disclosure about error handling (e.g., invalid UUID), authentication requirements, or rate limiting. No behavioral traits beyond the basic operation are documented.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely terse with no wasted words. However, the structure mixes natural language with Sphinx-style parameter documentation in a single string, which is slightly awkward but functional.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple retrieval-by-ID tool with one parameter and an existing output schema, the description covers the basics adequately. However, it lacks contextual guidance on the conversion lifecycle and differentiation from the plural 'get_converts', preventing a higher score.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates by documenting the single parameter ':param uuid: 兌換單 UUID', which clarifies that the UUID refers specifically to the exchange/conversion order. This adds essential semantic meaning that the schema lacks.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description '查詢單筆閃兌詳情' (Query single flash exchange/conversion details) provides a specific verb and resource. The term '單筆' (single) implicitly distinguishes this from the sibling tool 'get_converts' (plural), though explicit differentiation is not stated in the text.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus siblings like 'get_converts' (list) or 'submit_convert' (create). There are no prerequisites, error conditions, or workflow recommendations mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a safe read-only operation, idempotent, or if there are rate limits. It only states what data is retrieved without explaining side effects or safety properties.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is compact with no wasted words, consisting of a single purpose statement followed by two parameter documentation lines. While efficient, the use of Sphinx-style :param: directives in an MCP description field is slightly unconventional though functional.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple list retrieval tool with an output schema available, the description covers the basic functionality and parameters. However, it lacks safety classification (critical given no annotations), pagination strategy guidance, and explicit differentiation from the singular 'get_convert' tool, leaving contextual gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates adequately by documenting both parameters using docstring-style notation: limit is defined as '每頁筆數' (items per page) and page as '頁碼' (page number), including their defaults. This provides necessary semantic meaning absent from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves '閃兌歷史記錄' (flash exchange/convert history records) using the specific verb '取得' (get/obtain). While the plural form 'converts' and term 'history' implicitly distinguish it from the singular 'get_convert', it lacks explicit contrast with siblings that would warrant a 5.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'get_convert' (singular) or how it relates to 'submit_convert'. There are no prerequisites, exclusions, or workflow recommendations mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
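    The limit/page semantics documented for this history tool reduce to a standard offset slice. A sketch under assumed defaults (50 items per page, pages starting at 1, which matches the defaults cited for the internal-transfer tool below; the actual defaults here are not stated in the description):

    ```python
    # Pagination sketch for limit/page style listing endpoints.
    # Defaults are assumptions, not confirmed values for this tool.
    def paginate(records, limit: int = 50, page: int = 1):
        if limit < 1 or page < 1:
            raise ValueError("limit and page must be positive")
        start = (page - 1) * limit
        return records[start:start + limit]
    ```

    Stating exactly this convention (1-based pages, maximum limit) in the description would spare an agent a trial call to discover it.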

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention authentication requirements, rate limits, caching behavior, or whether this endpoint is idempotent. While '取得' (get) suggests a read operation, the description does not explicitly confirm it is safe or public, nor does it describe data freshness guarantees.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single efficient sentence with no redundant words. It is appropriately front-loaded with the action and resource. However, given the complete absence of annotations, it may be overly terse as it leaves no room for behavioral context that would help an agent understand operational constraints.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the tool has zero parameters (fully covered by schema) and an output schema exists (removing the need to describe return values), the description is minimally adequate. Given the lack of annotations, clear gaps remain regarding authentication scope, rate limiting, and whether the endpoint is public or private. It meets basic requirements but lacks operational context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rubric. No parameter semantics are required or provided.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves a list of all available currencies on MAX (a cryptocurrency exchange), using specific verb '取得' (get/retrieve) and resource '貨幣列表' (currency list). It distinguishes from siblings like get_accounts, get_markets, and get_tickers by focusing on the currency catalog rather than user data or market data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It does not indicate whether this should be called before trading to validate currency symbols, or how it relates to get_markets (which likely returns trading pairs). No prerequisites, exclusion criteria, or workflow context is provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what the default sorting is (chronological?), how far back history extends, or rate limiting concerns.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently sized with the main purpose stated first, followed by structured :param documentation. While compact, the docstring format is slightly less scannable than bullet points for LLM parsing, though still functional.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema, the description appropriately omits return value details. However, for a history/pagination endpoint with 4 optional filters, it lacks specifics on valid enum values for 'state', timezone handling, or maximum pagination limits.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage (only titles provided), the description compensates by documenting all four parameters: currency filter, pagination (limit/page defaults), and state filter. It clarifies that empty values return all records. However, it omits valid values for 'state' (e.g., 'completed', 'pending') and currency format expectations.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states '取得存款歷史記錄' (Get deposit history records), providing a specific verb and resource. However, it does not explicitly distinguish this list endpoint from the sibling 'get_deposit' (singular), which likely retrieves a specific record by ID.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'get_deposit' (singular) or 'get_internal_transfers'. It does not mention prerequisites, pagination best practices, or time range limitations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention safety characteristics (read-only vs destructive), authentication requirements, rate limits, or side effects. The only behavioral context provided is the scope limitation to internal transfers.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with a clear one-line purpose statement followed by compact parameter documentation. While the :param: format is slightly unconventional for MCP descriptions, it delivers information density without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (4 optional parameters, output schema exists), the description adequately covers input semantics but lacks behavioral context. Since an output schema is present, the absence of return value documentation is acceptable, though the missing safety/permission details prevent a higher score.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 0% schema description coverage (only titles present), the description excellently compensates by documenting all four parameters inline using Sphinx-style syntax. It explains the currency filter, pagination defaults (50 items/page), and crucially defines the 'side' enum values ('in' for receiving, 'out' for payment) which are absent from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves internal transfer records ('取得平台內部轉帳記錄') and specifies the scope as transfers between MAX platform users ('MAX 用戶間互轉'). This effectively distinguishes it from sibling tools handling external deposits/withdrawals (get_deposits, get_withdrawals) or margin transfers (get_m_transfers).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like get_m_transfers or how to handle pagination across large result sets. There are no explicit prerequisites or exclusions mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. The verb '取得' (get/obtain) implies a read-only operation, but the description omits pagination behavior, rate limits, authentication requirements, or whether the data is real-time vs. cached. Given the presence of an output schema, the lack of behavioral context is a notable gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately compact with no wasted words. The docstring-style format (:param) is functional if unconventional for MCP descriptions. The one-sentence purpose statement followed by parameter definitions creates a clear, scannable structure.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a retrieval tool with an output schema (handling return value documentation) and only two parameters, the description covers the parameter semantics adequately. However, it lacks behavioral context and high-level usage guidance that would make it complete, leaving it at minimum viable adequacy.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (only titles), but the description effectively compensates by explaining both parameters and their critical relationship: they are mutually exclusive identifiers (訂單 ID/order ID vs. 自訂訂單 ID/client-assigned order ID) and exactly one must be chosen. This adds essential semantic meaning absent from the structured schema.

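    The mutually exclusive identifier rule described above can be sketched as a small validation helper. This is an illustrative assumption about how such a constraint might be enforced, not the server's implementation; only the parameter names order_id and client_oid come from the review.

```python
from typing import Optional, Tuple

# Hypothetical helper enforcing "provide exactly one of order_id or
# client_oid" (擇一), the relationship the schema alone cannot express.
def resolve_order_key(order_id: Optional[int] = None,
                      client_oid: Optional[str] = None) -> Tuple[str, object]:
    if (order_id is None) == (client_oid is None):
        raise ValueError("provide exactly one of order_id or client_oid")
    if order_id is not None:
        return ("id", order_id)
    return ("client_oid", client_oid)
```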

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves trade details (成交明細) for a specific order (特定訂單), using a specific verb and resource. It effectively distinguishes from sibling tools like `get_order` (order status vs. trade fills) and `get_trades` (general trades vs. specific order trades), though the null title slightly detracts from perfect clarity.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description documents that `order_id` and `client_oid` are mutually exclusive (擇一, choose one), it provides no guidance on when to use this tool versus siblings like `get_trades` or `get_order`, nor does it mention prerequisites such as needing a valid order ID first.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It mentions the 'historical' nature of the data, implying read-only access to completed records, but lacks disclosure of auth requirements, rate limits, pagination behavior beyond the limit parameter, or data freshness guarantees.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The purpose statement is front-loaded, followed by RST-style :param: documentation. While functional, the param syntax is slightly awkward for an MCP description field. There are no redundant sentences, though the mixing of Chinese and English in the parameter documentation could be cleaner.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    An output schema is present (per context signals), so return values need not be described, and the parameters are documented in the description text. However, the description lacks differentiation from similar trade-querying siblings and omits operational context such as rate limits or the required authentication scope.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Excellent compensation for 0% schema description coverage. Documents all 3 parameters: wallet_type enums ('spot'/'m'), market filtering behavior (empty for all), and limit default (50). Adds substantial meaning missing from the schema titles alone.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action '取得' (get/retrieve) and resource '個人歷史成交記錄' (personal historical transaction records). The qualifier '個人' (personal) distinguishes it from sibling get_public_trades, though it doesn't clarify distinction from get_order_trades.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this versus siblings like get_order_trades or get_public_trades. No prerequisites or conditions mentioned. The term 'historical' implies completed trades but doesn't explicitly direct users away from get_open_orders.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, forcing the description to carry the full burden of behavioral disclosure, yet it omits safety properties (read-only vs. destructive), rate limits, and return value structure. While '取得' (get) implies a read operation, the lack of explicit behavioral traits for financial data creates uncertainty.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely compact with a clear single-sentence purpose followed by concise parameter documentation. The docstring format is slightly less scannable than prose but efficiently conveys the necessary information without waste.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the presence of an output schema reduces the description's burden for return values, the tool lacks differentiation from its singular sibling 'get_withdrawal' and omits valid filter values for the state parameter, leaving operational gaps for an AI agent.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates effectively by documenting all four parameters: currency filtering, pagination defaults (limit/page), and state filtering. However, it fails to specify valid enum values for the 'state' parameter or currency code formats.

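    The limit/page convention these list endpoints share (50 items per page by default) implies a standard exhaustion loop on the client side. The sketch below is an assumption about usage, with fetch_page standing in for any list tool such as get_withdrawals; it is not code from the server.

```python
# Hypothetical pagination loop: keep fetching until a short page
# signals the final page. fetch_page is any callable taking
# limit/page keywords and returning a list of records.
def fetch_all(fetch_page, limit: int = 50) -> list:
    page, records = 1, []
    while True:
        batch = fetch_page(limit=limit, page=page)
        records.extend(batch)
        if len(batch) < limit:  # short (or empty) page: we are done
            break
        page += 1
    return records
```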

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves withdrawal history records using the specific verb '取得' (get/retrieve). However, it fails to differentiate from the singular sibling tool 'get_withdrawal', leaving ambiguity about whether this returns a list versus a single record.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided regarding when to use this tool versus the singular 'get_withdrawal' or 'get_withdraw_addresses'. There are no prerequisites, authentication requirements, or pagination strategy recommendations mentioned.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but fails to disclose critical behavioral traits: that this deducts funds from the account, whether the operation is reversible, idempotency characteristics, or if it requires confirmation. For a financial mutation tool, this safety information is absent.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured with the purpose front-loaded, followed by Sphinx-style parameter documentation. While placing parameter docs in the description text is unconventional for MCP (schema descriptions are preferred), it is necessary given the schema's lack of descriptions. No sentences are wasted.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a two-parameter tool with an output schema (which handles return value documentation). However, as a financial mutation operation, it lacks important context about side effects, transaction confirmation flow, or account balance requirements that would help an agent use this safely.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Excellent compensation for 0% schema coverage. The description provides concrete examples ('usdt', '1000') and clarifies that amount is a string type (字串), adding semantic meaning entirely missing from the raw schema. Minor deduction for not specifying valid currency options or amount constraints.

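    The review notes that the amount is documented as a string (e.g. '1000'). A plausible reason, stated here as an assumption rather than a fact from the source, is to keep monetary values out of binary floating point; Python's decimal.Decimal handles such string amounts exactly, as this sketch shows.

```python
from decimal import Decimal, InvalidOperation

# Hypothetical normalizer for string-typed monetary amounts such as
# the '1000' example in the description. Illustrative only.
def normalize_amount(amount: str) -> str:
    try:
        value = Decimal(amount)
    except InvalidOperation:
        raise ValueError(f"not a valid amount: {amount!r}")
    if value <= 0:
        raise ValueError("amount must be positive")
    return format(value, "f")  # plain decimal notation, no exponent
```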

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action (還款/repayment) and target (M 錢包/M wallet), distinguishing it from siblings like submit_m_loan (borrowing) and submit_m_transfer (transferring). However, it assumes familiarity with what 'M' refers to without explicit definition.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives, prerequisites (e.g., requiring an active loan), or when repayment might fail. The description lacks operational context for the agent to determine appropriate invocation timing.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, and the description carries the full burden but discloses no behavioral traits. It does not indicate caching behavior, rate limits, authentication requirements, or that this is a read-only operation (though implied by '取得').

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with the verb, no redundant words. Appropriate length for a parameterless retrieval tool.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the output schema exists to define return values, the description lacks behavioral context that annotations would normally provide (read-only, safe). For a tool with zero parameters, the description is minimally sufficient but could indicate the read-only nature given no annotations exist.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero input parameters, establishing a baseline score of 4. No parameter documentation is required or provided.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action (取得/get) and specific resource (M錢包各幣種指數價格/M wallet currency index prices). It distinguishes from siblings by specifying 'M wallet' context and 'index prices' versus general tickers or loans.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives like get_ticker, get_tickers, or get_markets. The description only states what the tool does, not when to prefer it over sibling price-retrieval tools.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a cached or real-time endpoint, if it requires specific permissions, or whether it is idempotent and safe to call repeatedly. The word 'currently' (目前) implies timeliness but lacks specificity on data freshness.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single, efficient sentence with no redundant words. It front-loads the action (取得/get) and target resource immediately, making it appropriately sized for a zero-parameter data retrieval tool.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the tool has no input parameters and an output schema exists (reducing the description's burden), the complete absence of annotations means the description should ideally disclose behavioral traits like read-only safety or rate limiting. As written, it is minimally viable but leaves operational context gaps.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters. According to scoring rules, tools with no parameters receive a baseline score of 4, as there are no parameter semantics to describe beyond what the empty schema already communicates.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves current lending/borrowing interest rates (借貸利率) for various currencies in the M wallet (M 錢包), providing specific verb and resource identification. However, it does not explicitly differentiate from sibling tools like `get_m_interests` (which likely returns accrued interest records rather than rate schedules).

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites such as needing to check rates before calling `submit_m_loan`. It states what the tool does but not the contextual circumstances for its use.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but only documents parameter semantics. It fails to mention that this is a read-only operation, whether the data is real-time or cached, pagination behavior beyond parameter defaults, or any rate limiting considerations for historical data retrieval.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is economically structured with a single purpose statement followed by inline parameter documentation in Python docstring style. There is no redundant text; every line serves either to declare the function or explain a parameter's behavior.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    The description adequately covers the input parameters given the existence of an output schema (which handles return value documentation). However, it lacks domain context about what 'M wallet' refers to and whether this retrieves active loans, closed loans, or both, leaving gaps for an agent trying to understand the full scope of the tool.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Given 0% schema description coverage (only titles like 'Currency'), the description effectively compensates by documenting all three parameters: currency filtering behavior (including empty-string handling), limit semantics (items per page), and page numbering. It clarifies the default values and their meanings, adding significant value beyond the bare schema.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'M wallet loan history records' (借貸歷史記錄) using a specific verb-object structure. It distinguishes from sibling 'submit_m_loan' (creation) by specifying this retrieves historical records, though it could further clarify the difference from get_m_repayments or active position queries.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like get_m_limits (for checking limits) or get_m_liquidations (for liquidation history). There is no mention of prerequisites, such as requiring an existing M wallet, or when to prefer pagination versus filtering by currency.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. While '取得' (get/retrieve) implies a read-only operation, there is no information about idempotency, rate limits, data retention periods, or the nature of the repayment records returned despite the existence of an output schema.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is compact and uses structured Sphinx-style param documentation. Every line serves a purpose: one line for the tool's function, three lines for parameter semantics. No redundant or filler text.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (which handles return value documentation) and only three simple parameters, the description covers the basics adequately. However, it lacks context about what 'M wallet' refers to (likely margin) and doesn't mention that this is historical/read-only data, which would be helpful given the 30+ sibling tools.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage (titles only), the description effectively compensates by documenting all three parameters: currency as a filter that can be empty for all results, limit as items per page, and page as the page number. It also clarifies default values (50, 1) that appear in the schema but gains semantic meaning from the text.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'M 錢包還款歷史記錄' (M wallet repayment history records), providing a specific verb and resource. However, it does not explicitly differentiate from similar sibling tools like get_m_loans or submit_m_repayment in the description text itself.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    There is no guidance on when to use this tool versus alternatives (e.g., get_m_loans for active loans vs. repayment history), nor any mention of prerequisites or conditions. The description only covers parameter usage.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description fails to disclose critical behavioral traits: that this is a state-mutating operation creating financial obligations, that client_oid serves idempotency purposes, rate limits, or fund locking behavior. Only parameter-level constraints are documented.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Uses a structured :param format that is scannable and front-loads the purpose statement. Slightly verbose due to repetition of the ':param' keyword, but every sentence earns its place by documenting constraints not present in the schema.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema (reducing description burden for returns), the description lacks critical operational context for an 8-parameter financial tool: no mention of order lifecycle, error conditions, or the relationship between this tool and get_open_orders/cancel_order siblings. Parameter docs are thorough but operational context is thin.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Excellent compensation for 0% schema description coverage. The description provides examples (market: 'btcusdt'), enumerated values (side: 'buy'/'sell'), defaults (ord_type defaults to 'limit'), conditional requirements (stop_price required for stop orders), and format constraints (client_oid max 36 chars) that the schema completely lacks.

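    The constraints enumerated above (enumerated side values, a default ord_type, a conditionally required stop_price, a 36-character client_oid cap) translate directly into a pre-flight check. This is a hedged sketch built only from the constraints the review lists; the real exchange rules may differ.

```python
from typing import Optional

# Hypothetical pre-flight validation mirroring the documented
# constraints: side in ('buy', 'sell'), ord_type defaults to 'limit',
# stop_price required for stop orders, client_oid at most 36 chars.
def validate_order(market: str, side: str, volume: str,
                   ord_type: str = "limit",
                   stop_price: Optional[str] = None,
                   client_oid: Optional[str] = None) -> None:
    if side not in ("buy", "sell"):
        raise ValueError("side must be 'buy' or 'sell'")
    if "stop" in ord_type and stop_price is None:
        raise ValueError("stop_price is required for stop orders")
    if client_oid is not None and len(client_oid) > 36:
        raise ValueError("client_oid must be at most 36 characters")
```

    A check like this fails fast on the client, before any funds are committed by the actual submit call.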

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states '提交買賣訂單' (Submit buy/sell order) with specific verb and resource. It distinguishes from read-only siblings (get_*) and indicates trading functionality via wallet_type parameter ('spot' or 'm'), though it could better differentiate from other submission tools like submit_convert or submit_withdrawal.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this versus alternatives (e.g., submit_convert for currency conversion), prerequisites (sufficient balance), or workflow (create order → check status → potentially cancel). The sibling cancel_order is not mentioned as a reversal mechanism.

    Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It identifies the operation as a withdrawal submission but fails to disclose critical behavioral traits such as irreversibility, immediate fund deduction, potential fees, or required authorization levels.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise with two efficient lines: the first states the purpose, the second documents the parameter. No redundant information is present.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has only one parameter and an output schema exists (relieving the description from detailing return values), the description is minimally adequate. However, the lack of behavioral details and absence of annotations leaves gaps for a financial transaction tool.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description effectively compensates by providing the semantic meaning ('withdrawal amount'), data type clarification ('string'), and a concrete example ('10000') for the single parameter.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (submit/request) and resource (Taiwan Dollar/TWD withdrawal). It implicitly distinguishes from the generic 'submit_withdrawal' sibling by specifying the TWD currency, though it doesn't explicitly state when to choose one over the other.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the TWD specificity implies usage context, there is no explicit guidance on prerequisites (e.g., requiring withdrawal addresses), when to use this versus the generic 'submit_withdrawal', or conditions for successful invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden for behavioral disclosure. While '查詢' implies a read operation, the description fails to disclose error handling (what happens if UUID not found), rate limits, or safety characteristics that would help the agent understand operational constraints.

    Conciseness4/5

    The description is appropriately minimal—one line stating purpose and one line documenting the parameter. No extraneous text is present. However, the Sphinx docstring syntax (:param:) is slightly technical for MCP tool descriptions.

    Completeness3/5

    Given the presence of an output schema, the description need not explain return values. With only one required parameter and no complex nested structures, the description meets minimum requirements but lacks context on error states or the relationship between this single record and the list endpoint.

    Parameters4/5

    With 0% schema description coverage (only title 'Uuid' provided), the description compensates by documenting the parameter as '存款 UUID' (Deposit UUID), clarifying that this UUID specifically refers to a deposit record. The Sphinx-style formatting is suboptimal but functional.

    Purpose4/5

    The description '查詢單筆存款詳情' (Query single deposit details) clearly states the verb (query) and resource (single deposit details). It effectively distinguishes from the sibling 'get_deposits' by emphasizing '單筆' (single/individual), though it doesn't explicitly contrast with get_deposit_address.

    Usage Guidelines3/5

    The term '單筆' (single) implicitly contrasts with the plural 'get_deposits', suggesting this is for individual lookup versus listing. However, there is no explicit guidance on when to use this versus the plural form, error conditions (e.g., invalid UUID), or prerequisites.

  • Behavior2/5

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate that this is a read-only operation, offers no explanation of pagination behavior beyond the parameter names, and omits rate limits and error conditions. An output schema exists, but the safety profile is undisclosed.
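
The missing disclosure has a structured home in MCP itself: the specification defines tool annotation hints (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) for exactly this purpose. Below is a minimal sketch of attaching them to a query-style tool; the helper function and the tool-dict shape are illustrative assumptions, not this server's code.

```python
# Sketch: declaring MCP behavior annotations for a read-only lookup tool.
# The four hint names come from the MCP specification; the tool dict shape
# and the helper function are hypothetical.

def annotate_read_only(tool: dict) -> dict:
    """Mark a query-style tool as safe to call repeatedly with no side effects."""
    tool["annotations"] = {
        "readOnlyHint": True,      # does not modify its environment
        "destructiveHint": False,  # only meaningful when readOnlyHint is False
        "idempotentHint": True,    # repeat calls with the same args add no effect
        "openWorldHint": True,     # reaches an external system (the exchange API)
    }
    return tool

tool = annotate_read_only({"name": "get_deposits"})
```

With these hints present, an agent no longer has to infer safety from wording like '查詢' alone.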

    Conciseness4/5

    Extremely concise at 4 lines total. Purpose is front-loaded in the first line, followed by param documentation. No redundant text, though the :param docstring format is slightly informal for MCP descriptions.

    Completeness3/5

    Given the tool has an output schema and only 3 simple parameters, the description is minimally adequate. However, for an unannotated financial data retrieval tool, it should disclose read-only nature and pagination patterns to be complete.

    Parameters5/5

    With 0% schema description coverage (only titles), the description fully compensates by documenting all 3 parameters: currency filtering behavior (nullable for 'all'), limit (items per page, default 50), and page (default 1). Adds clear semantic meaning beyond the raw schema.

    Purpose4/5

    Description clearly states the action (取得/get) and specific resource (M wallet interest billing history records). While it doesn't explicitly name siblings, the specificity of 'interest billing history' effectively distinguishes it from related tools like get_m_interest_rates or get_m_loans.

    Usage Guidelines2/5

    Provides no guidance on when to use this tool versus alternatives (e.g., when to query specific currencies vs. all), nor does it mention prerequisites or conditions for use. Only parameter syntax is documented.

  • Behavior3/5

    No annotations provided, so description carries full burden. The pagination parameters (limit/page) imply list behavior and batching, but the description lacks explicit statements about read-only safety, rate limits, or empty result handling. Output schema exists, mitigating need for return value description.

    Conciseness4/5

    Extremely concise at three lines. No redundant text, though the Sphinx-style param documentation embedded in the description string is structurally suboptimal (belongs in schema descriptions).
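
The structural fix implied here is mechanical: lift each ':param' line out of the description string and into the matching JSON Schema property's description. A hedged sketch using the pagination parameters documented above (the schema is illustrative, not the server's actual inputSchema):

```python
# Before: parameter docs live in the tool description as Sphinx-style ":param" lines.
# After: each parameter carries its own schema-level description, so the tool
# description can stay a single purpose sentence. Values mirror the documented
# defaults (50 items per page, page 1).
input_schema = {
    "type": "object",
    "properties": {
        "limit": {
            "type": "integer",
            "default": 50,
            "description": "每頁筆數 (items per page, default 50)",
        },
        "page": {
            "type": "integer",
            "default": 1,
            "description": "頁碼 (page number, default 1)",
        },
    },
}

# Every property is now self-documenting; no ":param" lines are needed.
documented = all("description" in p for p in input_schema["properties"].values())
```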

    Completeness3/5

    Adequate for a simple pagination-based list tool with output schema present, but lacks critical context regarding the sibling 'get_m_liquidation' tool and any authentication/permission requirements given zero annotations.

    Parameters4/5

    Schema description coverage is 0%, but the description compensates effectively by documenting both parameters in docstring format: limit as '每頁筆數' (items per page, default 50) and page as '頁碼' (page number, default 1). This adds necessary semantic meaning absent from the schema.

    Purpose4/5

    Clearly states the action (取得/get) and resource (M wallet liquidation history records). However, it fails to distinguish from the sibling tool 'get_m_liquidation' (singular), leaving ambiguity about whether this returns a list vs. a specific record.

    Usage Guidelines2/5

    Provides no guidance on when to use this tool versus alternatives, particularly the singular 'get_m_liquidation'. No mention of prerequisites, filtering capabilities, or pagination strategy beyond the parameter defaults.

  • Behavior2/5

    With no annotations provided, the description carries the full burden of behavioral disclosure. It only implicitly indicates this is a read operation via the term '查詢' (query), but provides no information on error handling (e.g., behavior when order not found), authentication requirements, rate limits, or caching behavior.

    Conciseness4/5

    The description is appropriately concise with zero redundancy. It front-loads the purpose statement followed by parameter documentation in standard docstring format. However, given the complete lack of schema descriptions, it could benefit from one additional sentence on prerequisites or error behavior.

    Completeness3/5

    Given the existence of an output schema, the description appropriately omits return value details. For a 2-parameter query tool, the description covers the parameters adequately, but gaps remain regarding operational context such as authentication scope, error scenarios, or rate limiting that would help an agent invoke the tool confidently.

    Parameters4/5

    Despite 0% schema description coverage, the description effectively compensates by documenting both parameters: order_id is the system order ID, client_oid is the custom client order ID, and they are mutually exclusive. This provides sufficient semantic meaning for an agent to understand parameter usage.

    Purpose4/5

    The description '查詢單筆訂單詳情' (Query single order details) provides a specific verb and resource, clearly distinguishing this from sibling list operations like get_open_orders or get_closed_orders. However, it does not differentiate from get_order_trades, which also retrieves order-related data.

    Usage Guidelines3/5

    The description provides explicit parameter-level guidance documenting that order_id and client_oid are mutually exclusive ('擇一' = choose one). However, it lacks guidance on when to use this tool versus sibling alternatives like get_orders_history or get_open_orders for order retrieval.

  • Behavior3/5

    With no annotations provided, the description carries the full disclosure burden. The term 'bound' (已綁定) adds valuable context indicating these are whitelisted/verified addresses, but the description omits output format details, pagination behavior, and whether addresses can expire or be unbound through other operations despite the presence of an output schema.

    Conciseness4/5

    The description is compact and front-loaded, consisting of a single clear phrase followed by parameter documentation. Minor deduction for using ':param:' docstring syntax which is slightly unconventional for MCP tool descriptions, though still readable.

    Completeness3/5

    For a single-parameter list retrieval tool with an output schema, the description is minimally adequate. It compensates for the parameter's lack of schema documentation but could strengthen completeness by clarifying the relationship to withdrawal submission workflows or addressing pagination limits.

    Parameters4/5

    The description effectively compensates for 0% schema description coverage by documenting the currency parameter in the docstring-style note, explaining it filters by currency (e.g., 'btc') and that leaving it empty returns all currencies. This adds essential semantics missing from the schema's bare 'title': 'Currency'.

    Purpose4/5

    The description clearly states it retrieves a list of 'bound withdrawal addresses' (已綁定的提現地址列表), specifying both the resource (withdrawal addresses) and their state (bound/verified). However, it does not explicitly differentiate from the singular 'get_deposit_address' or explain what distinguishes a 'bound' address from a temporary one.

    Usage Guidelines2/5

    No guidance is provided on when to use this tool versus siblings like 'get_withdrawals' (which retrieves transaction history) or 'get_deposit_address'. It fails to mention prerequisites such as authentication requirements or that this tool is typically used before 'submit_withdrawal' to verify available destination addresses.
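
The workflow gap noted here (list bound addresses, then withdraw) can be expressed as a small guard. The record shape below is an assumption for illustration only; the actual response format is defined by the tool's output schema.

```python
# Sketch: confirm a destination is among the bound (whitelisted) withdrawal
# addresses before calling submit_withdrawal. The record shape is hypothetical.
def is_bound(address: str, bound_addresses: list[dict]) -> bool:
    return any(entry.get("address") == address for entry in bound_addresses)

# Hypothetical result of a "get bound withdrawal addresses" call.
bound = [{"currency": "btc", "address": "bc1qexample"}]
```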

  • Behavior2/5

    No annotations are provided, and the description carries the full burden for a destructive operation. It fails to disclose error conditions (e.g., order not found, already filled), idempotency guarantees, or side effects. The term '掛單' (pending order) implies state requirements but doesn't clarify failure modes.

    Conciseness4/5

    The description is appropriately compact with zero filler. The Sphinx-style :param: documentation is structured and efficient, though slightly more optimized for human developers than LLM parsing. No sentences waste space.

    Completeness3/5

    Given the presence of an output schema (per context signals), the description appropriately omits return value details. However, for a financial trading API mutation, it lacks necessary context about order state prerequisites, rate limiting, or validation rules beyond the basic parameter descriptions.

    Parameters4/5

    With 0% schema description coverage, the description compensates by documenting the critical XOR relationship: '與 client_oid 擇一' (choose one with client_oid). It explains that order_id is the system ID while client_oid is the custom ID, providing essential semantics missing from the schema.

    Purpose5/5

    The description '取消單筆掛單' (Cancel single pending order) provides a specific verb (cancel) and resource (pending order). The term '單筆' (single) effectively distinguishes this tool from the sibling 'cancel_orders' (plural), clarifying this is for individual cancellation.

    Usage Guidelines2/5

    The description provides no explicit guidance on when to select this tool versus 'cancel_orders' or other order management tools. While the parameter documentation implies mutual exclusivity between identifiers, there is no 'when-to-use' or 'when-not-to-use' guidance for tool selection.

  • Behavior2/5

    No annotations are provided, so the description carries the full burden. While '取得' implies a read operation, the description lacks disclosure of error handling (e.g., an invalid sn) and idempotency, leaving the output schema as the only signal of return structure.

    Conciseness4/5

    Extremely concise with zero waste—one line stating purpose and one line documenting the parameter. Appropriately front-loaded, though the Python-docstring-style ':param' formatting is slightly less idiomatic than natural language for MCP descriptions.

    Completeness3/5

    For a single-parameter retrieval tool with an output schema, the description adequately covers the basics. However, it misses the opportunity to mention the relationship to 'get_m_liquidations' for obtaining the serial number, which would help agents understand the workflow.

    Parameters4/5

    Schema description coverage is 0% (parameter only has title 'Sn'). The description compensates by documenting ':param sn: 清算序號' (liquidation serial number), clarifying the parameter's semantic meaning and data type context beyond the schema's type information.

    Purpose5/5

    The description '取得單筆 M 錢包清算詳情' provides a specific verb (取得/Get), resource (M wallet liquidation details), and scope (單筆/single record). It clearly distinguishes from sibling tool 'get_m_liquidations' (plural) by emphasizing this retrieves a single record.

    Usage Guidelines2/5

    No guidance provided on when to use this tool versus the plural alternative 'get_m_liquidations', or prerequisites such as obtaining the serial number (sn) from elsewhere. Lacks explicit when-to-use context.

  • Behavior3/5

    No annotations are provided, so the description carries the full burden. It discloses pagination behavior (limit/page parameters) and filtering capabilities, including default values (50, 1). However, it fails to explicitly state this is a read-only operation, mention rate limits, or describe the output structure despite having an output schema available.

    Conciseness4/5

    The description is appropriately compact with the purpose stated first, followed by parameter documentation. No extraneous information is included. The :param notation is efficient but slightly unconventional for MCP descriptions.

    Completeness4/5

    Given the tool's simplicity (3 primitive parameters, no nesting) and the presence of an output schema, the description provides sufficient context for invocation. It covers the essential purpose and parameter semantics needed for correct usage.

    Parameters4/5

    With 0% schema description coverage, the description compensates by documenting all three parameters: currency filtering behavior ('篩選幣種;留空查全部'), limit as items per page ('每頁筆數'), and page as page number ('頁碼'), including default values. This effectively bridges the schema gap, though the :param format is less formal than schema descriptions.

    Purpose4/5

    The description clearly states the tool retrieves ('取得') M wallet fund transfer records, identifying the specific resource. It distinguishes from sibling 'submit_m_transfer' (get vs. submit) and 'get_internal_transfers' by specifying 'M wallet', though it could further clarify what 'M wallet' means in this context.

    Usage Guidelines2/5

    The description provides no guidance on when to use this tool versus similar alternatives like 'get_internal_transfers' or 'get_deposits'. It lacks prerequisites or conditions for selection.

  • Behavior3/5

    With no annotations provided, the description carries the full burden. It discloses parameter constraints (max 10000 for limit, valid period values) but lacks information on rate limits, authentication requirements, error handling, or data ordering (chronological vs reverse). The output schema handles return format disclosure.

    Conciseness5/5

    The description is compact and efficiently structured with a one-sentence purpose statement followed by inline parameter documentation. Every line provides essential information with no redundancy or filler content.

    Completeness4/5

    For a 3-parameter data retrieval tool with an output schema present, the description is reasonably complete. It documents all parameters and their constraints. It could be improved by noting the time range covered (recent N periods vs. historical range), but this is acceptable given that the output schema handles the return structure.

    Parameters4/5

    Given 0% schema description coverage, the description effectively compensates by documenting all three parameters using :param notation—providing an example for market ('btcusdt'), enumerated valid values for period, and bounds for limit. This adequately covers the semantic gaps in the schema.
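
The documented constraints (an enumerated period set, limit bounded at 10000) can be validated before the request. In the sketch below the concrete period values are placeholders, since this review does not enumerate the actual set; only the limit bound of 10000 is taken from the description.

```python
# Sketch: validate get_k arguments client-side before the request.
# ALLOWED_PERIODS is an illustrative placeholder (values in minutes); the real
# enumeration lives in the tool description / MAX API docs.
ALLOWED_PERIODS = {1, 5, 15, 60, 1440}

def validate_k_args(market: str, period: int, limit: int) -> None:
    if period not in ALLOWED_PERIODS:
        raise ValueError(f"period must be one of {sorted(ALLOWED_PERIODS)}")
    if not 1 <= limit <= 10000:  # documented upper bound on candles per call
        raise ValueError("limit must be between 1 and 10000")
```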

    Purpose4/5

    The description clearly states it retrieves market K-line (candlestick) data using the specific term 'K 線(蠟燭圖)', which distinguishes it from siblings like get_ticker or get_trades. However, it doesn't explicitly differentiate when to use this versus other historical data endpoints like get_trades.

    Usage Guidelines2/5

    The description provides no guidance on when to use this tool versus sibling market data tools (get_ticker, get_depth, get_trades, get_public_trades). There are no prerequisites, conditions, or 'when-not-to-use' warnings specified.

  • Behavior2/5

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a public or authenticated endpoint, whether it is read-only, or any rate limiting concerns. It only states the functional output without behavioral context.

    Conciseness5/5

    The description consists of a single, efficient sentence with no redundant words. It is appropriately front-loaded with the key verb and scope, making it easy to parse quickly.

    Completeness3/5

    While the output schema exists (reducing the need to describe return values) and there are no parameters to document, the lack of annotations combined with no mention of authentication requirements or endpoint visibility leaves gaps. It is minimally viable for invocation but incomplete regarding operational context.

    Parameters4/5

    The input schema contains zero parameters. According to the baseline rules, this warrants a baseline score of 4, as there are no parameter semantics to clarify beyond what the schema already indicates (empty object).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description '取得 MAX 所有可用交易市場列表' clearly specifies the action (取得/get) and resource (交易市場列表/trading markets list), and distinguishes itself from siblings like get_currencies (currencies vs markets) and get_ticker (specific price data vs market list discovery).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it indicate prerequisites such as authentication requirements or that it should be called before submit_order to discover valid market pairs.

  • Behavior 3/5

    With no annotations provided, the description carries the full burden. It discloses one key behavioral trait: leaving the parameter empty returns all markets ('留空回傳全部'). However, it lacks disclosure on read-only safety, rate limits for bulk queries, caching behavior, or whether the data is real-time versus delayed.

    Conciseness 5/5

    Highly efficient two-line structure using the standard :param documentation format. The first line establishes purpose; the second details the parameter with an example. No redundant text, properly front-loaded.

    Completeness 3/5

    Adequate for a single-parameter read operation with an output schema available. The description covers the input parameter well but misses the opportunity to clarify its relationship with 'get_ticker' or warn about the performance implications of fetching all markets at once. Sufficient but not comprehensive.

    Parameters 5/5

    Excellent compensation for 0% schema coverage. The description fully documents the single 'markets' parameter: format (comma-separated), concrete example ('btcusdt,ethusdt'), type (string/null), and default behavior (empty returns all). This provides complete semantic meaning absent from the schema.

    Purpose 4/5

    The description clearly states it retrieves real-time quotes ('即時行情') for multiple markets ('多個市場'), with explicit emphasis on batch querying capability ('一次查詢多個'). This effectively distinguishes it from the sibling tool 'get_ticker' (singular). Minor deduction for not specifying what data fields constitute a ticker (e.g., price, volume, change).

    Usage Guidelines 2/5

    No explicit when-to-use guidance or comparison to alternatives. While the batch capability is mentioned ('可一次查詢多個'), there is no guidance on when to prefer this over 'get_ticker' for single queries, or warnings about querying 'all' markets (empty parameter) versus specific ones.
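    To make the praised convention concrete, the comma-separated markets value could be assembled by a small caller-side helper. The sketch below is hypothetical; the function name and the lowercasing/trimming normalization are assumptions, not part of the reviewed server.

```python
def build_tickers_params(markets=None):
    """Build query parameters for a batch ticker request.

    Per the reviewed description: 'markets' is a comma-separated
    string such as 'btcusdt,ethusdt', and leaving it empty returns
    all markets. Hypothetical helper, not the server's own code.
    """
    if not markets:
        # Empty or None means "all markets": send no filter at all.
        return {}
    return {"markets": ",".join(m.strip().lower() for m in markets)}

# Two specific markets versus the query-everything default.
print(build_tickers_params(["BTCUSDT", " ethusdt"]))  # {'markets': 'btcusdt,ethusdt'}
print(build_tickers_params())                         # {}
```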

  • Behavior 2/5

    No annotations are provided, so the description carries the full burden of disclosing behavioral traits. For a financial 'submit' operation, it fails to mention that this is a destructive/write action, whether it is reversible, fee implications, slippage risks, or failure modes (e.g., insufficient liquidity).

    Conciseness 5/5

    The description is compact and efficiently structured using inline parameter documentation. Every line serves a purpose: the first line establishes the action, and subsequent lines map parameters to semantics and constraints without redundancy.

    Completeness 2/5

    For a financial transaction tool with an output schema, the description is inadequate. It omits operational context such as fee structures, balance validation, atomicity guarantees, or error conditions. While the output schema exists (reducing the need to describe return values), the description fails to explain the transaction semantics and risks inherent in a currency conversion operation.

    Parameters 4/5

    With 0% schema description coverage, the description compensates by documenting all 4 parameters (from_currency as 'sell currency', to_currency as 'buy currency', etc.) and their mutual exclusivity constraint. However, it omits details like format constraints (numeric strings), valid value ranges, or the fact that amounts default to null.

    Purpose 5/5

    The opening line '執行幣種閃兌(即時換匯)' (execute a currency flash swap / instant exchange) provides a specific verb and resource. It clearly distinguishes this from siblings like get_convert (read status) and submit_order (standard trading) by specifying the 'flash exchange' mechanism.

    Usage Guidelines 3/5

    The description documents the critical constraint that from_amount and to_amount are mutually exclusive ('二擇一', choose one of the two), which guides correct invocation. However, it lacks explicit guidance on when to use this instant conversion versus submit_order for standard trading, or prerequisites like balance requirements.
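    The mutual-exclusivity constraint could be enforced client-side before the tool is ever invoked. The sketch below is hypothetical and only illustrates the '二擇一' rule; it is not the server's validation logic.

```python
def validate_convert_args(from_currency, to_currency,
                          from_amount=None, to_amount=None):
    """Pre-flight check for a flash-exchange call.

    Enforces the '二擇一' rule quoted in the review: exactly one of
    from_amount / to_amount may be supplied. Hypothetical client-side
    validation, not the server's implementation.
    """
    if (from_amount is None) == (to_amount is None):
        raise ValueError("provide exactly one of from_amount or to_amount")
    return {
        "from_currency": from_currency,  # sell side
        "to_currency": to_currency,      # buy side
        "from_amount": from_amount,
        "to_amount": to_amount,
    }

args = validate_convert_args("usdt", "btc", from_amount="100")
print(args["from_currency"], args["from_amount"])  # usdt 100
```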

  • Behavior 3/5

    With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to specify whether the tool generates a new address or retrieves an existing one, its idempotency, or cache behavior. It only minimally describes the input parameter with examples.

    Conciseness 4/5

    The description is appropriately brief for a single-parameter tool, with the purpose front-loaded in the first line. The Sphinx-style parameter documentation is slightly technical but efficiently conveys the necessary information without redundancy.

    Completeness 3/5

    While adequate for a simple retrieval operation and sufficient given the existence of an output schema, the description omits important context such as whether the address is user-specific, if it's safe to call repeatedly, and how it relates to deposit history retrieval tools.

    Parameters 5/5

    The input schema has 0% description coverage. The description compensates fully by explaining `currency_version` as '幣種版本' and providing concrete examples ('eth', 'trc20usdt', 'erc20usdt'), clarifying the expected format and valid network identifiers.

    Purpose 5/5

    The description clearly states the tool retrieves a deposit address ('取得指定幣種的充值地址') with a specific verb and resource. It effectively distinguishes from siblings like `get_deposit` (likely history) and `get_withdraw_addresses` through the explicit '充值' (deposit) terminology.

    Usage Guidelines 2/5

    The description provides no guidance on when to use this tool versus similar alternatives like `get_deposit` or `get_deposits`, nor does it mention prerequisites such as authentication requirements or valid currency states.

  • Behavior 3/5

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return limit constraint (最多 1000) and filtering behavior, but lacks information on authentication requirements, rate limits, pagination behavior, or cache characteristics. The term '取得' (get/fetch) implies read-only access, but this is not explicitly stated.

    Conciseness 5/5

    The description uses efficient Sphinx-style :param: documentation that is appropriately front-loaded. Every sentence serves a purpose: one line for the core function, followed by structured parameter definitions. No redundant or wasteful text is present.

    Completeness 4/5

    For a 3-parameter listing tool with an output schema (which obviates the need to describe return values), the description is quite complete. All parameters are documented and the scope (unexecuted orders only) is clear. The only gap is the lack of differentiation from the numerous sibling order-related tools.

    Parameters 5/5

    Given 0% schema description coverage, the description excellently compensates by documenting all three parameters with complete semantics: wallet_type clarifies the valid values ('spot' or 'm') and the default, market provides an example ('btcusdt') and null behavior, and limit specifies bounds (max 1000) and the default (50).

    Purpose 4/5

    The description clearly states the tool retrieves '目前未成交的掛單列表' (the list of currently unexecuted open orders), providing a specific verb and resource. However, it does not distinguish from siblings like get_closed_orders, get_order, or get_orders_history, which could help the agent select the correct tool for order state filtering.

    Usage Guidelines 2/5

    The description provides no guidance on when to use this tool versus alternatives such as get_closed_orders (for filled/cancelled orders) or get_order (for specific order lookup). No prerequisites or exclusion criteria are mentioned, leaving the agent without selection context.
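    The documented defaults and bounds translate naturally into a caller-side normalizer. The following is an illustrative sketch under the stated semantics (wallet_type 'spot'/'m', optional market filter, limit 50 by default with a 1000 maximum); it is not the server's validation code.

```python
def normalize_open_orders_args(wallet_type="spot", market=None, limit=50):
    """Apply the defaults and bounds stated in the reviewed description:
    wallet_type is 'spot' or 'm' (default 'spot'), market is an optional
    filter, and limit defaults to 50 with a documented maximum of 1000.
    Hypothetical normalizer for illustration only.
    """
    if wallet_type not in ("spot", "m"):
        raise ValueError("wallet_type must be 'spot' or 'm'")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    args = {"wallet_type": wallet_type, "limit": limit}
    if market is not None:
        args["market"] = market  # omit entirely to query all markets
    return args

print(normalize_open_orders_args(market="btcusdt", limit=200))
```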

  • Behavior 3/5

    With no annotations provided, the description carries the full disclosure burden. It successfully communicates the authentication requirements (public access, no login needed) but omits other behavioral traits such as rate limits, data freshness windows, or pagination behavior beyond the limit parameter.

    Conciseness 5/5

    The description is optimally concise with three efficient lines: one for purpose/scoping, two for parameter documentation. The Sphinx-style :param format provides clear structure without verbosity, and every line conveys critical information.

    Completeness 4/5

    Given the simple two-parameter structure and the existence of an output schema (not shown but indicated), the description is reasonably complete. It covers the authentication boundary and parameter semantics. A minor gap remains in explicitly differentiating from 'get_trades' regarding private vs. public data access.

    Parameters 4/5

    Schema description coverage is 0%, but the description fully compensates using :param syntax. It documents both parameters: 'market' includes an example ('btcusdt'), and 'limit' includes the default value (30). This adds essential semantic meaning missing from the bare schema titles.

    Purpose 4/5

    The description clearly states the tool retrieves recent public trade records for a market, using a specific verb (取得/get) and resource (公開成交記錄/public trade records). It distinguishes from sibling 'get_trades' by emphasizing 'public' and 'no login required' (無需登入), though it could more explicitly contrast with authenticated alternatives.

    Usage Guidelines 3/5

    The description includes 'no login required' (無需登入), which implicitly guides usage toward public/unauthenticated scenarios. However, it lacks explicit when-to-use guidance comparing it against 'get_trades' or 'get_order_trades', and does not mention prerequisites like valid market IDs.

  • Behavior 3/5

    No annotations are provided, so the description carries the full burden. It adds valuable temporal context by specifying 'currently available' (目前可用), implying real-time data. However, it lacks other behavioral disclosures such as whether the operation is read-only, idempotent, or requires specific permissions.

    Conciseness 5/5

    The description is a single, efficient phrase with no redundant words. It is appropriately front-loaded with the action verb and maintains focus solely on the tool's purpose.

    Completeness 4/5

    Given the low complexity (zero parameters) and the presence of an output schema to define return values, the description is sufficiently complete. It identifies the specific financial instrument (M wallet lending limit) clearly, though it could benefit from explicitly noting it is a safe read operation.

    Parameters 4/5

    The tool accepts zero parameters, which triggers the baseline score of 4. The input schema is empty and correctly reflects this, requiring no additional semantic clarification from the description.

    Purpose 5/5

    The description uses a specific verb (取得/get) and clearly identifies the resource as the M wallet's current available lending limit (M 錢包目前可用借貸額度上限). It effectively distinguishes from siblings like get_m_loans (which retrieves existing loans) by focusing on the credit limit/capacity rather than active debt.

    Usage Guidelines 2/5

    The description states what the tool retrieves but provides no guidance on when to use it versus alternatives. It does not indicate that this should be checked before using submit_m_loan, nor does it contrast with get_m_ad_ratio or other M-wallet balance tools.

  • Behavior 3/5

    With no annotations provided, the description carries the full burden. It successfully discloses pagination behavior ('分頁'), ordering logic ('依 ID 順序'), and cursor-based navigation ('from_id' for paging, 翻頁). However, it omits safety/permission disclosures (read-only status, rate limits) that would be essential for a data retrieval tool without annotation coverage.

    Conciseness 4/5

    The description is efficiently structured with a one-line purpose statement followed by standard :param documentation. While the param lines add length, they are necessary given the schema's lack of descriptions. No extraneous information is present.

    Completeness 4/5

    With an output schema present, the description appropriately omits return value details. It covers all input parameters and explains the pagination strategy. The only gap is the lack of explicit differentiation from sibling order query tools (get_open_orders, get_closed_orders) in a crowded API surface.

    Parameters 5/5

    Given 0% schema description coverage (only titles present), the description fully compensates by documenting all 4 parameters with clear semantics: wallet_type explains the 'spot'/'m' enum values and default, market describes filtering behavior, from_id clarifies pagination usage, and limit specifies the return count default.

    Purpose 4/5

    The description clearly states the tool '依 ID 順序分頁取得訂單歷史' (paginates through order history in ID order), providing a specific verb and resource. It adds context that it is '適合完整匯出' (suitable for complete export), but does not explicitly distinguish from sibling tools like get_closed_orders or get_open_orders.

    Usage Guidelines 3/5

    The description provides implied usage guidance by noting it is suitable for '完整匯出' (complete export), suggesting when to use it. However, it lacks explicit when-not-to-use guidance or named alternatives among the many sibling order-retrieval tools (get_open_orders, get_closed_orders, etc.).
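    The from_id cursor pattern suited to complete export can be sketched as a simple loop. The fetch_page signature below is an assumption standing in for the real tool call; the fake backend exists only to exercise the loop.

```python
def export_all_orders(fetch_page, limit=50):
    """Exhaustively export an ID-ordered order history.

    Implements the from_id cursor pattern described in the review:
    request a page, remember the last seen ID, and repeat until a
    short page signals the end. fetch_page(from_id, limit) is a
    hypothetical stand-in for the real tool call.
    """
    orders, cursor = [], 0
    while True:
        page = fetch_page(from_id=cursor, limit=limit)
        orders.extend(page)
        if len(page) < limit:
            return orders
        cursor = page[-1]["id"]  # next page starts after the last ID seen

# Fake backend holding 120 orders, to exercise the loop.
data = [{"id": i} for i in range(1, 121)]
fake = lambda from_id, limit: [o for o in data if o["id"] > from_id][:limit]
print(len(export_all_orders(fake)))  # 120
```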

  • Behavior 3/5

    Without annotations, the description carries the burden of behavioral disclosure. It explains filtering logic (empty values query all) and pagination defaults, but fails to indicate this is a read-only operation, mention rate limits, or describe auth requirements.

    Conciseness 5/5

    The description is efficiently structured with the purpose statement first, followed by param documentation in docstring format. No redundant text; every line provides actionable information.

    Completeness 3/5

    Given the simple 4-parameter schema and existence of an output schema, the description adequately covers inputs. However, gaps remain in usage context and safety disclosure (read-only status) that should be addressed when no annotations are present.

    Parameters 5/5

    With 0% schema description coverage, the description fully compensates by documenting all 4 parameters: reward_type and currency explain filtering behavior with null handling, while limit and page specify pagination defaults (50 and 1 respectively).

    Purpose 5/5

    The description clearly states the tool retrieves reward records (取得獎勵記錄) and provides specific examples (rebates, airdrops, interest rewards) that define the scope, distinguishing it from sibling tools like get_deposits or get_trades.

    Usage Guidelines 2/5

    No guidance is provided on when to use this tool versus alternatives (e.g., get_trades for trading history or get_deposits for deposit history). There are no exclusions, prerequisites, or 'see also' references to related tools.

  • Behavior 3/5

    No annotations are provided, so the description carries the full disclosure burden. It specifies key return fields (KYC status, account ID), which helps, but omits safety characteristics (idempotency, rate limits), cache behavior, and error conditions. For a read-only getter with no parameters, mentioning the specific data returned provides minimal viable transparency.

    Conciseness 5/5

    A single sentence with an information-dense parenthetical and no filler words. The structure immediately identifies the action, subject, and specific data points returned, achieving maximum clarity with minimum length.

    Completeness 4/5

    Given the tool has zero parameters and an output schema exists, the description adequately covers the essential functionality by listing representative output fields. However, it could be improved by explicitly mentioning authentication/session requirements or rate limit considerations for a user-data endpoint.

    Parameters 4/5

    The input schema has zero parameters with 100% coverage. Per the scoring rules, zero-parameter tools receive a baseline score of 4. The description appropriately does not fabricate parameter semantics where none exist.

    Purpose 5/5

    The description uses the specific verb '取得' (get/retrieve) with a clear resource '用戶資訊' (user information) and scope '目前登入帳戶' (currently logged-in account). The parenthetical examples (KYC status, account ID) precisely distinguish this from sibling 'get_accounts', which likely returns trading account data rather than identity/KYC profile data.

    Usage Guidelines 2/5

    No explicit guidance on when to use versus siblings like 'get_accounts', nor any prerequisites mentioned. While 'currently logged-in' implies session-based authentication is required, the description does not state when this tool is preferred over alternatives or any conditions that would make it unavailable.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses behavioral specifics like the maximum limit of 1000 records and enumerates valid state values (done/cancel/finalizing/failed). However, it omits whether the operation is read-only, pagination behavior, sorting order, or rate limiting concerns.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The docstring format is efficiently structured with the purpose statement front-loaded, followed by structured parameter documentation. Every line provides necessary information without redundancy or filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema and the parameter documentation in the description, the definition is nearly complete. It could be improved by explicitly stating this is a safe read-only operation (given the 'get' prefix) and mentioning pagination or sorting behavior, but the core contract is well-defined.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 0% schema description coverage, the description comprehensively documents all four parameters using :param directives. It specifies data types (spot/m for wallet_type), default values (spot, 50), constraints (max 1000), and enumerated valid states (done/cancel/finalizing/failed), fully compensating for the schema's lack of metadata.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves closed orders (取得已關閉的訂單) and immediately clarifies the scope includes filled, cancelled, and other terminal states (已成交、已取消等). This effectively distinguishes it from sibling tools like get_open_orders and get_order.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives like get_open_orders, get_order, or get_orders_history. It does not mention prerequisites, authorization requirements, or conditions where this query would be preferred over others.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses critical behavioral context: the ratio calculation methodology (assets divided by loans) and the business logic significance (liquidation trigger when below threshold). It does not mention auth or rate limits, but the liquidation risk disclosure is the most essential behavioral trait for this financial tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with two information-dense clauses: the retrieval action with formula clarification, and the liquidation consequence. No filler words or redundant explanations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has zero parameters and an output schema exists (per context signals), the description appropriately focuses on explaining what the metric represents and its significance (liquidation risk) rather than return values. It provides sufficient context for an agent to understand when this specific margin metric is relevant.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters (empty object). Per evaluation rules, 0 parameters receives a baseline score of 4, as there are no parameter semantics to describe beyond the schema structure.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states it retrieves the current AD ratio (資產/借貸 = assets/loans) for the M wallet, using a specific verb (取得/get) and resource. It distinguishes from siblings like get_m_loans or get_m_liquidation by focusing on the ratio calculation rather than the underlying loan records or liquidation events.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The mention of liquidation thresholds (低於門檻會觸發清算) implies the tool is used for risk monitoring, but lacks explicit when-to-use guidance or named alternatives (e.g., 'use this before submitting a loan to check risk vs using get_m_limits').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully explains the directional semantics of the transfer (into vs out of M wallet), but fails to disclose critical behavioral traits: whether the operation is idempotent, requires sufficient balance, is irreversible, or carries financial risk. Given this is a financial mutation tool, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately concise with a clear one-line summary followed by structured :param documentation. While the parameter docs are efficiently packed with examples, the inline code-documentation style slightly reduces readability compared to natural language prose, though it remains highly functional.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the existence of an output schema (not shown but indicated), the description appropriately avoids duplicating return value documentation. However, for a 3-parameter financial mutation tool with zero annotations, the description should include prerequisites (sufficient balance) or safety warnings (irreversible transaction) to be considered complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage (only titles provided), the description fully compensates by documenting all three parameters with clear semantics and concrete examples: currency ('usdt'), amount format (string '500'), and side enum values ('in'/'out' with directional meanings). This provides exactly the information the schema lacks.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (transfer funds) and the exact resources involved (spot wallet and M wallet). It effectively distinguishes from siblings like 'submit_withdrawal' (external transfers) and 'submit_m_loan' (borrowing) by specifying this is a balance transfer between internal wallets.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The parameter documentation for 'side' ('in' vs 'out') implies the usage pattern for transferring in each direction. However, there is no explicit guidance on when to use this tool versus alternatives like 'submit_m_loan' for borrowing needs, or workflow prerequisites like checking balances first.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical traits: irreversibility of withdrawals, fee structures, confirmation requirements, or processing delays. It only notes this is an 'application' (申請), hinting at a request workflow, but lacks safety warnings essential for a financial mutation tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The :param docstring format efficiently organizes information across three lines with zero redundancy. The purpose statement is front-loaded, followed by structured parameter details. The format is technical but appropriate for the content density.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 2-parameter tool with an output schema, the description adequately covers input requirements. However, given the high-stakes nature of cryptocurrency withdrawals, the description is incomplete—it omits currency/asset specification methodology (likely implicit in the address UUID), fee disclosures, and minimum amount constraints that would aid agent decision-making.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage (only titles). The description fully compensates by providing clear semantics for both parameters: 'withdraw_address_uuid' includes prerequisite instructions to use get_withdraw_addresses, and 'amount' specifies the string format with a concrete example ('100').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action '提交' (submit) and resource '加密貨幣提現申請' (cryptocurrency withdrawal application). It effectively distinguishes itself from the sibling tool 'submit_twd_withdrawal' by explicitly specifying 'cryptocurrency' (加密貨幣) versus fiat currency.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit prerequisite guidance embedded in the parameter documentation: '先用 get_withdraw_addresses 取得' (first use get_withdraw_addresses to obtain). This correctly sequences the workflow. However, it lacks explicit contrast with 'submit_twd_withdrawal' regarding when to use crypto vs fiat withdrawal.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It successfully indicates the return structure (bid/ask prices and quantities) and mentions the 300-item limit, but omits operational details like rate limits, data freshness/real-time nature, or authentication requirements that would help an agent understand the tool's reliability and constraints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is compact and front-loaded: one sentence for purpose, followed by two :param lines documenting inputs. Every element earns its place with no redundancy or filler text, appropriate for a simple two-parameter tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that an output schema exists (per context signals), the description appropriately avoids redundant return value documentation. The combination of clear purpose statement and complete inline parameter documentation provides sufficient context for this read-only market data tool, though mentioning whether authentication is required would achieve full completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting both parameters inline. It provides an example ('btcusdt') for the market parameter and specifies constraints (max 300) plus default value (20) for the limit parameter, giving agents complete semantic context missing from the structured schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves market depth (order book/掛單簿) and displays bid/ask prices and quantities. This specific verb+resource combination effectively distinguishes it from siblings like get_ticker (price statistics), get_public_trades (executed trades), and get_open_orders (user's personal orders).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies this is a read-only market data tool (evident from '取得/get' and the return value description), it lacks explicit guidance on when to use this versus alternatives like get_ticker for price snapshots or get_public_trades for recent execution history. No prerequisites or exclusions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses specific behavioral aspects by listing the exact data fields returned (latest price, bid/ask prices, 24h volume) and noting '即時' (real-time). However, it omits mention of read-only safety or rate limiting.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Uses efficient docstring format with no waste. The two-line structure (purpose + param) front-loads the core action and immediately follows with the required parameter specification.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the output schema exists (per context signals), the description appropriately does not detail return structures. For a single-parameter tool, it covers the essential behavioral context (real-time data retrieval) and parameter semantics completely.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 0% schema description coverage, the description fully compensates by defining the parameter as '市場 ID' (Market ID) and providing concrete examples ('btcusdt', 'ethusdt'), giving the agent clear semantic meaning and expected format.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description states '取得單一市場即時行情' (get real-time quotes for a single market) with specific data points (latest price, bid/ask, 24h volume). The term '單一市場' (single market) clearly distinguishes this from the sibling tool 'get_tickers' (plural).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While '單一市場' implies usage for individual market queries vs bulk operations, there is no explicit guidance on when to use this versus 'get_tickers', 'get_depth', or 'get_markets'. No prerequisites or exclusions are stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the critical behavioral detail that the timestamp is in seconds (not milliseconds), which is essential for correct usage. However, it omits details about caching behavior or rate limiting.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It front-loads the action and precisely specifies the output format without extraneous words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a zero-parameter utility tool with an output schema present, the description is complete. It adequately describes what the tool returns (Unix timestamp), making it sufficient for agent selection and invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero input parameters, which per guidelines establishes a baseline of 4. The empty schema is fully described (100% coverage), and no parameter semantics are needed in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb (取得/get) and clearly identifies the resource (MAX server current time) and return format (Unix timestamp in seconds). It clearly distinguishes this utility function from the numerous trading-related siblings (submit_order, cancel_order, etc.).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the purpose is clear, the description provides no explicit guidance on when to use this tool versus alternatives, or why one might need server time (e.g., for request timestamp synchronization). Usage is implied but not stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
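The formula above can be sketched as a short calculation. This is an illustrative reconstruction, not Glama's actual code; only the dimension weights, the 60/40 blend, the 70/30 split, and the tier cutoffs come from this page, while every function and key name is made up for the sketch.

```python
# Per-tool dimension weights, as listed in the documentation above.
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(scores[dim] * weight for dim, weight in TDQS_WEIGHTS.items())

def server_definition_quality(per_tool_scores):
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality, coherence):
    """70% tool definition quality, 30% server coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """A (>=3.5), B (>=3.0), C (>=2.0), D (>=1.0), F otherwise."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose tools score 5.0 and 3.0 gets 0.6 × 4.0 + 0.4 × 3.0 = 3.6 on definition quality; blended with a coherence score of 3.5, the overall score is 3.57, tier A.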

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nicky512500/max-mcp'
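The same request can be issued from Python using only the standard library. The endpoint is the one shown in the curl command; the shape of the JSON response is not documented here, so inspecting its top-level keys is left as a sketch.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, name: str) -> str:
    """Build the directory URL for a server, e.g. nicky512500/max-mcp."""
    return f"{API_BASE}/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    """Fetch and decode the server's metadata (requires network access)."""
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)

# Example (network required):
#   info = fetch_server("nicky512500", "max-mcp")
#   print(sorted(info))  # inspect whatever top-level keys are returned
```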

If you have feedback or need assistance with the MCP directory API, please join our Discord server.