blockscout-mcp
Server Details
Blockscout - 56 tools for data, metrics, and on-chain analytics
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: junct-bot/blockscout-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
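Since the server's transport is Streamable HTTP, a tool invocation is a JSON-RPC 2.0 `tools/call` request per the MCP specification. A minimal sketch of building the request body; the example address value is hypothetical, and the endpoint URL and transport headers are deployment-specific:

```python
import json

def make_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request body (MCP convention)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: ask for basic info on a (hypothetical) zero address.
body = make_tools_call(
    "get_address",
    {"address_hash": "0x0000000000000000000000000000000000000000"},
)
```

The same envelope works for every tool in the catalog below; only `name` and `arguments` change.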
Available Tools
56 tools

celestia_service_get_blob (Grade: C)
GET /api/v1/celestia/blob Returns: { height: string, namespace: string, commitment: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| height | No | query parameter: height (string) | |
| skipData | No | query parameter: skipData (boolean) | |
| commitment | No | query parameter: commitment (string) | |
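As the review below notes, the description gives only endpoint mechanics. Those mechanics can at least be made concrete: a sketch of how the three optional query parameters map onto the GET URL. The base URL is hypothetical, and the idea that `skipData` drops the raw blob payload is an assumption the description does not confirm:

```python
from urllib.parse import urlencode

def build_blob_url(base_url, height=None, commitment=None, skip_data=None):
    """Construct the GET /api/v1/celestia/blob request URL.

    All three query parameters are optional per the tool's input schema;
    how the server combines them (e.g. whether height alone suffices)
    is not documented, so this only mirrors the schema.
    """
    params = {}
    if height is not None:
        params["height"] = height
    if commitment is not None:
        params["commitment"] = commitment
    if skip_data is not None:
        # Assumption: skipData=true omits the raw blob payload from the
        # response; the description does not say what the flag does.
        params["skipData"] = "true" if skip_data else "false"
    query = urlencode(params)
    return f"{base_url}/api/v1/celestia/blob" + (f"?{query}" if query else "")
```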
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only partially discloses the return structure. It fails to mention error conditions (e.g., blob not found), authentication requirements, rate limits, or the implications of the 'skipData' parameter on the response payload.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact (single sentence fragment) but poorly structured for an AI agent, mixing HTTP method notation, path, and return type syntax into a dense technical string without clear separation or prioritization of the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the specialized domain (Celestia data availability), zero annotations, and lack of output schema, the description should explain core concepts (blobs, commitments, namespaces). It provides only endpoint mechanics, leaving significant gaps in conceptual understanding needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with mechanical descriptions ('query parameter: height (string)'), so the baseline is 3. The description adds no additional semantic context about what 'commitment' or 'namespace' represent, or how 'skipData' affects the returned blob data.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description indicates retrieval via 'GET /api/v1/celestia/blob' and mentions the return structure, but it fails to explain what a 'blob' represents in the Celestia context or distinguish its use from the sibling 'celestia_service_get_l2_batch_metadata'. It relies on technical endpoint notation rather than natural language purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, prerequisites (such as knowing the commitment hash), or when to use the 'skipData' flag. The description only documents the HTTP endpoint and return type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
celestia_service_get_l2_batch_metadata (Grade: C)
[DISCOVERY] GET /api/v1/celestia/l2BatchMetadata Returns: { l2ChainId: number, l2BatchId: string, l2StartBlock: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| height | No | query parameter: height (string) | |
| namespace | No | query parameter: namespace (string) | |
| commitment | No | query parameter: commitment (string) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the HTTP method (GET, implying read-only) and partially documents return fields (l2ChainId, l2BatchId, l2StartBlock), which helps. However, it omits error handling, authentication requirements, rate limits, and the significance of the optional parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but poorly structured. The '[DISCOVERY]' prefix appears to be metadata noise. Cramming the return JSON into the description line reduces readability. It is front-loaded with the HTTP path, but the return value documentation would benefit from clearer separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The inline return structure partially compensates for the lack of a formal output schema. However, for a domain-specific tool (Celestia L2 batches), the description lacks conceptual context about what data availability batches are, leaving agents without blockchain expertise underspecified despite the simple 3-parameter input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('query parameter: height (string)', etc.), the baseline is 3. The description adds no parameter-specific context (e.g., what 'namespace' or 'commitment' refer to in Celestia's data availability layer), so it meets but does not exceed the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the resource (L2 batch metadata) and HTTP endpoint, but provides only mechanical API details without explaining what 'L2 batch metadata' means in the Celestia context or how it differs from the sibling tool 'celestia_service_get_blob'. It restates the tool name without semantic elaboration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, nor prerequisites for the query parameters (height, namespace, commitment). The description fails to explain what constitutes a valid query or when results might be empty.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_abstraction_status (Grade: B)
[DISCOVERY] get account abstraction indexing status Returns: { finished_past_indexing: boolean, v06: { enabled: boolean, live: boolean, past_db_logs_indexing_finished: boolean, past_rpc_logs_indexing_finished: boolean }, v07: { enabled: boolean, live: boolean, past_db_logs_indexing_finished: boolean, past_rpc_logs_indexing_finished: boolean } }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
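The return shape is fully spelled out in the description, so an agent-side summary is mechanical. A sketch, assuming (as the review below does) that v06/v07 refer to ERC-4337 EntryPoint versions and that "fully indexed" means all relevant flags are true; neither assumption is stated by the tool itself:

```python
def summarize_aa_status(status: dict) -> dict:
    """Reduce the documented return shape to per-version readiness booleans.

    Assumption: a version is 'ready' when it is enabled and both past-log
    indexing flags are set. The tool description defines the fields but
    not their semantics, so verify before relying on this interpretation.
    """
    def fully_indexed(v: dict) -> bool:
        return (v.get("enabled", False)
                and v.get("past_db_logs_indexing_finished", False)
                and v.get("past_rpc_logs_indexing_finished", False))

    return {
        "finished_past_indexing": status.get("finished_past_indexing", False),
        "v06_ready": fully_indexed(status.get("v06", {})),
        "v07_ready": fully_indexed(status.get("v07", {})),
    }
```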
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It documents the return structure (v06/v07 objects with boolean flags), which hints at read-only behavior, but lacks context on what these versions represent, potential errors, or operational costs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence structure is efficient, but the '[DISCOVERY]' prefix appears to be metadata noise. The embedded JSON return structure is verbose but necessary compensation for the lack of formal output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by documenting return fields (finished_past_indexing, v06/v07 statuses). However, lacks domain context explaining what account abstraction means or what the v06/v07 versions refer to (likely EIP-4337 versions).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline score per rules. No parameter description needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('get') and resource ('account abstraction indexing status'), distinguishing it from the sibling 'get_indexing_status' by specifying the account abstraction domain. Includes detailed return structure inline.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the sibling 'get_indexing_status', nor any prerequisites or conditions. The '[DISCOVERY]' prefix suggests a use case but is not explained.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address (Grade: C)
get address info Returns: { creator_address_hash: string, creation_transaction_hash: string, token: { circulating_market_cap: string, icon_url: string, name: string, decimals: string, symbol: string, address_hash: string, ... }, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It provides valuable return value structure (creator info, token object fields) which implies read-only behavior, but omits error handling, null value behavior, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description awkwardly concatenates the action phrase with return value documentation ('get address info Returns: {...}') without proper punctuation or structure. While information-dense, the layout could be clearer with separation of purpose and return specification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description partially compensates for the lack of output schema by documenting return fields. However, given the extensive sibling toolset, the failure to clarify this tool's scope relative to specialized alternatives leaves a significant gap in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with 'address_hash' described as 'Address hash'. The description adds no parameter-specific semantics (format, examples, validation), but the schema is self-sufficient, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'address info' but remains vague about what distinguishes it from sibling tools like get_address_counters or get_address_txs. The embedded return value hints at specific data (creator_address_hash, token details), but the primary purpose statement lacks specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this general 'get_address' tool versus the numerous specific address-related siblings (e.g., get_address_blocks_validated, get_address_token_balances) or the plural 'get_addresses'. Users cannot determine if this is the correct entry point.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_blocks_validated (Grade: C)
get blocks validated by address Returns: { items: { base_fee_per_gas: string, burnt_fees: string, burnt_fees_percentage: number, difficulty: string, extra_data: string, gas_limit: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates for the lack of output schema by inline-documenting the return structure (items array with base_fee_per_gas, burnt_fees, etc.), which is valuable. However, it omits behavioral details like pagination mechanics (despite next_page_params appearing in the return), read-only safety, or what 'validated' implies in this blockchain context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description awkwardly concatenates the purpose statement with a JSON return-type dump ('Returns: { ... }'). This makes it verbose and poorly structured; return values belong in an output schema field, not crammed into the description text where it harms readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the inline return schema partially compensates for missing structured output documentation, the description lacks critical context for a query tool among 50+ siblings: it does not specify whether results are paginated, sorted, or time-bounded, nor does it clarify the relationship to `get_blocks` or validator-specific queries.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (address_hash is described as 'Address hash'). The description does not add semantic meaning beyond the schema, but with complete schema coverage, this meets the baseline expectation without penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'blocks validated by address', which conveys the basic intent, but uses informal lowercase phrasing. It fails to distinguish this from sibling tools like `get_blocks` or `get_address`, leaving ambiguity about when to query for a specific validator versus general block retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like `get_blocks` or `get_address_txs`. Given the extensive list of address-related siblings (get_address_coin_balance_history, get_address_internal_txs, etc.), the absence of selection criteria forces the agent to guess based on naming alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_coin_balance_history (Grade: C)
get address coin balance history Returns: { items: { transaction_hash: string, block_number: number, block_timestamp: string, delta: string, value: string }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
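The review below asks what 'delta' represents. One testable hypothesis, shown here as an assumption rather than documented behavior: each item's 'value' is the balance after the change and 'delta' the signed change, in which case consecutive items should be self-consistent:

```python
def deltas_consistent(items: list) -> bool:
    """Check value[i] == value[i-1] + delta[i] across the returned items.

    Assumptions (not stated in the tool description): items are ordered
    oldest-to-newest, 'value' is the post-change balance in wei, and
    'delta' is the signed change. A False result falsifies one of these.
    """
    prev = None
    for item in items:
        value, delta = int(item["value"]), int(item["delta"])
        if prev is not None and value != prev + delta:
            return False
        prev = value
    return True
```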
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the response structure (array of balance change records with delta/value fields) which substitutes for missing output schema, but omits operational details: whether results are ordered (chronologically?), how next_page_params functions, what 'delta' represents (absolute change?), and whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured: it leads with redundant tool name rather than front-loading the value (the return structure). The return type documentation is valuable and earns its place, but the opening tautology wastes the first impression. Not verbose, yet inefficiently organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation, the description achieves minimum viability by documenting the return payload structure inline (compensating for lack of output schema). However, gaps remain regarding pagination mechanics and the critical distinction from the '_by_day' variant. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with 'address_hash' documented as 'Address hash'. With high schema coverage, baseline is 3. The description adds no additional parameter context (e.g., expected format like 0x-prefixed hex, case sensitivity), but the schema is self-sufficient for a single-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'address coin balance history' and documents the return structure (items with transaction_hash, delta, value, etc.), which clarifies the granularity. However, it begins with tautology (repeating the tool name) and fails to distinguish from sibling 'get_address_coin_balance_history_by_day', leaving ambiguity about which time granularity to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. Critically missing: differentiation from 'get_address_coin_balance_history_by_day' (likely daily aggregation) versus this tool's transaction-level history. No mention of prerequisites like address format validation or pagination handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_coin_balance_history_by_day (Grade: D)
get address coin balance history by day
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description discloses no behavioral traits. It omits: default time ranges, how far back history extends, pagination behavior, rate limiting, or whether this operation is computationally expensive. For a history retrieval tool, this lack of behavioral context is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief at only seven words, the description suffers from under-specification rather than efficient conciseness. The single sentence fails to earn its place by not conveying any information beyond the tool name itself.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a history-by-day tool with no output schema, the description is inadequate. It fails to specify return value structure, time range defaults, or aggregation methodology, leaving critical usage context undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for its single parameter ('address_hash'). The description adds no additional semantic information about the parameter format or validation rules, meeting the baseline expectation when the schema is already self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'get address coin balance history by day' is essentially a tautology of the tool name, converting camelCase to spaced words. While it indicates the resource and daily aggregation, it fails to distinguish from the sibling tool 'get_address_coin_balance_history' (without '_by_day'), leaving ambiguity about which granularity to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this daily aggregation versus the non-aggregated history endpoint, nor any mention of time range limits or prerequisites. The description offers no 'when-to-use' or 'when-not-to-use' signals.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_counters (Grade: C)
get address counters Returns: { transactions_count: string, token_transfers_count: string, gas_usage_count: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden of behavioral disclosure but only describes the return payload structure, omitting safety characteristics, authentication requirements, or rate limiting. While documenting the return shape compensates for the lack of an output schema, the description fails to disclose whether the operation is read-only, idempotent, or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and fits in a single line, but suffers from awkward formatting including a lowercase start and missing punctuation before 'Returns.' The initial tautology wastes space, though the return value documentation efficiently conveys output structure without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool, the description is minimally adequate because it documents the return value structure to compensate for the missing output schema. However, it lacks information about error cases, edge handling for invalid addresses, or data freshness that would be necessary for a complete specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single address_hash parameter, establishing a baseline score per the rubric. The description adds no additional semantic context about the parameter format, validation rules, or example values beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description lists specific counter fields returned (transactions_count, token_transfers_count, gas_usage_count), clarifying what data is retrieved beyond the tool name. While it begins with a tautological phrase 'get address counters,' the specific field enumeration helps distinguish this from sibling tools like get_address or get_address_txs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as get_address or get_address_txs. There are no stated prerequisites, constraints, or conditions that would help an agent select this tool correctly from the numerous address-related siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_addresses (Grade: B)
[DISCOVERY] get native coin holders list Returns: { exchange_rate: string, total_supply: string, items: { creator_address_hash: unknown, creation_transaction_hash: unknown, token: unknown, coin_balance: unknown, exchange_rate: unknown, implementation_address: unknown, ... }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It documents the return structure inline (including fields like exchange_rate, total_supply, and items array), which helps. However, it lacks critical operational details: pagination behavior, result limits, sorting order, or whether this is an expensive/slow query. The '...' truncation also leaves the return schema incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively short but awkwardly structured. The '[DISCOVERY]' prefix appears to be metadata leakage rather than helpful documentation. The return type is crammed into the same sentence as the description, making it dense and harder to parse than necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and structured output schema, the description partially compensates by sketching the return object structure. However, for a list endpoint, the absence of pagination details, rate limiting notes, or data freshness indicators leaves significant gaps in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which establishes a baseline of 4. The description correctly does not invent parameters, maintaining consistency with the empty schema. No additional parameter context was needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a 'native coin holders list', distinguishing it from sibling tools like `get_token_holders` (for ERC-20 tokens) and `get_address` (singular). However, it doesn't clarify the scope (e.g., top holders vs. all holders) or what constitutes the 'native coin' in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this versus alternatives like `get_token_holders` or `get_smart_contracts`. The description lacks prerequisites, filtering capabilities, or explicit comparisons to sibling tools that return address-related data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
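The gaps flagged above (undefined scope, no sibling disambiguation, no behavioral disclosure) can be made concrete. Below is a hypothetical rewrite of such a holders-list tool definition — the name, wording, and annotation values are illustrative suggestions, not the server's actual schema:

```python
# A hypothetical, improved tool definition applying the review's
# recommendations: front-loaded purpose, scope disclosure, explicit
# "use X instead of Y" guidance, and MCP behavior annotations.
improved_tool = {
    "name": "get_native_coin_holders",
    "description": (
        "List addresses ranked by native coin (e.g., ETH) balance, "
        "highest first. Read-only; results are paginated via "
        "next_page_params. For ERC-20/721/1155 token holders, use "
        "get_token_holders instead; for a single address, use get_address."
    ),
    "annotations": {
        "readOnlyHint": True,    # no on-chain or server-side mutation
        "idempotentHint": True,  # same call yields the same page of results
    },
}

def front_loaded(description: str) -> bool:
    """Check that the first sentence states the action before any schema dump."""
    first = description.split(".")[0]
    return "Returns:" not in first and len(first) < 120
```

The annotation keys follow the MCP tool-annotation convention; the description leads with purpose and defers return-shape detail, which is exactly the restructuring the sizing dimension asks for.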
get_address_internal_txs (grade: C)
get address internal transactions Returns: { items: { block_number: number, created_contract: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, error: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, gas_limit: string, index: number, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| filter | No | query parameter: filter (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only dumps a raw return structure without explaining pagination behavior (despite showing next_page_params), rate limits, or whether this is a read-only operation. It fails to explain domain-specific behavior of internal transactions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one run-on sentence containing a messy JSON blob of return types. It lacks paragraph breaks or formatting, making it difficult to parse. The return structure dump is excessive for a description field.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of blockchain domain (internal transactions vs regular transactions) and numerous sibling tools, the description is incomplete. It lacks domain context, pagination explanations, and differentiation from related address/transaction tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (address_hash and filter are both documented). The description adds no additional parameter context, meeting the baseline for high schema coverage scenarios.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'get address internal transactions' which restates the tool name (tautology). While it lists return fields, it fails to define what 'internal transactions' are or distinguish this tool from siblings like get_address_txs, get_internal_transactions, or get_transaction_internal_txs.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. The description does not explain the 'filter' parameter's purpose (to/from) in context, nor does it indicate prerequisites like address format or network constraints.
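Several answers above note that the description shows next_page_params without explaining pagination. The usual cursor pattern for Blockscout-style v2 endpoints — assumed here, since the description itself never states it — is to merge the returned next_page_params dict into the next request's query parameters. A minimal sketch, with an injected fake fetcher so no network access is needed:

```python
def fetch_all(fetch_page, params=None, max_pages=10):
    """Drain a cursor-paginated endpoint.

    `fetch_page(params)` returns {"items": [...], "next_page_params": dict|None}.
    The cursor dict is merged into the next call's query params (the pattern
    assumed here for Blockscout-style endpoints).
    """
    params = dict(params or {})
    items = []
    for _ in range(max_pages):
        page = fetch_page(params)
        items.extend(page["items"])
        cursor = page.get("next_page_params")
        if not cursor:
            break
        params.update(cursor)  # e.g. {"block_number": ..., "index": ...}
    return items

# A fake two-page endpoint standing in for get_address_internal_txs.
def fake_endpoint(params):
    if "index" not in params:
        return {"items": [{"index": 2}, {"index": 1}],
                "next_page_params": {"block_number": 100, "index": 1}}
    return {"items": [{"index": 0}], "next_page_params": None}
```

Two sentences in the tool description explaining this loop would close the gap the first dimension identifies.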
get_address_logs (grade: C)
get address logs Returns: { items: { address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, block_hash: string, block_number: number, data: string, decoded: { method_call: unknown, method_id: unknown, parameters: unknown }, index: number, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It includes a raw return structure dump showing pagination (next_page_params) and data shapes, but lacks explanation of behavioral traits like rate limiting, data retention, or whether this includes logs where the address is the emitter vs target.
Is the description appropriately sized, front-loaded, and free of redundancy?
While not verbose, the description is poorly structured as a raw JSON-like type dump. The 'Returns:' section is difficult to parse and front-loads technical schema details before establishing what the tool actually does.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of output schema information in the description (since no formal output_schema exists), it partially compensates by showing return structure. However, it fails to differentiate from numerous sibling tools or explain blockchain-specific log concepts necessary for correct invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the address_hash parameter already documented as 'Address hash'. The description adds no additional semantic meaning about the format (hex? checksum?) or what address to provide, which is acceptable given the schema coverage baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'get address logs' which tautologically restates the tool name. While it includes a technical dump of the return structure, it never explains what 'address logs' actually are (event logs? system logs?) or how they differ from transactions or internal transactions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like get_transaction_logs, get_address_txs, or get_address_internal_txs. Given the nuanced differences between these blockchain data types, this omission is critical.
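The review asks what "address logs" are: in EVM explorers they are event logs emitted by contracts, and the `decoded` field in the return dump is populated only when the explorer knows the contract ABI. A small client-side helper (using sample items shaped like the quoted return structure) makes the distinction usable:

```python
def split_decoded(log_items):
    """Partition event-log items into ABI-decoded vs raw-only entries.

    Mirrors the `decoded` field in the tool's return dump: when present it
    carries method_call/parameters; when null, only the raw `data` bytes
    are available and the caller must decode them with the ABI.
    """
    decoded, raw_only = [], []
    for item in log_items:
        (decoded if item.get("decoded") else raw_only).append(item)
    return decoded, raw_only

# Sample items shaped like the return dump above (values are placeholders).
sample_logs = [
    {"index": 0, "data": "0x...", "decoded": {"method_call": "Transfer(...)"}},
    {"index": 1, "data": "0x...", "decoded": None},
]
```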
get_address_nft (grade: C)
get list of NFT owned by address Returns: { items: { is_unique: boolean, id: string, holder_address_hash: string, image_url: string, animation_url: string, external_app_url: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and partially satisfies this by documenting the return structure including the `items` array and `next_page_params` pagination object. However, it lacks details on rate limits, authentication requirements, or explicit confirmation that this is a read-only operation, though 'get' implies a safe read.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single run-on fragment that awkwardly concatenates the purpose statement ('get list of NFT owned by address') with the return type documentation ('Returns: {...}'), making it difficult to parse. While brief, it suffers from poor structure and readability rather than efficient information density.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the missing output schema by detailing the return structure including fields like `is_unique`, `image_url`, and `next_page_params`, which is essential for an agent to understand the response. However, it lacks guidance on how to use `next_page_params` for subsequent requests and does not clarify relationships to similar address-based NFT tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with `address_hash` and `type` parameters already documented, including examples like 'ERC-721,ERC-404,ERC-1155' for the type parameter. Since the schema fully defines the parameters, the description does not need to add semantics, meeting the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a 'list of NFT owned by address' using the specific verb 'get' and identifying the resource as NFTs scoped to an address. However, it fails to differentiate from sibling tools like `get_address_nft_collections` or `get_nft_instance`, which could confuse an agent about which endpoint to use for collection-level versus individual NFT data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as `get_address_nft_collections` or `get_nft_instance`. It also omits instructions on how to handle pagination using the `next_page_params` field mentioned in the returns, leaving agents to infer the pagination pattern.
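The schema reportedly documents the `type` parameter with examples like 'ERC-721,ERC-404,ERC-1155'. A caller-side builder can enforce that format before spending a tool call; the accepted set below is inferred from that example plus ERC-20 from sibling tools, and should be treated as an assumption rather than server-confirmed spec:

```python
# Token-standard values assumed valid for the `type` query parameter.
KNOWN_TYPES = {"ERC-20", "ERC-721", "ERC-1155", "ERC-404"}

def nft_query(address_hash, types=None):
    """Build query params for an address-NFT lookup with optional type filter."""
    params = {"address_hash": address_hash}
    if types:
        unknown = set(types) - KNOWN_TYPES
        if unknown:
            raise ValueError(f"unsupported token type(s): {sorted(unknown)}")
        params["type"] = ",".join(types)  # API example shows a comma-separated list
    return params
```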
get_address_nft_collections (grade: B)
get list of NFT owned by address, grouped by collection Returns: { items: { token: { circulating_market_cap: unknown, icon_url: unknown, name: unknown, decimals: unknown, symbol: unknown, address_hash: unknown, ... }, amount: string, token_instances: unknown[] }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates by including a detailed 'Returns:' section describing the output structure including pagination hints (next_page_params), but lacks information on rate limits, authentication requirements, or error behaviors.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the action but suffers from poor punctuation (missing period before 'Returns:') and a dense, technical JSON dump for the return type. While informative, the structure is cramped and the return type annotation is cluttered with 'unknown' types.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter input schema and lack of formal output schema, the description adequately compensates by documenting the return structure. However, it omits pagination behavior details, error scenarios, and sibling tool differentiation that would make it complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the 2 parameters (address_hash and type), establishing a baseline of 3. The description adds no additional parameter semantics beyond what the schema provides, such as explaining the ERC standard types or address hash format.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a list of NFTs owned by an address, grouped by collection, using specific verbs and resources. However, it doesn't explicitly distinguish itself from the sibling tool 'get_address_nft' (which likely returns ungrouped results), preventing a perfect score.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_address_nft' or 'get_address_tokens'. It mentions 'grouped by collection' but doesn't explain the use case for grouping versus flat lists.
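The contrast the review asks for — grouped (this tool) versus flat (get_address_nft) — can be illustrated by reproducing the grouped shape from a flat instance list client-side. A sketch on sample data, assuming each flat item carries its `token` object as the return dumps suggest:

```python
from collections import defaultdict

def group_by_collection(flat_items):
    """Rebuild the grouped response shape from flat NFT instances (illustrative).

    One output entry per token contract, with an amount and the instance list,
    matching the items shape quoted in the description above.
    """
    buckets = defaultdict(list)
    for item in flat_items:
        buckets[item["token"]["address_hash"]].append(item)
    return [
        {"token": insts[0]["token"], "amount": str(len(insts)),
         "token_instances": insts}
        for insts in buckets.values()
    ]

# Hypothetical flat items: two NFTs from one collection, one from another.
flat = [
    {"id": "1", "token": {"address_hash": "0xaaa", "name": "CatPics"}},
    {"id": "2", "token": {"address_hash": "0xaaa", "name": "CatPics"}},
    {"id": "7", "token": {"address_hash": "0xbbb", "name": "DogPics"}},
]
```

Use the grouped endpoint when per-collection totals matter; the flat endpoint when iterating individual instances.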
get_address_token_balances (grade: C)
get all tokens balances for the address
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, yet it omits key behavioral details: whether balances include zero values, the response format and pagination, which token standards are covered, and error handling for invalid addresses.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief (7 words), the description is overly minimal rather than efficiently informative. It lacks front-loaded value about scope or behavior, presenting only a bare restatement of the tool's name.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single string parameter) and lack of output schema, the description is minimally adequate but should specify whether it covers ERC-20 tokens only, excludes NFTs (given sibling 'get_address_nft'), and whether native coin balances are included.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured documentation already explains the 'address_hash' parameter. The description adds no additional semantic value (e.g., address format expectations, chain specificity), warranting the baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action (get token balances) and scope (all tokens for an address), but fails to distinguish from similar sibling tools like 'get_address_tokens' (which may return token holdings without balances) or clarify what token types are included (ERC-20 vs NFTs).
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives like 'get_address_coin_balance_history' or 'get_address_tokens'. Does not mention prerequisites like address format or network context.
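Several dimensions across these cards flag that the expected address format (plain hex? checksummed?) is never stated. A minimal shape check for the 0x-prefixed 20-byte format that EVM explorers generally accept — an assumption about this server, not a documented contract — lets a client reject bad input before spending a tool call:

```python
import re

# 0x prefix plus 40 hex chars = 20 bytes; the common EVM address shape.
_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_evm_address(value: str) -> bool:
    """Cheap format check before calling an address_hash-taking tool."""
    return bool(_ADDRESS_RE.fullmatch(value))
```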
get_address_tokens (grade: C)
[DISCOVERY] token balances with filtering and pagination Returns: { items: { token_instance: { is_unique: unknown, id: unknown, holder_address_hash: unknown, image_url: unknown, animation_url: unknown, external_app_url: unknown, ... }, value: string, token_id: string, token: { name: unknown, decimals: unknown, symbol: unknown, address_hash: unknown, type: unknown, holders_count: unknown, ... } }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions pagination and filtering behaviors and includes inline return structure documentation, but lacks details on rate limits, auth requirements, or caching behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loads the core purpose but appends a raw, unformatted JSON blob of return fields that severely clutters the description. The '[DISCOVERY]' prefix appears to be metadata noise rather than helpful context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool, the description partially compensates for missing output schema by documenting return structure inline, though the JSON dump formatting is poor. Core functionality is covered but presentation hinders comprehension.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters (address_hash and type) documented. Description mentions 'filtering' which maps to the 'type' parameter accepting ERC standards, but adds minimal semantic detail beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
States it retrieves 'token balances with filtering and pagination' but fails to distinguish from sibling tool 'get_address_token_balances', creating potential confusion about which to use for specific token types.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'get_address_token_balances', 'get_address_nft', or 'get_address_nft_collections', leaving the agent to guess based on naming alone.
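The review calls the '[DISCOVERY]' prefix metadata noise and the inline Returns: dump clutter. A normalization pass of the kind the review implies — stripping bracketed prefixes and separating the return-shape tail from the purpose sentence — can be sketched directly:

```python
import re

def normalize_description(raw: str):
    """Split a cluttered tool description into purpose and return-shape notes.

    Strips bracketed prefixes like '[DISCOVERY]' (treated here as metadata
    leakage, per the review) and separates any trailing 'Returns: {...}' dump
    so the purpose sentence can stand alone and be front-loaded.
    """
    text = re.sub(r"^\[[A-Z]+\]\s*", "", raw.strip())
    purpose, _, returns = text.partition("Returns:")
    return {"purpose": purpose.strip(), "returns": returns.strip() or None}
```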
get_address_token_transfers (grade: C)
get address token transfers Returns: { items: { token_type: "ERC-20" | "ERC-721" | "ERC-1155" | "ERC-404", block_hash: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, log_index: number, method: string, timestamp: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| token | No | query parameter: token (string) | |
| filter | No | query parameter: filter (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It documents the return structure (items array with token types, next_page_params for pagination), which compensates for the lack of output schema. However, it omits details on data availability, rate limits, or behavior with high-volume addresses.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, concatenating the action statement with a large JSON blob representing return values without formatting. The return type documentation, while valuable, is verbose and obscures the primary purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by documenting return fields and pagination. However, for a 4-parameter blockchain query tool, it lacks usage context, filtering examples, and behavioral constraints needed for reliable invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description does not add semantic meaning beyond the schema (e.g., explaining that 'filter' controls to/from direction, or that 'token' is a contract address filter).
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'address token transfers' (tautological with the name) but clarifies scope via the return type structure showing support for ERC-20/721/1155/404 tokens. However, it fails to distinguish from siblings like get_token_transfers or get_transaction_token_transfers.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives (e.g., get_address_txs for native transfers, get_token_transfers for contract-wide history), nor prerequisites like address format requirements.
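The review notes the `filter` parameter's to/from semantics go unexplained. Under the assumed reading — 'to' keeps transfers received by the queried address, 'from' keeps transfers it sent — the classification looks like this on sample items shaped like the return dump (hashes shortened for illustration):

```python
def direction(transfer, address_hash):
    """Classify a transfer relative to an address.

    Sketches the assumed semantics of the `filter` query parameter:
    'to' = incoming transfers, 'from' = outgoing transfers.
    """
    addr = address_hash.lower()
    if transfer["to"]["hash"].lower() == addr:
        return "to"
    if transfer["from"]["hash"].lower() == addr:
        return "from"
    return "other"

transfers = [
    {"from": {"hash": "0xAAA"}, "to": {"hash": "0xBBB"}},
    {"from": {"hash": "0xBBB"}, "to": {"hash": "0xCCC"}},
]
```

One clause in the description stating this mapping would spare agents the guesswork.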
get_address_txs (grade: C)
get address transactions Returns: { items: { timestamp: string, fee: { type: unknown, value: unknown }, gas_limit: number, block_number: number, status: string, method: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| filter | No | query parameter: filter (string) | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, the description carries the full burden. It discloses the return structure including pagination (next_page_params) and transaction item schema, but lacks information on rate limits, authentication requirements, or idempotency.
Is the description appropriately sized, front-loaded, and free of redundancy?
While compact in length, the description is a single dense blob with raw JSON embedded in the text, making it difficult to parse. The key return value information is not front-loaded effectively and requires reading through brace notation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description compensates for the lack of a structured output schema by including the return structure inline. However, given the large number of sibling tools with similar names, the description should do more to clarify scope and filtering capabilities.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add any additional parameter context (e.g., explaining the 'to | from' filter syntax or address_hash format), but the schema is self-sufficient.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with a tautology ('get address transactions' restates the tool name), but the embedded JSON return structure clarifies that this returns external transactions with fields like gas_limit, block_number, and fee, distinguishing it from siblings like get_address_internal_txs or get_address_token_transfers.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the many sibling alternatives (get_address_internal_txs, get_address_token_transfers, get_txs, etc.) or what the 'filter' parameter values 'to' and 'from' imply for usage patterns.
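The "use X instead of Y when Z" guidance these cards keep asking for amounts to a small routing table over the sibling address tools. The intent-to-tool mapping below is the reviewer-suggested disambiguation, not guidance published by the server itself:

```python
# Suggested disambiguation among the address tools reviewed above.
ADDRESS_TOOL_ROUTES = {
    "external transactions sent or received": "get_address_txs",
    "contract-internal (trace) calls": "get_address_internal_txs",
    "ERC-20/721/1155 transfer history": "get_address_token_transfers",
    "emitted event logs": "get_address_logs",
    "current token balances": "get_address_token_balances",
}

def pick_tool(intent: str) -> str:
    """Match a query intent phrase against the routing table."""
    for description, tool in ADDRESS_TOOL_ROUTES.items():
        if intent in description:
            return tool
    raise LookupError(f"no address tool matches intent: {intent!r}")
```

Embedding a sentence per row of this table into each tool's description would resolve the purpose-clarity findings above.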
get_address_withdrawals (grade: C)
get address withdrawals Returns: { items: { index: number, amount: string, validator_index: number, receiver: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, block_number: number, timestamp: string }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It includes the complete return schema structure (items array with validator_index, receiver objects, pagination via next_page_params), which reveals this handles validator withdrawal data. However, it lacks operational details like rate limits, auth requirements, or pagination behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single run-on sentence mixing purpose and return schema. The raw JSON-like dump of return fields is hard to parse and not front-loaded; critical usage context is buried in unstructured schema notation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain tool with complex return types (nested receiver objects, pagination), the description provides the raw return structure but lacks domain context. No explanation of what constitutes a withdrawal, how to use next_page_params, or relationship to the validator lifecycle.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Address hash'), establishing a baseline of 3. The description adds no additional semantic context about the address format (e.g., 0x-prefixed Ethereum address) or what address this refers to.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the tool name ('get address withdrawals') without clarifying what 'withdrawals' refers to (Ethereum validator withdrawals). It fails to distinguish from siblings like 'get_withdrawals' or 'get_block_withdrawals', leaving the agent to infer scope from the return schema details.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sibling 'get_withdrawals' or 'get_block_withdrawals'. No prerequisites or conditions for use are mentioned.
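The return dump types `amount` as a string, and the description never states the unit (consensus-layer withdrawal amounts are denominated in Gwei, while explorer APIs often report wei). A sketch that sums the string amounts exactly with integer arithmetic and takes the unit as an explicit parameter, since it is an assumption here:

```python
WEI_PER_ETH = 10**18

def total_withdrawn_eth(items, wei_per_unit=1):
    """Sum withdrawal `amount` strings without float precision loss.

    `wei_per_unit` is 1 if amounts are wei, 10**9 if they are Gwei; the
    tool description does not say which, so the caller must decide.
    """
    total_wei = sum(int(item["amount"]) * wei_per_unit for item in items)
    return total_wei / WEI_PER_ETH

# Sample items with wei-denominated amounts (1.5 ETH and 0.5 ETH).
sample = [{"amount": "1500000000000000000"}, {"amount": "500000000000000000"}]
```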
get_block (grade: C)
get block info Returns: { base_fee_per_gas: string, burnt_fees: string, burnt_fees_percentage: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| block_number_or_hash | Yes | Block number or hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It compensates for the lack of output schema by documenting the return structure with specific fields, which is valuable. However, it omits error handling behavior, rate limits, or what happens when a block is not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single compact line combining purpose and return value. While brief, the structure is slightly awkward with 'Returns:' crammed inline, and 'get' is unnecessarily lowercase. Generally efficient but could be better structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter lookup tool, the description is minimally viable. The inline return structure partially compensates for the lack of output schema, though error cases and sibling distinctions would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('Block number or hash'). The description does not add additional parameter semantics (format details, examples), so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'block info' and lists specific return fields (base_fee_per_gas, burnt_fees, etc.), which adds specificity. However, 'info' remains vague and it fails to distinguish from sibling tool 'get_blocks' (plural list vs. singular retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_blocks' or 'get_block_txs'. No prerequisites or filtering guidance mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
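The critiques above (lowercase purpose, vague "info", no sibling distinction, undocumented error behavior) can be made concrete with a hypothetical rewrite of the tool definition. The wording and the not-found behavior shown here are illustrative assumptions, not verified server behavior:

```python
# Hypothetical improved definition for get_block, addressing the review's
# points: capitalized purpose, explicit sibling distinction, parameter
# format guidance, and an (assumed) error case. Keys follow the MCP tool shape.
improved_get_block = {
    "name": "get_block",
    "description": (
        "Get metadata for a single block (base_fee_per_gas, burnt_fees, "
        "gas_used, timestamp, ...). Read-only. Use get_blocks to list many "
        "blocks, or get_block_txs for a block's transactions. Errors if the "
        "block is not found."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "block_number_or_hash": {
                "type": "string",
                "description": "Decimal block number (e.g. '19000000') "
                               "or 0x-prefixed block hash",
            }
        },
        "required": ["block_number_or_hash"],
    },
}
```

A definition in this shape answers the sibling-selection and parameter-format questions the reviewer raises without growing the token cost much.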
get_blocks (grade C)
[DISCOVERY] get blocks Returns: { items: { base_fee_per_gas: string, burnt_fees: string, burnt_fees_percentage: number, difficulty: string, extra_data: string, gas_limit: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description discloses the return structure including pagination fields (next_page_params) and sample data fields, which adds necessary behavioral context. However, it omits safety profile, rate limits, and whether this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The '[DISCOVERY]' prefix is metadata noise. While the return value documentation is informative, the single-sentence structure is cramped and the truncation ('...') reduces usefulness. Not efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so documenting the return structure in the description is necessary and partially accomplished. However, it fails to explain the semantic difference between 'uncle' and 'block' types and truncates the fields list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with examples showing 'block | uncle | reorg' values. The description focuses entirely on return values and adds no explanatory context for what these type values mean or when to use them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'get blocks' but is vague about operational scope (listing vs filtering). The return structure implies a list operation, but the description fails to explicitly distinguish this from the sibling 'get_block' (singular) tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like 'get_block' or 'get_block_txs'. Omits instructions on pagination using 'next_page_params' or the significance of the 'type' parameter values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
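The review notes that pagination via next_page_params is never explained. The return shape implies the usual cursor pattern: feed next_page_params back as the parameters of the next call until it comes back empty. A minimal sketch, where `call_tool` is a placeholder for whatever MCP client invocation is in use:

```python
# Sketch of the pagination loop implied by next_page_params. The tool name
# and the cursor semantics are assumptions drawn from the return structure
# shown in the listing, not verified server behavior.
def fetch_all_blocks(call_tool, max_pages=10):
    items, params = [], {}
    for _ in range(max_pages):
        page = call_tool("get_blocks", params)
        items.extend(page["items"])
        params = page.get("next_page_params")  # cursor for the next call
        if not params:
            break  # no further pages
    return items

# Mocked two-page response to demonstrate the control flow:
pages = iter([
    {"items": [{"height": 2}], "next_page_params": {"block_number": 2}},
    {"items": [{"height": 1}], "next_page_params": None},
])
result = fetch_all_blocks(lambda name, params: next(pages))
```

Documenting even this two-line loop in the tool description would close the gap the reviewer identifies.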
get_block_txs (grade B)
get block transactions Returns: { items: { timestamp: string, fee: { type: unknown, value: unknown }, gas_limit: number, block_number: number, status: string, method: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| block_number_or_hash | Yes | Block number or hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It includes a 'Returns:' section documenting the output structure including pagination (next_page_params) and item fields, which adds value. However, it lacks disclosure on safety (read-only status), idempotency, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but awkwardly structured with the return value documentation appended inline after the purpose statement. While informative, the JSON-like return structure is dense and could be better formatted, though all content is relevant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates by documenting the return structure including the items array schema and pagination parameters. For a simple single-parameter retrieval tool, this provides adequate completeness despite minimal behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single parameter 'block_number_or_hash', the baseline is 3. The description text adds no additional semantic detail, examples, or format guidance beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'get block transactions' which clearly identifies the verb (get) and resource (block transactions). It implicitly distinguishes from siblings like get_tx (single transaction) and get_block (block metadata) by specifying the plural 'transactions' tied to a block, though it could explicitly clarify this distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives like get_txs (which likely returns recent transactions across all blocks) or get_address_txs. No mention of prerequisites or filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block_withdrawals (grade C)
get block withdrawals Returns: { items: { index: number, amount: string, validator_index: number, receiver: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, block_number: number, timestamp: string }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| block_number_or_hash | Yes | Block number or hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It includes the return structure with specific fields (items array with receiver objects, next_page_params), which helps understand the response shape, but lacks information on pagination behavior, error conditions, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured as a single run-on sentence with inline JSON blob formatting, making it difficult to parse. While the return value information is valuable, the presentation is dense and not front-loaded effectively; the JSON structure should ideally be separated or formatted for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and no structured output schema, the description compensates somewhat by detailing the return structure. However, it lacks domain context (e.g., explaining these are Ethereum validator stake withdrawals) and omits behavioral details like pagination limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for block_number_or_hash. The description adds no additional semantic context about the parameter format (e.g., hex vs decimal for hashes) beyond what the schema already states, warranting the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'get block withdrawals' which essentially restates the tool name in lowercase, but partially redeems itself by including the return structure showing specific fields like validator_index and amount. However, it fails to distinguish from siblings like get_withdrawals or get_address_withdrawals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the sibling withdrawal tools (get_withdrawals, get_address_withdrawals) or prerequisites like block existence. The description jumps immediately to return values without contextual usage hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
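The items array documented above (index, amount, validator_index, receiver, ...) is enough for straightforward client-side aggregation even without the missing domain context. A sketch that sums withdrawal amounts per validator; since the unit of the string `amount` field is undocumented (the review flags this), it aggregates raw integers only:

```python
# Sum withdrawal amounts per validator from the items array shown in the
# return structure. The sample data below is fabricated for illustration.
from collections import defaultdict

def total_by_validator(items):
    totals = defaultdict(int)
    for w in items:
        totals[w["validator_index"]] += int(w["amount"])
    return dict(totals)

sample = [
    {"validator_index": 7, "amount": "100"},
    {"validator_index": 7, "amount": "50"},
    {"validator_index": 9, "amount": "25"},
]
```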
get_indexing_status (grade B)
[DISCOVERY] get indexing status Returns: { finished_indexing: boolean, finished_indexing_blocks: boolean, indexed_blocks_ratio: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates effectively by disclosing the return structure (booleans for completion status and a ratio string), which reveals this is a read-only status check without side effects, though it omits caching behavior or polling frequency guidance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently single-sentence, but the '[DISCOVERY]' prefix appears to be internal metadata leakage that adds noise. The return type documentation is appropriately detailed for the space used.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no params) and lack of structured output schema, documenting the return fields in the description provides essential completeness. However, it lacks error state documentation or guidance on interpreting the indexed_blocks_ratio string format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the rubric, 0-parameter tools receive a baseline score of 4, as there are no semantics to clarify beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The phrase 'get indexing status' largely restates the tool name, which is tautological. However, the return value documentation (finished_indexing, indexed_blocks_ratio) provides sufficient context to infer this checks blockchain synchronization progress, distinguishing it from data-retrieval siblings like get_block.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to invoke this versus alternatives (e.g., health_check) or prerequisites. The description implies it's for checking sync state but doesn't state when an agent should poll this versus proceeding with data queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
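The reviewer's suggested use — poll sync state before issuing data queries — can be sketched directly. `get_status` is a stand-in for a call to the get_indexing_status tool; the field names come from the return structure above, but the polling policy is an assumption:

```python
# Poll-before-query workflow: wait until finished_indexing is true before
# trusting data queries. Interval and attempt budget are illustrative.
import time

def wait_for_indexing(get_status, poll_interval=0.0, max_attempts=5):
    for _ in range(max_attempts):
        status = get_status()
        if status.get("finished_indexing"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("indexer did not finish within the attempt budget")

# Mocked sequence: one in-progress response, then a finished one.
responses = iter([
    {"finished_indexing": False, "indexed_blocks_ratio": "0.95"},
    {"finished_indexing": True, "indexed_blocks_ratio": "1.00"},
])
final = wait_for_indexing(lambda: next(responses))
```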
get_internal_transactions (grade C)
[DISCOVERY] get internal transactions Returns: { items: { block_number: number, created_contract: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, error: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, gas_limit: string, index: number, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only supplies a partial return schema. It omits critical behavioral details such as pagination mechanics (despite showing next_page_params in the return), result set limits, temporal scope (recent vs historical), and any rate limiting or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured as a single dense blob starting with the noise tag '[DISCOVERY]' followed by a tautology and a truncated JSON return dump. The return schema information, while valuable, is not presented cleanly and the sentence does not earn its place as a coherent description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain data tool with implied pagination and numerous sibling filtering tools, the description is incomplete. It fails to explain the unbounded nature of the query, how to handle pagination with next_page_params, or how this global fetch relates to the more specific address/transaction variants.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rules. The description does not explicitly address the absence of parameters or clarify whether this implies a fixed query scope, but no parameters exist to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'get internal transactions' restates the function name as a tautology. While the return structure hints at the data returned, it fails to distinguish this tool from siblings like 'get_address_internal_txs' or 'get_transaction_internal_txs', leaving the specific scope (global vs filtered) ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided regarding when to use this tool versus the address-specific or transaction-specific alternatives. The description lacks any mention of prerequisites, filtering capabilities, or recommended use cases for fetching unbounded internal transaction data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_main_page_tokens (grade C)
[DISCOVERY] get main page blocks — from Blockscout
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden but only minimally succeeds. It identifies Blockscout as the data source (implying external API dependency), and the '[DISCOVERY]' prefix hints at read-only exploration, but it fails to disclose return format, pagination, caching behavior, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (9 words), but the confusion between 'blocks' and 'tokens' wastes interpretive effort. The '[DISCOVERY]' prefix categorizes but doesn't clarify functionality. Appropriately sized but content quality reduces structural effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema provided and no annotations, the description should explain what data is returned (e.g., trending tokens, main page token list). It fails to do so, leaving the agent unaware of the return structure or content semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters (empty input schema), establishing baseline score 4. There are no parameters requiring semantic clarification beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description contains a critical mismatch: it states 'get main page blocks' while the tool name is 'get_main_page_tokens' (and sibling 'get_blocks' exists). This confuses 'blocks' (as in blockchain blocks) with 'tokens'. It mentions 'from Blockscout' which provides some resource context, but the core verb+resource description is misleading.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus siblings like 'get_tokens_list' or 'get_main_page_txs'. No prerequisites, filtering capabilities, or use-case scenarios are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_main_page_txs (grade C)
[DISCOVERY] get main page transactions
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether results are paginated, cached, or time-bound (e.g., recent transactions), and omits safety/permission details beyond the implied read-only nature of 'get.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, but the '[DISCOVERY]' prefix appears to be metadata or categorization noise that does not add semantic value for tool selection. The remaining text is efficient but overly terse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, and the existence of many similar transaction-listing siblings, the description should explain what 'main page' signifies (e.g., recent transactions for a dashboard). Its failure to do so leaves a significant gap in contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the evaluation rules, this establishes a baseline score of 4, as there are no parameter semantics to clarify beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'main page transactions,' which identifies the resource but uses ambiguous jargon ('main page') without defining what constitutes a main page or how these transactions differ from those returned by siblings like get_txs, get_block_txs, or get_address_txs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'main page' implies a dashboard/overview context, the description provides no explicit guidance on when to use this tool versus the numerous sibling transaction-fetching tools (e.g., get_txs for general queries, get_block_txs for block-specific data). No alternatives or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_chart (grade C)
get market chart Returns: { available_supply: string, chart_data: { date: string, closing_price: string, market_cap: string }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral disclosure burden. It documents the return structure (available_supply, chart_data array contents) which compensates partially for the missing output schema, but omits critical behavioral traits like caching, rate limits, data freshness, or whether this requires authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Poorly structured with redundant opening ('get market chart' repeats the tool name) and awkward inline JSON notation for return values ('Returns: { ... }'). While brief, the formatting wastes space and reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero input parameters and no formal output schema, the description minimally compensates by describing the return payload structure. However, it lacks crucial context about which market's data is returned and fails to explain the time range or granularity of the chart_data (daily? hourly?).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, warranting a baseline score of 4. The description does not need to compensate for parameter documentation since there are none to describe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the tool name ('get market chart') which is tautological. While it lists return fields (closing_price, market_cap), it fails to specify which market (global crypto market? specific token?) and does not distinguish from sibling tool 'get_txs_chart', leaving the scope ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives like 'get_txs_chart' or 'get_main_page_tokens'. No prerequisites, filtering options, or contextual recommendations are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
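Even with the scope ambiguity the review notes, the documented return shape is concrete enough to consume. A sketch that pulls the most recent closing price out of chart_data; the ISO date strings and the sample values are assumptions for illustration:

```python
# Extract the latest closing price from the chart_data array documented in
# the return structure. Sorting by the date string assumes ISO-8601 dates,
# which sort lexicographically.
def latest_close(chart):
    points = sorted(chart["chart_data"], key=lambda p: p["date"])
    return float(points[-1]["closing_price"])

sample = {
    "available_supply": "120000000",
    "chart_data": [
        {"date": "2024-01-01", "closing_price": "2300.5", "market_cap": "2.7e11"},
        {"date": "2024-01-02", "closing_price": "2350.0", "market_cap": "2.8e11"},
    ],
}
```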
get_nft_instance (grade A)
get NFT instance by id Returns: { is_unique: boolean, id: string, holder_address_hash: string, ... }.
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | integer id | |
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially discloses the response structure (is_unique, id, holder_address_hash) and the workflow prerequisite, but does not clarify rate limits, error conditions, or explicitly confirm this is read-only (though implied by 'get').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately compact with two distinct informational units (purpose/returns and prerequisites). Minor formatting issues exist (lowercase start, Returns clause run-on), but there is no extraneous content and the prerequisite warning is appropriately emphasized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter retrieval tool without annotations, the description adequately covers the critical workflow constraint (prerequisite call requirement) and partially documents the return structure. It appropriately offloads parameter details to the well-documented schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both id and address_hash have descriptions), establishing a baseline of 3. The description mentions 'by id' but does not add semantic meaning beyond the schema for the address_hash parameter or explain the relationship between the two required parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves an NFT instance by ID and documents the return fields (is_unique, holder_address_hash). While 'get NFT instance by id' somewhat restates the tool name, the addition of return structure and the singular 'id' distinguishes it from the plural sibling get_nft_instances.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section provides explicit workflow guidance: you MUST first call listing/map endpoints to resolve identifiers before using this data endpoint. This clearly establishes when to use this tool versus alternative listing endpoints and defines the required call sequence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nft_instances (C)
get NFT instances Returns: { items: { is_unique: boolean, id: string, holder_address_hash: string, image_url: string, animation_url: string, external_app_url: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
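The paginated return shape documented above (`items` plus `next_page_params`) implies a cursor-following loop on the agent side. A minimal sketch, assuming a hypothetical `fetch_page` wrapper around the tool call; the wrapper name and the convention that a null or absent `next_page_params` ends the listing are assumptions, not guarantees from the definition:

```python
from typing import Callable, Iterator, Optional


def iterate_nft_instances(
    fetch_page: Callable[[Optional[dict]], dict],
) -> Iterator[dict]:
    """Yield every item across pages by following next_page_params.

    `fetch_page` is a hypothetical caller-supplied function that invokes
    the get_nft_instances tool with the given page params (None for the
    first page) and returns the decoded response.
    """
    page_params = None
    while True:
        response = fetch_page(page_params)
        yield from response.get("items", [])
        page_params = response.get("next_page_params")
        if not page_params:  # assume a falsy cursor marks the last page
            return
```

With a real client, `fetch_page` would merge the page params into the tool-call arguments; here the pagination contract itself is the point.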
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full behavioral disclosure burden. It reveals pagination behavior via next_page_params and item structure, but fails to indicate the read-only/safe nature of the operation, rate limits, or error conditions. The safety profile must be inferred from the word 'get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact but poorly structured, leading with a tautology ('get NFT instances') before dumping a JSON-like return specification. The return structure documentation earns its place, but the opening phrase wastes space without adding meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter query tool with no output schema, the description partially compensates by documenting return fields and pagination. However, it lacks critical contextual information: the tool's actual purpose, the nature of the address parameter, and differentiation from similar tools in the extensive sibling list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single address_hash parameter, the baseline is 3. The description does not add semantic context (e.g., clarifying this is a contract address vs. holder address), but the schema adequately documents the input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description tautologically restates the tool name ('get NFT instances') and fails to distinguish from siblings like get_nft_instance (singular) or get_address_nft. While it documents return fields, it never explains what an 'NFT instance' represents (e.g., a token within an ERC-721 collection) or what the query achieves.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the singular get_nft_instance or address-specific NFT tools. No mention of prerequisites such as whether the address_hash should be a contract address or holder address.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nft_instance_transfers (A)
get transfers of NFT instance Returns: { items: { token_type: "ERC-20" | "ERC-721" | "ERC-1155" | "ERC-404", block_hash: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, log_index: number, method: string, timestamp: string, ... }[], next_page_params: object }.
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | integer id | |
| address_hash | Yes | Address hash | |
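The PREREQUISITE clause implies a two-step sequence: resolve an instance `id` from a listing endpoint, then call this data endpoint. A sketch of that sequencing, where both callables are hypothetical wrappers around the get_nft_instances and get_nft_instance_transfers tools (names illustrative, error handling elided):

```python
from typing import Callable


def transfers_for_first_instance(
    list_instances: Callable[[str], dict],
    get_transfers: Callable[[str, str], dict],
    address_hash: str,
) -> list:
    """Follow the documented PREREQUISITE: resolve an id via the listing
    tool before calling the transfers data tool."""
    instances = list_instances(address_hash).get("items", [])
    if not instances:
        return []  # nothing to resolve; skip the data call entirely
    instance_id = instances[0]["id"]  # id taken from the listing response
    return get_transfers(address_hash, instance_id).get("items", [])
```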
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It documents the return structure (items array with token types, block hashes, etc.) which compensates for the lack of output schema, but it does not disclose safety properties (read-only status), rate limits, or pagination behavior beyond mentioning 'next_page_params' in the return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the action but suffers from poor formatting: the 'Returns:' clause creates a run-on sentence with dense JSON-like syntax that harms readability. The PREREQUISITE section is appropriately separated and emphasized. Could be improved by separating return documentation or using cleaner formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with full schema coverage, the description is reasonably complete. It compensates well for the missing output schema by inline documentation of return fields, and provides critical prerequisite context that would otherwise block successful usage. Missing only minor behavioral details like rate limiting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('integer id', 'Address hash'), establishing a baseline of 3. The description adds marginal context by mentioning 'asset identifiers (id, slug, symbol)' in the prerequisite, but does not elaborate on parameter formats or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'get transfers of NFT instance' with a specific verb and resource. However, it lacks explicit differentiation from sibling tool 'get_nft_instance_transfers_count' (which returns aggregate counts vs. detailed records), though this is somewhat implied by the plural 'transfers'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The PREREQUISITE section provides explicit guidance that you MUST first call a listing/map endpoint to resolve identifiers before using this data endpoint. This clearly defines when to use the tool. However, it does not mention when to prefer the '_count' sibling tool for aggregate data only.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nft_instance_transfers_count (A)
get transfers count of NFT instance Returns: { transfers_count: number }.
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | integer id | |
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses the return format ('Returns: { transfers_count: number }') which substitutes for the missing output_schema. However, it lacks disclosure of read-only nature, error handling (e.g., non-existent instance), idempotency, or rate limiting characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient two-section structure: purpose/return value front-loaded, followed by clearly demarcated PREREQUISITE. No redundant text; every sentence serves identification or operational guidance. Appropriate length for a two-parameter query tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple count retrieval with two parameters and no output schema, the description is nearly complete. It compensates for missing output_schema with explicit return type documentation and includes critical prerequisite warnings. Minor gap: does not specify what timeframe the count covers (all-time vs. recent) or error response behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% ('integer id', 'Address hash'), establishing baseline 3. The description adds minimal param semantics beyond the schema, though the prerequisite section implies these parameters require prior resolution via listing endpoints, adding indirect context about identifier sourcing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('get') + resource ('transfers count') + scope ('of NFT instance'). The term 'count' clearly distinguishes this from sibling tool 'get_nft_instance_transfers' (which likely returns transfer records), establishing precise intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit PREREQUISITE section mandates calling 'a listing/map endpoint' first to resolve identifiers, providing clear sequencing guidance. It explicitly states 'You MUST' and references prerequisite actions, creating unambiguous context for when to invoke this tool versus alternative discovery endpoints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_smart_contract (C)
get smart contract Returns: { verified_twin_address_hash: string, is_blueprint: boolean, is_verified: boolean, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
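Since the definition documents only three return fields (verified_twin_address_hash, is_blueprint, is_verified), an agent-side consumer can at least distill those into a verification summary. A hedged sketch — the field semantics are inferred from the names, not confirmed by the description:

```python
def summarize_contract(info: dict) -> str:
    """Condense the documented fields into a one-line verification status.

    Only the three fields shown in the description are used; a real
    response presumably carries many more.
    """
    if info.get("is_verified"):
        status = "verified"
    elif info.get("verified_twin_address_hash"):
        # assumed meaning: an identical, verified contract at another address
        status = "unverified (verified twin: " + info["verified_twin_address_hash"] + ")"
    else:
        status = "unverified"
    if info.get("is_blueprint"):
        status += ", blueprint"
    return status
```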
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially compensates by disclosing the return structure (fields like verified_twin_address_hash and is_blueprint), indicating what data the agent will receive. However, it omits operational details such as whether the operation is read-only, idempotent, or what occurs when the address_hash is invalid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the essential operation and return type. However, the formatting is awkward: 'get smart contract Returns:' reads as a malformed sentence of concatenated metadata, which slightly undermines clarity despite the brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool, the description is minimally adequate. The inline documentation of return fields partially compensates for the lack of a formal output schema, though it could benefit from explanations of what 'is_blueprint' or 'verified_twin_address_hash' represent in this blockchain context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'address_hash' parameter ('Address hash'). The description adds no additional parameter context (e.g., expected format like 0x-prefixed hex, length constraints), warranting the baseline score for complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'get smart contract' which tautologically restates the tool name. While it lists return fields (verified_twin_address_hash, is_blueprint, is_verified), it fails to distinguish this singular lookup from the sibling tool 'get_smart_contracts' (plural) or clarify that this retrieves detailed metadata for a specific contract address.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_smart_contracts' or 'get_address'. No prerequisites are mentioned (e.g., whether the address must be verified or exist on the chain), and no error conditions are described.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_smart_contracts (B)
[DISCOVERY] get verified smart contracts Returns: { items: { address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, coin_balance: string, compiler_version: string, language: string, has_constructor_args: boolean, optimization_enabled: boolean, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | query parameter: q (string) | |
| filter | No | query parameter: filter (string) | |
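Both parameters are optional strings with no documented semantics, so a conservative caller would send only the ones actually set. A trivial sketch (the meaning of `q` as a search term and `filter` as a language filter is an assumption; the schema documents only their types):

```python
from typing import Optional


def smart_contract_query(q: Optional[str] = None, filter: Optional[str] = None) -> dict:
    """Build tool-call arguments, omitting unset optional query params.

    Parameter names mirror the schema ('filter' deliberately shadows the
    builtin to match it); their semantics are assumptions.
    """
    args = {}
    if q is not None:
        args["q"] = q
    if filter is not None:
        args["filter"] = filter
    return args
```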
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It compensates by detailing the return structure (pagination via next_page_params, fields like compiler_version, language, coin_balance), but omits operational traits like rate limits, caching behavior, or safety guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded but poorly structured as a single dense string mixing metadata tags, purpose, and inline JSON return structure. The return type documentation is helpful in substance, but its inline dump form harms readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for the missing output schema by including the full return structure with nested object details. Given the simple 2-parameter input schema, this provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with basic type info ('query parameter: q (string)'). The description adds no parameter semantics beyond the schema, meeting the baseline for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it retrieves 'verified smart contracts' with a specific [DISCOVERY] categorization, distinguishing it from the singular 'get_smart_contract' sibling. However, the '[DISCOVERY]' prefix is structural noise and it doesn't explicitly clarify list vs. search semantics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this plural listing tool versus the singular 'get_smart_contract', 'search', or 'get_smart_contracts_counters' siblings. No prerequisites, filters usage explanation, or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_smart_contracts_counters (B)
[DISCOVERY] get verified smart contracts counters Returns: { new_smart_contracts_24h: string, new_verified_smart_contracts_24h: string, smart_contracts: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return structure informally (listing fields like 'new_smart_contracts_24h'), which helps, but lacks operational details such as caching behavior, data freshness, or whether this is a lightweight vs. expensive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but includes the '[DISCOVERY]' prefix which appears to be internal metadata rather than user-facing documentation, cluttering the opening. The return value description is embedded in the same sentence rather than structured separately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters) and lack of output schema, the description adequately compensates by listing the return fields. However, the informal 'Returns: {...}' format is less helpful than a structured description, and additional context about data sources or update frequency is absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4. The description correctly implies no filtering is needed by omitting parameter discussion, consistent with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'verified smart contracts counters' with a specific verb and resource type. It distinguishes itself from siblings like 'get_smart_contract' and 'get_smart_contracts' by specifying 'counters'. However, the '[DISCOVERY]' prefix appears to be metadata noise that slightly obscures the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_stats' or 'get_smart_contracts'. There is no mention of prerequisites, rate limits, or specific scenarios where counter statistics are preferred over full contract listings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats (B)
[DISCOVERY] get stats counters Returns: { total_blocks: string, total_addresses: string, total_transactions: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return structure with specific fields, which compensates for the lack of output schema. However, it omits other behavioral traits like caching, rate limits, or whether data is real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but awkwardly structured with the '[DISCOVERY]' prefix tag and a run-on sentence combining the action and return value. The information density is good, but the formatting reduces clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameterless tool, the description adequately documents the return structure inline since no output schema exists. However, given the prevalence of similar counter methods among siblings, it should clarify the global scope of these statistics to aid tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters and schema coverage is 100% (trivially). Per the rubric, 0 parameters establishes a baseline score of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'stats counters' and lists specific return fields (total_blocks, total_addresses, total_transactions), indicating global network statistics. However, it does not explicitly distinguish this from sibling counter tools like get_address_counters or get_token_counters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Given siblings like get_address_counters and get_smart_contracts_counters, the description should clarify this returns global/network-wide stats rather than entity-specific counters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token (C)
get token info Returns: { circulating_market_cap: string, icon_url: string, name: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description documents the return object structure with specific fields, which is helpful since no output schema exists. However, it lacks the other behavioral context required when no annotations are present: no mention of caching, rate limits, error cases (e.g., invalid address format), or whether this is a read-only operation (though implied by 'get').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (one sentence fragment), which avoids verbosity but conflates purpose and return value documentation into a single run-on structure. The return value documentation appears as raw JSON/object notation rather than descriptive prose, reducing readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool with no output schema. The return value documentation compensates partially for missing structured output, but the description could clarify that address_hash refers to the token contract address specifically, and explain what 'info' encompasses beyond the three example fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('Address hash'), establishing a baseline of 3. The description adds no supplementary context about what the address represents (e.g., ERC-20 token contract address) or expected format (0x prefix, checksum), relying entirely on the schema's minimal description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the tool retrieves 'token info' and lists specific return fields (circulating_market_cap, icon_url, name), but uses vague phrasing that doesn't distinguish from siblings like get_tokens_list or get_token_counters. The scope (single token lookup vs. list) is implied only by the input parameter, not the description text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like get_tokens_list, get_token_counters, or get_address_token_balances. No mention of prerequisites (e.g., needing the token contract address) or when this specific endpoint is preferred over token-related siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_counters (C)
get token holders Returns: { token_holders_count: string, transfers_count: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash | |
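Despite the misleading 'get token holders' opening, the documented return shape shows this tool yields aggregate counts as strings. A sketch of using it for totals only, deferring to get_token_holders when actual holder records are needed (the choice of which tool to prefer is the editor's inference, not stated in the definitions):

```python
def holders_count(counters_response: dict) -> int:
    """Extract the holder total from the documented counters shape:
    { token_holders_count: string, transfers_count: string }."""
    return int(counters_response["token_holders_count"])
```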
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates for the lack of output schema by specifying the exact return structure with field types ({ token_holders_count: string, transfers_count: string }), which is valuable. However, it omits whether these are all-time counts, time-windowed statistics, or require specific permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one sentence), but poorly structured with a lowercase start, abrupt transition from 'holders' to 'Returns:', and informal JSON-like notation for return values. While not verbose, the awkward phrasing reduces clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with simple scalar returns, the description minimally suffices by providing the return shape. However, it fails to resolve the critical ambiguity with sibling tools or explain the scope of the counters (e.g., total historical vs. current). The confusion between 'holders' (list) and 'counters' (statistics) leaves significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (address_hash is documented as 'Address hash'), establishing a baseline of 3. The description adds no additional semantic context about whether this address refers to the token contract address, a holder address, or another entity, leaving ambiguity that the schema description does not resolve.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'get token holders' which contradicts the tool name 'get_token_counters' and confuses it with the sibling tool 'get_token_holders'. While the return value specification clarifies it returns counts (token_holders_count, transfers_count) rather than holder records, the initial phrasing is misleading and fails to clearly distinguish this statistical tool from the list-fetching alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this counter/statistics tool versus the sibling 'get_token_holders' (which likely returns the actual list of holders) or 'get_token' (which returns token metadata). No prerequisites, filtering options, or error conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
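The counters-versus-holders distinction the review raises can be made concrete. A minimal sketch of preferring the statistics endpoint when only a count is needed, assuming `call_tool` as a hypothetical stand-in for an MCP client invocation and the field names quoted in the review (token_holders_count, transfers_count):

```python
def token_holder_count(call_tool, token_contract):
    """Return just the holder count via get_token_counters instead of
    paginating the full get_token_holders list.

    `call_tool` is a hypothetical MCP-client stand-in; the field names
    follow the counters return shape cited in the review
    (token_holders_count, transfers_count).
    """
    counters = call_tool("get_token_counters", {"address_hash": token_contract})
    # Blockscout-style APIs commonly return large integers as strings.
    return int(counters["token_holders_count"])
```

Usage: an agent that only needs "how many holders?" saves every pagination round-trip the list endpoint would require.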
get_token_holders (Grade: D)
get token holders Returns: { items: { address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, value: string, token_id: string }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It includes a raw dump of the return structure showing 'items' with 'address_hash', 'value', and 'token_id', and the 'next_page_params' field hints at pagination capability. However, it lacks explanation of pagination mechanics, rate limits, or whether this reads from cache or live blockchain state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description mixes an informal lowercase purpose statement with a messy JSON-like return type dump ('Returns: { items...'). The structure is not front-loaded; return value documentation dominates over purpose explanation, and the truncation ('...') suggests copy-pasted auto-generated content without refinement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain data retrieval tool with complex nested return data, the description inadequately explains the token standards supported or how to consume 'next_page_params' for pagination. It fails to leverage the informal return structure to explain what 'value' and 'token_id' represent in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the schema merely describes the parameter as 'Address hash' which is tautological. The description fails to clarify that this should be a token contract address (not a user address), leaving critical semantic ambiguity despite the technical coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description tautologically restates the tool name ('get token holders') without specifying what token standard (ERC-20, ERC-721) or distinguishing from the sibling tool 'get_token_instance_holders'. It fails to establish clear scope or resource boundaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'get_token_instance_holders' or 'get_address_token_balances'. No mention of prerequisites (e.g., requiring a token contract address vs. holder address) or pagination workflow despite the 'next_page_params' in the return structure.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
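The pagination workflow the review finds undocumented can be sketched. A minimal cursor-draining loop, assuming `call_tool` as a hypothetical MCP-client stand-in and assuming (per the common Blockscout convention, not stated by the tool) that `next_page_params` is merged into the next request:

```python
def fetch_all_holders(call_tool, address_hash, max_pages=10):
    """Drain paginated results from get_token_holders.

    Assumptions (not documented by the tool): `next_page_params` is a
    cursor dict merged into the next call, and a falsy cursor means the
    last page. `max_pages` guards against unbounded holder lists.
    """
    items = []
    params = {"address_hash": address_hash}
    for _ in range(max_pages):
        page = call_tool("get_token_holders", params)
        items.extend(page.get("items", []))
        cursor = page.get("next_page_params")
        if not cursor:
            break
        # Re-send the original argument plus the opaque cursor fields.
        params = {"address_hash": address_hash, **cursor}
    return items
```

This is the loop an agent would have to infer on its own; a single sentence in the description ("pass next_page_params back to fetch the next page") would make it explicit.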
get_token_instance_holders (Grade: B)
get token instance holders Returns: { items: { address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, value: string, token_id: string }[], next_page_params: object }.
PREREQUISITE: You MUST first call a listing/map endpoint to resolve asset identifiers (id, slug, symbol) before calling data endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | integer id | |
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates by including the return structure (JSON schema with items, address_hash details, token_id, and next_page_params), which is critical since no output_schema exists. However, it lacks disclosure of rate limits, caching behavior, or permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description suffers from poor formatting: it embeds a raw JSON blob for return types inline, uses inconsistent casing, and starts with a redundant phrase. The prerequisite is clearly marked but the overall structure is messy and hard to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately compensates by detailing the return structure (items array with holder addresses and pagination). However, with messy formatting and no mention of error cases or rate limiting, it meets minimum viability but doesn't excel.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% ('integer id', 'Address hash'), so the baseline applies. The description adds minimal semantic value beyond the schema, though the prerequisite section clarifies that 'id' refers to asset identifiers that must be resolved via listing endpoints first.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a tautological lowercase version of the tool name ('get token instance holders'). However, it distinguishes from sibling 'get_token_holders' by implying NFT/ERC-1155 specificity through the 'token_id' field shown in the return structure. The purpose is decipherable but not crisply stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit prerequisite in all caps: 'You MUST first call a listing/map endpoint to resolve asset identifiers... before calling data endpoints.' This clearly establishes workflow requirements and dependencies, though it doesn't name specific sibling tools as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
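The all-caps prerequisite translates into a two-step workflow. A hedged sketch, assuming `call_tool` as a hypothetical MCP-client stand-in and assuming the listing endpoint's items carry `symbol` and `address_hash` as shown in get_tokens_list's return dump; the exact resolution fields are not documented:

```python
def holders_for_instance(call_tool, symbol, token_id):
    """Resolve identifiers via a listing endpoint first (the documented
    PREREQUISITE), then call the data endpoint.

    Assumptions: get_tokens_list items expose `symbol` and `address_hash`
    (per its quoted return shape); the resolution logic here is
    illustrative, not the server's documented mapping.
    """
    listing = call_tool("get_tokens_list", {"q": symbol})
    matches = [t for t in listing["items"] if t.get("symbol") == symbol]
    if not matches:
        raise ValueError(f"no token found for symbol {symbol!r}")
    contract = matches[0]["address_hash"]
    return call_tool(
        "get_token_instance_holders",
        {"id": token_id, "address_hash": contract},
    )
```

Naming the listing tool explicitly in the prerequisite, rather than "a listing/map endpoint", would spare the agent this guesswork.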
get_tokens_list (Grade: B)
[DISCOVERY] get tokens list Returns: { items: { circulating_market_cap: string, icon_url: string, name: string, decimals: string, symbol: string, address_hash: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | query parameter: q (string) | |
| type | No | query parameter: type (string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the return structure (items array with specific fields, pagination object), which helps. However, it omits safety info (read-only status), rate limits, or pagination mechanics despite mentioning next_page_params.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single dense sentence efficiently packs categorization ([DISCOVERY]), purpose, and return structure. No wasted words; the return JSON makes the description lengthy, but it earns its place given the lack of an output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter discovery tool with no output schema, the inline return documentation provides necessary completeness. However, gaps remain in explaining parameter interaction (can q and type be used together?) and pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with mechanical descriptions ('query parameter: q'). The description adds no parameter guidance (e.g., whether 'q' does fuzzy search or exact match, valid values for 'type'). Baseline 3 is appropriate given schema coverage meets minimum needs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The '[DISCOVERY]' tag and 'get tokens list' phrase clearly identify this as a listing/discovery tool. The inline return structure (items with circulating_market_cap, icon_url, etc.) clarifies scope, distinguishing it from sibling 'get_token' (single lookup) or 'get_address_tokens' (user-specific balances).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs. siblings like 'get_main_page_tokens' or 'search'. No explanation of how to combine the 'q' and 'type' parameters or when pagination (next_page_params) is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
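The open question about combining `q` and `type` can at least be handled defensively. A small helper, hypothetical in its name, that builds the optional query while flagging the undocumented interaction:

```python
def tokens_list_params(q=None, type_=None):
    """Build the optional query dict for get_tokens_list, omitting unset
    parameters.

    Whether `q` and `type` combine (AND filter, or one ignored) is not
    documented, so joint use is an assumption the agent must verify
    against actual results.
    """
    raw = {"q": q, "type": type_}
    return {k: v for k, v in raw.items() if v is not None}
```

Usage: `tokens_list_params(q="USD", type_="ERC-20")` yields both keys; a description stating "q performs a fuzzy name/symbol search; type filters by token standard and composes with q" would remove the ambiguity entirely.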
get_token_token_transfers (Grade: C)
get token token transfers Returns: { items: { token_type: "ERC-20" | "ERC-721" | "ERC-1155" | "ERC-404", block_hash: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, log_index: number, method: string, timestamp: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| address_hash | Yes | Address hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially compensates by embedding the return structure (items array with token_type, from address object, next_page_params), revealing pagination support and data shape. However, it omits read-only status, rate limits, and error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Poorly structured with tautological front-loading ('get token token transfers') followed by a dense JSON-like return schema dump. The format is difficult to parse and mixes action description with type definitions inefficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the inline return schema compensates for the lack of structured output schema, the description is critically incomplete given the tool ecosystem. With four similar transfer endpoints available, the failure to specify this tool's unique scope (token-specific vs address-specific vs transaction-specific) creates significant selection ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (single parameter 'address_hash' is documented as 'Address hash'). The description adds no additional semantics to clarify whether this hash represents a token contract address, user address, or either, relying entirely on the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'get token token transfers' which tautologically restates the tool name. While it includes return type information showing ERC-20/ERC-721 transfer data, it fails to distinguish this tool from similar siblings (get_address_token_transfers, get_token_transfers, get_transaction_token_transfers) or clarify what the 'token token' scope refers to.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the three other token transfer endpoints (get_address_token_transfers, get_token_transfers, get_transaction_token_transfers). Does not clarify whether address_hash refers to a token contract or user wallet, nor are prerequisites or filtering capabilities mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
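The disambiguation the review calls for could live directly in the tool definition. A hypothetical rewrite, assuming (as the tool name suggests, though the current description never confirms) that `address_hash` is the token contract address; the sibling tool names are real, the wording is illustrative:

```python
# Illustrative only: a description that resolves the ambiguities noted above.
improved_description = (
    "List all transfers of a specific token. `address_hash` is the TOKEN "
    "CONTRACT address, not a wallet (assumption to confirm against the API). "
    "For transfers involving a wallet, use get_address_token_transfers; for "
    "transfers within a single transaction, use "
    "get_transaction_token_transfers. Results are paginated via "
    "next_page_params."
)
```

Twenty-odd words of "use X instead of Y when Z" like this would lift the tool selection score on its own.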
get_token_transfers (Grade: C)
[DISCOVERY] get token transfers Returns: { items: { token_type: "ERC-20" | "ERC-721" | "ERC-1155" | "ERC-404", block_hash: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, log_index: number, method: string, timestamp: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It documents the return structure (items array with token details, from addresses, timestamps) and hints at pagination via next_page_params, which is valuable. However, it omits operational details like result set limits, default time ranges, rate limits, or whether this queries the full blockchain history or just recent blocks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Poor structure: prepends mysterious '[DISCOVERY]' tag and immediately dumps inline JSON schema without explanatory context. While not verbose, the dense unformatted object dump hinders readability. The return structure documentation would be better served by actual output schema or cleaner formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Critical gaps remain: no explanation of what determines the returned transfer set (all time? last 24h? block range?), no differentiation from sibling tools, and no mention of API limits. The inline return schema partially compensates for lack of formal output schema but is insufficient for a discovery tool with ambiguous scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (empty object). Description correctly implies no input parameters are needed by focusing entirely on return values. With 0 parameters, baseline score applies as no parameter explanation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'get token transfers' with return type details (ERC-20/721/1155/404), but fails to distinguish from siblings like get_address_token_transfers or get_transaction_token_transfers. Doesn't clarify whether this returns global recent transfers, filtered results, or requires specific context to function despite having zero parameters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the four sibling transfer tools (get_address_token_transfers, get_transaction_token_transfers, get_token_token_transfers, get_nft_instance_transfers). No mention of prerequisites, default filtering behavior, or pagination requirements despite the next_page_params return field.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_internal_txs (Grade: C)
get transaction internal transactions Returns: { items: { block_number: number, created_contract: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, error: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, gas_limit: string, index: number, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It partially compensates by detailing the return structure (items array with block_number, created_contract, error fields, pagination via next_page_params), but omits operational context like rate limits, error handling behavior, or whether this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured as a dense, single-run sentence containing a pseudo-JSON return type dump. The 'Returns:' information is valuable but presented as an unreadable inline object literal rather than structured documentation, hurting scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking a formal output schema, the description includes a detailed pseudo-schema of the return value, which helps agents understand the nested structure of internal transactions. However, it lacks domain context explaining that internal transactions represent contract calls triggered by the main transaction.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (transaction_hash is documented). The description adds no further semantic details about the parameter format (e.g., 0x-prefix requirements, hex length) or examples, meeting the baseline expectation when the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The opening phrase 'get transaction internal transactions' is tautological, merely restating the tool name with spaces added. While the 'Returns:' section clarifies the data structure, the description fails to distinguish this tool from siblings like 'get_internal_transactions' or explain what constitutes an 'internal transaction' in the blockchain domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. Given siblings like 'get_address_internal_txs' (which filters by address) and 'get_internal_transactions' (likely a broader query), the description should specify this tool requires a specific transaction hash and retrieves internal calls within that single transaction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
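The missing format guidance for `transaction_hash` (0x prefix, hex length) is easy to encode client-side. A fail-fast check based on the standard EVM convention of a 0x-prefixed 32-byte hex string, which the tool docs do not state explicitly:

```python
import re

# Standard EVM transaction hash: "0x" followed by 64 hex characters.
# The tool description never states this; it is the conventional format.
TX_HASH_RE = re.compile(r"^0x[0-9a-fA-F]{64}$")

def validate_tx_hash(transaction_hash):
    """Reject malformed hashes before spending a tool call on them."""
    if not TX_HASH_RE.fullmatch(transaction_hash):
        raise ValueError(f"not a valid transaction hash: {transaction_hash!r}")
    return transaction_hash
```

A single example value in the parameter description (e.g. the expected 0x-prefixed 64-hex shape) would make this check unnecessary for agents.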
get_transaction_logs (Grade: D)
get transaction logs Returns: { items: { address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, block_hash: string, block_number: number, data: string, decoded: { method_call: unknown, method_id: unknown, parameters: unknown }, index: number, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but discloses minimal behavioral traits. It does not indicate whether the operation is read-only, idempotent, or requires authentication. The description dumps a partial return type structure (items, address_hash fields) but uses unhelpful 'unknown' types without explaining what data is actually returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured as a run-on text starting with a lowercase verb. It inappropriately embeds a complex JSON-like return type definition ('Returns: { items: { address_hash...') within the description text, creating noise without adding clarity. This is not conciseness but under-formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has only one parameter but numerous conceptual siblings in the blockchain explorer domain, the description fails to provide necessary comparative context. While it attempts to describe the return data (compensating for the lack of an output schema), the format is unusable and does not explain pagination (next_page_params) or the relationship between items.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'transaction_hash' parameter. The description adds no additional semantic context about the parameter format or constraints, meeting the baseline of 3 for well-documented schemas but not exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'get transaction logs' which tautologically restates the tool name. It fails to distinguish this tool from siblings like 'get_address_logs' or 'get_transaction_internal_txs', leaving the agent uncertain whether this retrieves event logs, internal transactions, or trace data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. With numerous sibling tools handling logs, transactions, and addresses, the description offers no selection criteria, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
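The `decoded` field with its 'unknown' types invites a defensive access pattern. A sketch that extracts decoded event signatures from one page of results, assuming (not documented) that `decoded` is null for logs the explorer could not decode:

```python
def decoded_method_calls(logs_page):
    """Collect decoded event signatures from a get_transaction_logs page.

    Assumption: `decoded` is null/absent for undecoded logs, so they are
    skipped rather than raising a KeyError. The tool description gives
    only 'unknown' types here, so this behavior must be verified.
    """
    return [
        item["decoded"]["method_call"]
        for item in logs_page.get("items", [])
        if item.get("decoded")
    ]
```

Documenting when `decoded` is populated (verified-contract ABIs only? known signatures?) would turn this guesswork into a reliable contract.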
get_transaction_raw_trace (Grade: C)
get transaction raw trace
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers only the implied read-only nature from the word 'get'. It fails to explain what a 'raw trace' contains (EVM execution steps, call frames, gas costs), the response format, potential data size, or error conditions for invalid hashes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While extremely brief at only four words, this represents under-specification rather than effective conciseness. The description is front-loaded but fails to earn its place by providing actionable information about the tool's function or distinguishing characteristics.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having only one parameter, the tool operates in a complex blockchain domain where 'raw trace' is a specific technical concept (typically EVM execution traces). The description inadequately explains the output format or content, leaving agents unprepared for what this tool returns compared to siblings that provide processed views of transaction data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with the transaction_hash parameter documented as 'Transaction hash'. The description adds no additional semantic value (format details, 0x prefix requirements, length), but baseline 3 is appropriate given the schema already fully documents the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'get transaction raw trace' is a tautology that restates the tool name with spaces added. While it identifies the resource (transaction trace), it fails to distinguish this tool from numerous siblings like get_transaction_internal_txs, get_transaction_logs, or get_tx, leaving the agent uncertain about what makes this 'raw trace' different from other transaction data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines are provided. Given the presence of over a dozen sibling tools dealing with transaction data (get_tx, get_transaction_logs, get_transaction_internal_txs, etc.), the absence of when-to-use guidance or differentiation from alternatives forces the agent to guess which tool provides the specific data it needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_state_changes (Grade: C)
get transaction state changes Returns: { items: { token: { circulating_market_cap: unknown, icon_url: unknown, name: unknown, decimals: unknown, symbol: unknown, address_hash: unknown, ... }, type: string, is_miner: boolean, address_hash: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, balance_before: string, balance_after: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return structure (including pagination via next_page_params and balance deltas), but offers no explanation of scope limitations, rate limits, or what constitutes a 'state change' in this context (e.g., storage slots vs balances).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one run-on sentence mixing the tautological purpose statement with a dense JSON-like return type dump. This structure is difficult to parse and fails to front-load the most important conceptual information before technical details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description attempts to compensate for the missing output schema by including return type fields (items, token, balance_before, etc.), it lacks conceptual explanation of what the tool returns and why. Given the complex nested return structure shown, the description should clarify what 'state changes' means in the context of blockchain transactions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single transaction_hash parameter. The description adds no additional semantic information about the parameter format or validation rules, meeting the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with a tautology ('get transaction state changes' restates the tool name). While the embedded return type structure hints at functionality (showing balance_before/balance_after fields), it fails to explain what distinguishes 'state changes' from sibling tools like get_transaction_token_transfers or get_transaction_logs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. There is no indication of when to use this tool versus the numerous sibling transaction inspection tools (get_transaction_logs, get_internal_transactions, etc.), nor any prerequisites or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_summary (C)
get human-readable transaction summary Returns: { success: boolean, data: { summaries: { summary_template: string, summary_template_variables: unknown }[] } }.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash | |
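A sketch of flattening this return shape into plain text. Assuming `{name}`-style placeholders in `summary_template` and a flat dict in `summary_template_variables` — both are assumptions about the payload, not documented behavior.

```python
def render_summaries(response):
    """Substitute template variables into each summary string."""
    out = []
    for s in response["data"]["summaries"]:
        text = s["summary_template"]
        # Placeholder syntax is assumed; the docs only say the field is a string
        for name, value in (s["summary_template_variables"] or {}).items():
            text = text.replace("{" + name + "}", str(value))
        out.append(text)
    return out

# Illustrative sample matching the documented shape
sample = {
    "success": True,
    "data": {"summaries": [{
        "summary_template": "Transfer {amount} USDT to {to}",
        "summary_template_variables": {"amount": "5", "to": "0xdef"},
    }]},
}

print(render_summaries(sample))  # ['Transfer 5 USDT to 0xdef']
```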
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return structure inline (success flag, data.summaries array with template/variables), which compensates for the lack of a structured output schema. However, it lacks info on auth requirements, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence combines purpose with return type documentation. The inline JSON return structure is informationally dense but creates awkward readability. No wasted words, though structure could be clearer with separation of concerns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex blockchain domain with 50+ sibling tools, the description should clarify when human-readable summaries are preferred over raw data. However, the inclusion of return structure provides essential behavioral context that partially compensates for missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (transaction_hash fully described), establishing baseline 3. Description adds no additional parameter context (format expectations, validation rules, examples), but none is needed given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the tool retrieves a 'human-readable transaction summary' (specific verb + resource), but fails to distinguish from siblings like get_tx, get_transaction_logs, or get_transaction_token_transfers. The 'human-readable' qualifier hints at differentiation but doesn't explicitly contrast with raw data alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the 10+ sibling transaction tools (e.g., get_tx for raw data vs this for summaries). No prerequisites, conditions, or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_token_transfers (C)
get transaction token transfers Returns: { items: { token_type: "ERC-20" | "ERC-721" | "ERC-1155" | "ERC-404", block_hash: string, from: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, log_index: number, method: string, timestamp: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| transaction_hash | Yes | Transaction hash | |
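A sketch of tallying transfers by the documented `token_type` enum. The sample payload is illustrative, built from the return shape above.

```python
from collections import Counter

def transfers_by_type(response):
    """Count transfer items per token standard (ERC-20, ERC-721, ...)."""
    return Counter(item["token_type"] for item in response.get("items", []))

# Illustrative sample matching the documented shape
sample = {
    "items": [
        {"token_type": "ERC-20", "log_index": 0, "method": "transfer"},
        {"token_type": "ERC-20", "log_index": 1, "method": "transfer"},
        {"token_type": "ERC-721", "log_index": 2, "method": "safeTransferFrom"},
    ],
    "next_page_params": {},
}

print(dict(transfers_by_type(sample)))
```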
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return structure including token types (ERC-20, ERC-721, etc.) and pagination hints (next_page_params), but omits information about rate limits, authentication requirements, or read-only safety guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single unstructured blob that awkwardly concatenates the action phrase with a JSON-like return schema dump. This makes it difficult to parse and wastes readability; the return information should ideally be in an output_schema field rather than inline.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description includes return type details (compensating somewhat for the lack of output schema), it lacks critical context given the crowded sibling tool namespace. It does not explain the relationship between this tool and the 40+ related transaction/address/token tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting both the transaction_hash and type parameters. The description adds no additional parameter context, but the baseline score of 3 is appropriate given the schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves token transfers ('get transaction token transfers'), which is a clear verb-resource combination. However, it fails to differentiate from siblings like get_address_token_transfers or get_token_transfers, and doesn't clarify that it filters specifically by transaction hash.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as get_address_token_transfers or get_token_transfers. There are no prerequisites, exclusion criteria, or workflow recommendations mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tx (C)
get transaction info Returns: { timestamp: string, fee: { type: string, value: string }, gas_limit: number, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| transaction_hash | Yes | Transaction hash | |
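A sketch of reading the documented `fee` object from a get_tx-style payload. Treating `fee.value` as a decimal wei string is an assumption; the docs only type it as a string.

```python
def fee_in_wei(tx):
    """Extract the fee from the documented { type, value } object."""
    return int(tx["fee"]["value"])  # value assumed to be a decimal wei string

# Illustrative sample matching the documented shape
sample = {
    "timestamp": "2024-01-01T00:00:00Z",
    "fee": {"type": "actual", "value": "21000000000000"},
    "gas_limit": 21000,
}

print(fee_in_wei(sample))  # 21000000000000
```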
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full disclosure burden. It helpfully documents the return structure (timestamp, fee object, gas_limit) which compensates for the missing output schema. However, it omits other critical behavioral traits: read-only status, idempotency, error handling (e.g., invalid hash format), or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the action, but its structure is awkward (lowercase start, odd colon usage) because it embeds JSON-like return syntax within the sentence. While information-dense, the formatting slightly hinders readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single required parameter) and lack of output schema, the description adequately covers the essentials: purpose, input requirement, and return value structure. For a simple lookup operation, this is sufficient context despite missing usage guidelines.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (transaction_hash is well-described in the schema), the description meets the baseline expectation. The description text does not add additional semantic context about the parameter (e.g., expected format, example hashes), but the schema is sufficient for this single-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic purpose ('get transaction info') and identifies the resource (transaction), but fails to differentiate from siblings like get_transaction_summary, get_transaction_logs, or get_transaction_token_transfers. The return value hints at core transaction data (timestamp, fee, gas_limit), but doesn't explicitly clarify scope versus alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the numerous sibling transaction-related tools (get_transaction_summary, get_transaction_internal_txs, etc.). No prerequisites, filters, or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_txs (B)
[DISCOVERY] get transactions Returns: { items: { timestamp: string, fee: { type: unknown, value: unknown }, gas_limit: number, block_number: number, status: string, method: string, ... }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | query parameter: type (string) | |
| filter | No | query parameter: filter (string) | |
| method | No | query parameter: method (string) | |
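A sketch of draining a paginated endpoint shaped like this one, where each page carries `items` and `next_page_params`. How `next_page_params` feeds back into the next request is an assumption; the fake fetcher below stands in for the real call.

```python
def iter_all(fetch_page):
    """Yield items across pages until next_page_params is empty/None."""
    params = None
    while True:
        page = fetch_page(params)
        yield from page["items"]
        params = page.get("next_page_params")
        if not params:  # assumed exhaustion signal
            break

# Fake two-page response sequence matching the documented shape
pages = [
    {"items": [{"status": "ok", "block_number": 2}],
     "next_page_params": {"block_number": 2}},
    {"items": [{"status": "ok", "block_number": 1}],
     "next_page_params": None},
]
page_iter = iter(pages)
items = list(iter_all(lambda params: next(page_iter)))
print(len(items))  # 2
```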
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and provides the complete return structure including pagination fields (next_page_params) and item schema. However, it omits rate limits, auth requirements, and whether results are real-time or cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single-line format efficiently packs the return schema documentation inline. While dense, every element serves a purpose—identifying the discovery nature, action, and response format without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation given the return structure is documented and all parameters have schema entries. However, lacks critical differentiation from sibling transaction tools and doesn't explain the optional parameter behavior (all 3 params are optional).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with basic param names and examples. The description adds no additional semantic context about how the type, filter, and method parameters interact or their valid combinations, relying entirely on schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states only 'get transactions' and does not distinguish this tool from numerous siblings like get_address_txs, get_block_txs, or get_main_page_txs. The [DISCOVERY] prefix hints at broad listing functionality but lacks explicit scope definition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives such as get_address_txs (address-filtered) or get_block_txs (block-filtered). No mention of prerequisites or filtering strategies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_txs_chart (B)
get transactions chart Returns: { chart_data: { date: string, transactions_count: number }[] }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
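A sketch of aggregating the documented `chart_data` array. The sample payload is illustrative; the actual time range and granularity are, as noted below, unspecified.

```python
def total_transactions(response):
    """Sum transactions_count across all chart points."""
    return sum(point["transactions_count"] for point in response["chart_data"])

# Illustrative sample matching the documented shape
sample = {"chart_data": [
    {"date": "2024-01-01", "transactions_count": 120},
    {"date": "2024-01-02", "transactions_count": 95},
]}

print(total_transactions(sample))  # 215
```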
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses the return structure (array of date/count objects) which compensates for the lack of formal output schema, but omits operational details like time range limits, caching behavior, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the action. The inline return type specification is efficiently appended, though the formatting ('Returns:') is slightly informal. No extraneous content is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter endpoint, describing the return structure is the primary requirement, which is satisfied. However, it lacks critical context for a chart endpoint: the time range (e.g., last 30 days, all-time) and granularity (daily, hourly) are not specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline score of 4 per evaluation rules. No parameter documentation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb ('get') and resource ('transactions chart') and includes return type information. However, it lacks specificity on scope (global vs. filtered) and time granularity, which is ambiguous given sibling tools like get_address_txs and get_market_chart.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like get_market_chart, get_stats, or get_main_page_txs. The agent must infer usage solely from the name and return type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_withdrawals (C)
[DISCOVERY] get withdrawals Returns: { items: { index: number, amount: string, validator_index: number, receiver: { hash: unknown, implementation_name: unknown, name: unknown, ens_domain_name: unknown, metadata: unknown, is_contract: unknown, ... }, block_number: number, timestamp: string }[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
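A sketch of grouping the documented withdrawal items by `validator_index`. The sample is illustrative; treating `amount` as a decimal string (the docs do not name its unit) is an assumption.

```python
from collections import defaultdict

def amount_by_validator(response):
    """Sum withdrawal amounts per validator index."""
    totals = defaultdict(int)
    for w in response.get("items", []):
        totals[w["validator_index"]] += int(w["amount"])  # unit assumed, see lead-in
    return dict(totals)

# Illustrative sample matching the documented shape
sample = {"items": [
    {"index": 0, "amount": "1000", "validator_index": 7,
     "block_number": 10, "timestamp": "2024-01-01T00:00:00Z"},
    {"index": 1, "amount": "2000", "validator_index": 7,
     "block_number": 11, "timestamp": "2024-01-01T00:01:00Z"},
    {"index": 2, "amount": "500", "validator_index": 9,
     "block_number": 11, "timestamp": "2024-01-01T00:01:00Z"},
], "next_page_params": {}}

print(amount_by_validator(sample))  # {7: 3000, 9: 500}
```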
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the detailed return structure (items with validator_index, receiver, etc.) which adds value, but omits pagination behavior, rate limits, and whether results are sorted (chronologically?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The inline JSON return type creates poor readability. The [DISCOVERY] tag appears without explanation. While not verbose, the structure is messy and front-loads implementation details over purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a paginated list endpoint returning complex nested objects, the description lacks crucial context: what constitutes a 'withdrawal' (consensus layer withdrawals?), how pagination works (next_page_params usage), and why [DISCOVERY] is flagged. The return structure compensates slightly for missing output schema but isn't sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, establishing baseline 4. No parameter description needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'get withdrawals' restates the tool name without specifying scope (e.g., all withdrawals, recent, or paginated). It fails to distinguish from siblings get_address_withdrawals and get_block_withdrawals. The [DISCOVERY] prefix is unexplained jargon.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus get_address_withdrawals or get_block_withdrawals. No mention of pagination requirements or filtering capabilities despite the return structure suggesting paginated results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (C)
If the requested service is unknown, the call will fail with status NOT_FOUND. Returns: { status: "UNKNOWN" | "SERVING" | "NOT_SERVING" | "SERVICE_UNKNOWN" }.
| Name | Required | Description | Default |
|---|---|---|---|
| service | No | query parameter: service (string) | |
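A sketch of interpreting the documented status enum. Treating only `SERVING` as healthy is an assumption consistent with the gRPC-style enum values above, but the tool docs do not state it.

```python
def is_healthy(response):
    """True only for 'SERVING'; the other documented enum values
    ('UNKNOWN', 'NOT_SERVING', 'SERVICE_UNKNOWN') indicate a
    degraded or unknown state."""
    return response["status"] == "SERVING"

print(is_healthy({"status": "SERVING"}))      # True
print(is_healthy({"status": "NOT_SERVING"}))  # False
```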
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It documents the return value enum (UNKNOWN, SERVING, etc.) and one error condition (fails with NOT_FOUND for unknown services), but lacks information on auth requirements, rate limits, or what 'health' encompasses in this context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences of moderate length. It is not front-loaded (it fails to state the purpose first), but it contains no wasted words. The structure prioritizes error handling over primary function, which is suboptimal for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without output schema, the description partially compensates by documenting return values inline. However, it lacks a clear purpose statement and guidance on what constitutes a valid 'service' value, leaving gaps for agent comprehension.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description mentions 'requested service' which loosely maps to the 'service' parameter, but adds no additional semantic detail (e.g., expected format, valid service names, or that it is optional) beyond the schema's 'query parameter: service (string)'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description implies the tool checks service status through references to 'requested service' and return states like 'SERVING', but never explicitly states the core purpose (e.g., 'Check the health status of a service'). It relies heavily on the tool name 'health_check' to convey intent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus the many sibling data-retrieval tools (get_block, get_tx, etc.). The description mentions a failure condition for unknown services but doesn't explain when health checks should be performed or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
refetch_token_instance_metadata (C)
[DISCOVERY] re-fetch token instance metadata Returns: { message: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | integer id | |
| address_hash | Yes | Address hash | |
| recaptcha_response | Yes | recaptcha_response (string) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden but offers minimal behavioral context. It mentions the return type ('{ message: string }') but fails to explain what 're-fetching' entails (e.g., updating a cache, triggering external API calls to IPFS/metadata servers), why recaptcha is required, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded with the core action. However, the bracketed '[DISCOVERY]' tag and appended return type create a slightly awkward structure that deviates from standard prose descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool likely performing a mutation (refreshing/updating metadata records) with no annotations and no formal output schema, the description is insufficient. It lacks critical context about side effects, the external dependencies involved in 're-fetching', or the implications of the recaptcha-gated access.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meeting the baseline. The description adds no additional parameter context (e.g., explaining that 'id' refers to the token instance ID, 'address_hash' is the contract address, or why recaptcha authentication is necessary), but the schema adequately documents basic types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('re-fetch') and resource ('token instance metadata'), distinguishing it from sibling 'get' tools. However, the '[DISCOVERY]' prefix is opaque without additional context, and it doesn't clarify what distinguishes a 'token instance' from other token types in this API.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like 'get_nft_instance' or 'get_token'. While 're-fetch' implies refreshing stale data, it doesn't explicitly state the trigger condition (e.g., 'use when metadata appears outdated') or warn against unnecessary use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (C)
[DISCOVERY] search Returns: { items: string[], next_page_params: object }.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | query parameter: q (string) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and partially succeeds by disclosing the return structure ({ items: string[], next_page_params: object }), indicating pagination support. However, it omits other behavioral traits like authentication requirements, rate limits, or what the string items represent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (one line), but the '[DISCOVERY]' prefix appears to be metadata or a category tag rather than descriptive prose, making the structure somewhat cryptic. While no words are wasted, the formatting reduces clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one optional parameter and no output schema, the description partially suffices by mentioning returns, but remains incomplete. It fails to explain what entities are searchable, how to interpret the string items, or how pagination parameters function, which are critical gaps for a discovery/search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description does not add meaning to the 'q' parameter beyond the schema's 'query parameter: q (string)' documentation, nor does it provide query syntax examples or constraints beyond the single example 'USDT'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description '[DISCOVERY] search' is tautological and fails to specify what resources are being searched (addresses, tokens, blocks, etc.) or the domain context. While the return structure is mentioned, the core purpose remains vague without specifying the scope of the search operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this generic search versus the numerous specific getters (get_address, get_token, get_block, etc.) or versus search_redirect. The description lacks any when-to-use or when-not-to-use criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_redirect (C)
search redirect Returns: { parameter: string, redirect: boolean, type: string }.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | query parameter: q (string) | |
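A sketch of interpreting the documented `{ parameter, redirect, type }` return. The mapping of `type` values to entity kinds (e.g. 'address') is an assumption; the docs only type the field as a string.

```python
def resolve(search_result):
    """Return a (type, parameter) pair when the result is a redirect,
    else None. The 'address' value used below is hypothetical."""
    if search_result["redirect"]:
        return (search_result["type"], search_result["parameter"])
    return None

print(resolve({"parameter": "0xabc", "redirect": True, "type": "address"}))
# ('address', '0xabc')
print(resolve({"parameter": "", "redirect": False, "type": ""}))  # None
```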
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions return fields (parameter, redirect, type) but does not explain their semantics, possible values, or what conditions trigger redirect=true. Missing: side effects, rate limits, and error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a sentence fragment ('search redirect') awkwardly concatenated with return-type information. The structure is poor: return values should be documented in an output schema, not crammed into the description text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of a sibling 'search' tool and 30+ getter tools, the description fails to clarify this tool's specific role in the search ecosystem. With no annotations and no output schema, the description should explain the redirect logic and return value meanings, which it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('query parameter: q'), so the baseline is 3. The description adds no additional parameter context (e.g., expected query formats, examples of valid inputs, or whether 'q' supports wildcards).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The phrase 'search redirect' is tautological, merely restating the tool name without explaining what resource is being searched or what constitutes a 'redirect' in this context. While it mentions the return structure, it fails to specify the tool's actual function (e.g., determining if a query matches a specific entity requiring redirection).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sibling 'search' tool or other lookup tools. No mention of prerequisites, query format requirements, or decision criteria for selecting this over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
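As a sketch of what the review is asking for, a fuller definition might look like the following. The redirect semantics described here are assumptions inferred from the field names, not confirmed behavior:

```json
{
  "name": "search_redirect",
  "description": "Check whether a query resolves unambiguously to a single entity. Returns { parameter, redirect, type }: when redirect is true, 'type' names the matched entity kind and 'parameter' is its canonical identifier. Use this for exact-match lookups (full address, tx hash, block number); use 'search' for open-ended discovery. Read-only; no side effects.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "q": {
        "type": "string",
        "description": "Exact query, e.g. a full address, transaction hash, or block number."
      }
    }
  }
}
```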
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.