Glama

Server Details

Server: Etherscan Api - 71 tools for data, metrics, and on-chain analytics
Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: junct-bot/etherscan-api-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

71 tools
check_contract_execution_status (Grade: C)

Check Contract Execution Status — Returns the status code of a contract execution.

Parameters (JSON Schema)
  txhash (required): query parameter: txhash (string)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full disclosure burden. While it mentions returning a 'status code,' it fails to explain what values indicate success vs. failure in the Ethereum context (e.g., 0x0 = failed, 0x1 = success), nor does it indicate this is a read-only operation or describe error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence with em-dash structure is efficient and front-loaded with the action. However, given the lack of annotations and output schema, it is overly terse: every word earns its place, but the sentence lacks the substance needed for a zero-annotation tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should explain the status code semantics and return structure. As written, it leaves critical gaps: users don't know what status codes mean, the data type returned, or success/failure indicators.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage describing txhash as a query parameter. The description adds no additional context (e.g., that it must be an Ethereum transaction hash, format requirements, or where to obtain it), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks contract execution status and returns a status code, using specific verbs and identifying the Ethereum contract domain. It implicitly distinguishes from sibling check_source_code_verification_status (code vs. execution), though it could better clarify the relationship with check_transaction_receipt_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives, prerequisites (e.g., needing a transaction hash from a prior transaction), or that it applies only to contract transactions versus simple transfers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
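
The missing status-code semantics the review calls out can be made concrete. Below is a minimal sketch, assuming this tool wraps Etherscan's public getstatus endpoint (module=transaction, action=getstatus) and its isError/errDescription result fields; the tool description itself documents none of this:

```python
from urllib.parse import urlencode

BASE = "https://api.etherscan.io/api"

def build_getstatus_url(txhash: str, api_key: str) -> str:
    """Build the contract-execution-status query URL (no request is made here)."""
    return BASE + "?" + urlencode({
        "module": "transaction",
        "action": "getstatus",
        "txhash": txhash,
        "apikey": api_key,
    })

def interpret_execution_status(result: dict) -> str:
    """Map the assumed result shape to a verdict.

    Assumption: {"isError": "0"} means the contract executed successfully,
    {"isError": "1", "errDescription": "..."} means it failed.
    """
    if result.get("isError") == "0":
        return "success"
    return "failed: " + result.get("errDescription", "unknown error")

# Illustrative only; the result dicts mimic the assumed response shape.
print(interpret_execution_status({"isError": "0"}))
print(interpret_execution_status({"isError": "1",
                                  "errDescription": "Bad jump destination"}))
```

Because the call is read-only, an agent can safely retry it; the description would score higher if it said so.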

check_source_code_verification_status (Grade: B)

Check Source Code Verification Status — Returns the success or error status of a contract verification request.

Parameters (JSON Schema)
  guid (required): query parameter: guid (string)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns 'success or error status,' which is helpful behavioral context. However, it lacks details on rate limiting, state transitions (pending vs final), or whether this is safe to poll repeatedly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with em-dash structure is appropriately compact. Minor deduction for front-loading the tool name (slight redundancy), but the second clause efficiently communicates the return value without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter status tool without output schema, the description meets minimum viability by indicating what status information is returned. However, it should mention the relationship to verify_source_code and potentially advise on polling intervals given the async nature of blockchain verification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the guid parameter well-documented as a query parameter. The description implies the GUID relates to a 'contract verification request' but doesn't add syntax details, format constraints, or examples beyond what's in the schema. Baseline 3 is appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the specific domain (source code verification) and action (checking status), distinguishing it from siblings like check_contract_execution_status. However, it doesn't explicitly clarify that this checks the status of a previously submitted verification request (implied but not stated).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, nor does it mention the prerequisite of having submitted a verification request (via verify_source_code) to obtain the required GUID. No polling guidance provided despite this being an async status check.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
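
The polling guidance the review finds missing might look like the following sketch. Assumptions, not documented by the tool: the GUID comes from a prior verify_source_code submission, the underlying endpoint is Etherscan's checkverifystatus, and a pending verification returns a result string beginning with "Pending":

```python
import time
from urllib.parse import urlencode

def build_checkverify_url(guid: str, api_key: str) -> str:
    """Build the verification-status query URL (assumed endpoint shape)."""
    return "https://api.etherscan.io/api?" + urlencode({
        "module": "contract",
        "action": "checkverifystatus",
        "guid": guid,
        "apikey": api_key,
    })

def is_final(result_text: str) -> bool:
    """Verification is async; a result starting with 'Pending' means poll again."""
    return not result_text.startswith("Pending")

def poll(fetch, guid: str, api_key: str,
         interval: float = 5.0, attempts: int = 10) -> str:
    """Poll until the status is final. `fetch` is injected (a callable taking
    the URL and returning the result text) so the sketch needs no network."""
    url = build_checkverify_url(guid, api_key)
    for _ in range(attempts):
        result = fetch(url)
        if is_final(result):
            return result
        time.sleep(interval)
    return "Pending (gave up)"
```

The injected `fetch` and the specific result strings are hypothetical; the point is the poll-until-final pattern the description never mentions.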

check_transaction_receipt_status (Grade: A)

Check Transaction Receipt Status — Returns the status code of a transaction execution. 📝 Note: Only applicable for post Byzantium Fork transactions.

Parameters (JSON Schema)
  txhash (required): query parameter: txhash (string)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It adds valuable context about the Byzantium Fork limitation and clarifies the return is a status code rather than full receipt data. However, it omits what the status codes represent (e.g., 0=failed, 1=success), error handling for invalid hashes, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences. The first front-loads the core purpose (check status/return status code), and the second provides the critical historical constraint. No redundant or filler text is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single-parameter simplicity and 100% schema coverage, the description adequately covers inputs. However, with no output schema provided, the description should ideally explain the status code semantics (possible values and meanings) to be complete, which is absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the txhash parameter documented as a query parameter. The description does not add semantic meaning beyond the schema (e.g., format requirements, expected transaction hash format), warranting the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it checks transaction receipt status and returns the status code of transaction execution. It implicitly distinguishes from sibling `eth_gettransactionreceipt` (which returns full receipt objects) by specifying 'status code,' but does not explicitly contrast with `check_contract_execution_status` or other status-checking tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides one important constraint: it is 'Only applicable for post Byzantium Fork transactions.' However, it lacks explicit guidance on when to use this tool versus `eth_gettransactionreceipt` or `check_contract_execution_status`, and does not mention prerequisites like transaction finality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
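
A sketch of the status-code semantics the review says are omitted. The result shape is an assumption modeled on Etherscan's gettxreceiptstatus endpoint, where status "1" means the transaction succeeded, "0" means it reverted, and the field is empty for pre-Byzantium transactions (matching the tool's own caveat):

```python
def interpret_receipt_status(result: dict) -> str:
    """Interpret the assumed {"status": "1" | "0" | ""} result shape."""
    status = result.get("status", "")
    if status == "1":
        return "success"
    if status == "0":
        return "failed"
    # Pre-Byzantium receipts carry no status field, per the tool's note.
    return "unknown (possibly pre-Byzantium)"

print(interpret_receipt_status({"status": "1"}))
```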

eth_blocknumber (Grade: B)

eth_blockNumber — Returns the number of most recent block

Parameters (JSON Schema)
  No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Returns'), but lacks details about response format (hex string vs integer), whether this reflects the latest finalized or pending block, or latency/caching characteristics typical of blockchain node queries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at one sentence. The em-dash formatting with the raw method name 'eth_blockNumber' is slightly redundant given the tool's name, but the sentence structure is front-loaded and efficient with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameter-less utility that returns a single scalar value, the description adequately covers the core function. While it could specify the return type or chain context, the current description is sufficient given the tool's simplicity and the absence of an output schema requiring explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters. According to the evaluation rubric, zero parameters warrants a baseline score of 4, as there are no parameter semantics to describe beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the number of the most recent block, which distinguishes it from sibling block-fetching tools like `eth_getblockbynumber` (which returns full block data) and `get_block_number_by_timestamp` (which looks up historical blocks). However, it could explicitly clarify that this retrieves the current head of the chain versus historical lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention that this is for the latest/current block only, or that users should use `get_block_number_by_timestamp` for past blocks or `eth_getblockbynumber` if they need full block data rather than just the number.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
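
The hex-vs-integer ambiguity the review flags is a real pitfall: Ethereum JSON-RPC quantities come back as 0x-prefixed hex strings, not decimal integers. A minimal sketch of the parsing an agent needs:

```python
def parse_block_number(rpc_result: str) -> int:
    """eth_blockNumber returns a 0x-prefixed hex string per JSON-RPC."""
    return int(rpc_result, 16)

# 0xed14f1 is block 15537393 (the Merge block), shown here as a worked example.
print(parse_block_number("0xed14f1"))  # 15537393
```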

eth_call (Grade: B)

eth_call — Executes a new message call immediately without creating a transaction on the block chain.

Parameters (JSON Schema)
  to (required): query parameter: to (string)
  tag (required): query parameter: tag ("earliest" | "latest" | "pending")
  data (required): query parameter: data (string)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behavioral traits: executes 'immediately' and 'without creating a transaction.' Given no annotations, description carries burden of explaining this is a local simulation that doesn't persist state. Missing: whether it consumes gas, error handling (reverts), return format (hex bytes), and that it requires no authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no redundancy. Efficiently front-loads the core action. Slightly penalized because extreme brevity leaves gaps in behavioral and parameter documentation that could be addressed with one additional sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema, so description should indicate return values (returned data as hex string). Missing Ethereum-specific context (e.g., 'simulates a transaction locally using EVM'). Given 100% schema coverage and clear core purpose, it's minimally complete but has gaps for complex blockchain domain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% but descriptions are minimal/restating ('query parameter: to'). Tool description adds no semantic context for the three required params (e.g., that 'to' is the contract address, 'data' is ABI-encoded call data, 'tag' is block state). Baseline 3 appropriate since schema exists but adds little semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Executes') and resource ('message call') with specific scope ('without creating a transaction'). Distinguishes from state-changing siblings like eth_sendrawtransaction implicitly, though doesn't explicitly name alternatives or clarify this is specifically for smart contract interaction versus other read operations like eth_getcode.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit guidance by stating it runs 'without creating a transaction,' suggesting use for simulation or read-only operations. However, lacks explicit when-to-use guidance (e.g., 'use this to call contract view functions') and doesn't distinguish from sibling read methods or state-changing alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
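
The parameter semantics the review finds missing ('to' is the contract address, 'data' is ABI-encoded call data, 'tag' is block state) can be sketched as follows. The endpoint shape (module=proxy, action=eth_call) is an assumption about this server's Etherscan wiring; 0x18160ddd is the well-known selector for the ERC-20 totalSupply() view function, and the contract address is a placeholder:

```python
from urllib.parse import urlencode

def build_eth_call_url(to: str, data: str,
                       tag: str = "latest", api_key: str = "YOUR_KEY") -> str:
    """Assemble an eth_call query: a local simulation, never mined on-chain."""
    return "https://api.etherscan.io/api?" + urlencode({
        "module": "proxy",
        "action": "eth_call",
        "to": to,        # contract address to call
        "data": data,    # 4-byte selector + ABI-encoded arguments
        "tag": tag,      # block state: "latest", "earliest", or "pending"
        "apikey": api_key,
    })

# Placeholder address; only the URL shape is being demonstrated.
url = build_eth_call_url("0x0000000000000000000000000000000000000000", "0x18160ddd")
print(url)
```

Because eth_call only simulates, it is safe for an agent to issue freely when reading contract state.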

eth_estimategas (Grade: B)

eth_estimateGas — Makes a call or transaction, which won't be added to the blockchain and returns the used gas.

Parameters (JSON Schema)
  to (required): query parameter: to (string)
  gas (required): query parameter: gas (string)
  data (required): query parameter: data (string)
  value (required): query parameter: value (string)
  gasPrice (required): query parameter: gasPrice (string)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the non-destructive nature ('won't be added') and return value ('used gas'), but omits behavioral details like failure modes (revert handling), gas limit behavior, or why gas/gasPrice are required parameters for an estimation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with the key safety information front-loaded. Minor deduction for redundant tool name prefix 'eth_estimateGas —' which restates the function name already visible to the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a standard Ethereum RPC method given no output schema exists, but minimal. With no annotations and no output schema, the description could strengthen contextual completeness by noting this is a simulation that requires valid transaction parameters to estimate accurately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (albeit with generic 'query parameter: X' descriptions), establishing a baseline of 3. The description adds no additional parameter semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates gas by making a call/transaction simulation and explicitly notes it 'won't be added to the blockchain,' distinguishing it from commit-based siblings like eth_sendrawtransaction. However, 'Makes a call or transaction' is slightly ambiguous—it could clarify 'Simulates' to emphasize it's a dry-run estimation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'won't be added to the blockchain' provides implicit safety guidance distinguishing it from state-committing tools, but lacks explicit when-to-use guidance (e.g., 'Use before eth_sendrawtransaction to preview costs') or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
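
A sketch of handling the estimate itself. The result is a hex quantity of gas units (standard JSON-RPC encoding); padding the estimate is a common practice, not something the tool prescribes, since estimates can be off for state-dependent code:

```python
def parse_gas_estimate(rpc_result: str) -> int:
    """The estimate comes back as a 0x-prefixed hex quantity of gas units."""
    return int(rpc_result, 16)

def with_margin(gas: int, pct: int = 20) -> int:
    """Pad the estimate by pct percent before using it as a gas limit."""
    return gas + gas * pct // 100

# 0x5208 == 21000, the fixed cost of a plain ETH transfer.
print(with_margin(parse_gas_estimate("0x5208")))  # 25200
```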

eth_gasprice (Grade: A)

eth_gasPrice — Returns the current price per gas in wei.

Parameters (JSON Schema)
  No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It specifies the return unit (wei) but omits critical behavioral details: whether this queries the node directly, potential caching behavior, that it returns a hex string vs decimal, or network latency considerations. It minimally covers the output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at one sentence and seven words. Every word earns its place: the method prefix provides API context, and the definition immediately states the return value and unit. No fluff or redundant information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero inputs, single scalar output) and lack of output schema, the description adequately covers the essential return value (price in wei). However, it could benefit from noting this is an Ethereum JSON-RPC standard method or mentioning the typical hex string return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description appropriately does not invent parameters, though it could note that no inputs are required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Returns) and resource (current price per gas) with specific unit (wei). However, it does not explicitly distinguish from siblings like `eth_estimategas` (which estimates gas for specific transactions) or `get_gas_oracle` (which returns suggested fee rates), though 'current' implies real-time vs historical.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as `get_gas_oracle` (for recommended EIP-1559 fees) or `eth_estimategas` (for transaction-specific estimates). No prerequisites or contextual conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
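
The hex-string return format the review mentions, plus the wei-to-gwei conversion agents almost always need (1 gwei = 10^9 wei), as a minimal sketch:

```python
def wei_to_gwei(wei: int) -> float:
    """1 gwei = 1,000,000,000 wei."""
    return wei / 1_000_000_000

def parse_gas_price(rpc_result: str) -> float:
    """eth_gasPrice returns wei as a 0x-prefixed hex string; gwei is the
    human-friendly unit."""
    return wei_to_gwei(int(rpc_result, 16))

# 0x4a817c800 wei == 20,000,000,000 wei == 20 gwei.
print(parse_gas_price("0x4a817c800"))  # 20.0
```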

eth_getblockbynumber (Grade: C)

eth_getBlockByNumber — Returns information about a block by block number.

Parameters (JSON Schema)
  tag (required): query parameter: tag (string)
  boolean (required): query parameter: boolean (boolean)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It fails to disclose what the 'boolean' parameter controls (inclusion of full transaction objects vs. just hashes), what 'tag' values are valid (e.g., 'latest', 'pending', hex), error behavior for non-existent blocks, or that this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundancy. However, it is under-specified rather than appropriately concise—the brevity leaves critical gaps rather than eliminating waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should explain what fields are returned and how parameters affect output. It omits essential Ethereum RPC context: valid values for the 'tag' parameter and the significant behavioral difference the 'boolean' flag makes on return payload structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (baseline 3), though the schema descriptions are tautological ('query parameter: tag (string)'). The description adds no parameter semantics beyond the schema, missing the crucial context that 'tag' accepts block numbers or special strings and 'boolean' controls transaction detail inclusion.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action (returns information) and resource (block) with the lookup key (block number), but 'information' is vague. It fails to distinguish from siblings like eth_getblocktransactioncountbynumber (returns count) or eth_getunclebyblocknumberandindex (returns uncle blocks) that also return 'information about a block by number'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives like eth_getblocktransactioncountbynumber or get_block_number_by_timestamp. No mention of prerequisites or the critical distinction that this returns full block data versus partial data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
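
The two undocumented parameters the review keeps returning to follow the standard eth_getBlockByNumber semantics: 'tag' takes a hex block number or 'latest'/'earliest'/'pending', and 'boolean' switches between full transaction objects (true) and transaction hashes only (false). A sketch assuming the server's proxy wiring:

```python
def build_block_params(block_number: int, full_transactions: bool) -> dict:
    """Assemble query parameters for eth_getBlockByNumber.

    'tag' is a hex-encoded block number (special strings like 'latest' also
    work); 'boolean' controls whether full transaction objects or only their
    hashes are included in the response.
    """
    return {
        "module": "proxy",
        "action": "eth_getBlockByNumber",
        "tag": hex(block_number),
        "boolean": "true" if full_transactions else "false",
    }

params = build_block_params(15537393, full_transactions=False)
print(params["tag"])  # 0xed14f1
```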

eth_getblocktransactioncountbynumber (Grade: C)

eth_getBlockTransactionCountByNumber — Returns the number of transactions in a block.

Parameters (JSON Schema)
  tag (required): query parameter: tag (string)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description only states the return value conceptually ('number of transactions') without specifying the data format (hex string, integer), error behavior for non-existent blocks, or that this maps to the Ethereum JSON-RPC specification.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, though it redundantly prefixes with the exact method name (eth_getBlockTransactionCountByNumber) before stating the actual definition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter Ethereum query tool, the description is insufficient. It omits the critical detail that 'tag' represents the block number or block identifier, and fails to describe the return value format despite the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (1/1 parameters described), establishing a baseline of 3. The description adds no information about the 'tag' parameter, which in Ethereum contexts accepts block numbers or strings like 'latest'/'earliest'—critical semantics left unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the count of transactions in a block, which matches the tool name. However, it could better distinguish from sibling `eth_getblockbynumber` (which returns full block data including the count) and clarify that 'by number' refers to the block identification method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus retrieving the full block with `eth_getblockbynumber`, or what constitutes a valid block identifier. No mention of error conditions (e.g., invalid block numbers).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
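The tag-format gap the review flags can be closed on the client side. A minimal sketch, assuming a hypothetical helper (not part of the Etherscan API) that normalizes an integer block number or a keyword into the tag format Ethereum-style endpoints accept:

```python
# Hypothetical helper (not part of the Etherscan API): normalize a block
# identifier into the "tag" format Ethereum-style endpoints accept.
VALID_KEYWORDS = {"earliest", "latest", "pending"}

def to_block_tag(block):
    """Accept an int block number or a tag keyword; return a valid tag string."""
    if isinstance(block, int):
        if block < 0:
            raise ValueError("block number must be non-negative")
        return hex(block)  # e.g. 19000000 -> "0x121eac0"
    if block in VALID_KEYWORDS:
        return block
    raise ValueError(f"unsupported block tag: {block!r}")
```

Centralizing this conversion means an agent never has to guess whether the endpoint wants `19000000`, `"0x121eac0"`, or `"latest"`.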

eth_getcode (Grade C)

eth_getCode — Returns code at a given address.

Parameters (JSON Schema)
- tag ("earliest" | "latest" | "pending", required): query parameter: tag
- address (string, required): query parameter: address

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses only that it 'Returns code' without mentioning: read-only safety, what happens if the address is an EOA (externally owned account) versus a contract, return format (hex string), or rate limit considerations. Minimal behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the verb 'Returns'. Every word serves a purpose and there is no redundancy. However, extreme brevity leaves gaps in distinguishing from siblings and explaining behavioral nuances.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple two-parameter input schema and lack of output schema, the description covers the basic operation. However, given the presence of source-code-related siblings, it should clarify that this returns hexadecimal bytecode, not human-readable source code. Adequate but with a notable contextual gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with both 'address' and 'tag' parameters fully documented in the schema (including examples for address and enum-like values for tag). The description adds no supplemental param semantics, but none are needed given the comprehensive schema documentation. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the specific action (Returns code) and resource (given address) using standard Ethereum RPC terminology. However, it does not specify that 'code' refers to EVM bytecode/runtime bytecode, which would help distinguish it from sibling tools dealing with source code verification (e.g., get_contract_source_code_for_verified_contract_source_codes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives. Given siblings like eth_getStorageAt (for storage) and get_contract_source_code_for_verified_contract_source_codes (for human-readable code), the description should clarify this retrieves deployed bytecode at a specific block state (tag).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
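The EOA-versus-contract distinction the review asks for can be made explicit when interpreting the result. A sketch, assuming the standard convention that nodes return `"0x"` for accounts with no deployed code (the helper name is hypothetical):

```python
def classify_address(code_hex: str) -> str:
    """Interpret an eth_getCode result: an externally owned account (EOA)
    has no code, so nodes return "0x"; a deployed contract returns its
    runtime bytecode as a hex string (not human-readable source)."""
    body = code_hex.lower().removeprefix("0x")
    return "contract" if body else "eoa"
```

This also underlines the review's point that the result is bytecode; verified source requires a different, source-code-oriented tool.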

eth_getstorageat (Grade B)

eth_getStorageAt — Returns the value from a storage position at a given address. This endpoint is still experimental and may have potential issues

Parameters (JSON Schema)
- tag ("earliest" | "latest" | "pending", required): query parameter: tag
- address (string, required): query parameter: address
- position (string, required): query parameter: position

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description discloses the experimental status—critical operational context. However, it omits return format (hex string), error behavior for non-contract addresses, rate limit implications, or whether 'position' expects padded hex. Carries moderate burden but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with clear front-loading: function first, warning second. No wasted words. The em-dash is slightly unusual but doesn't impede readability. Could benefit from one sentence on return format without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists and no annotations cover read-only safety. The description confirms it's a read operation ('Returns') and warns about experimental status, but lacks return value specification, pagination info (not applicable here), or error case handling for the three required parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100%, the schema descriptions are minimal ('query parameter: position (string)'). The description adds crucial semantic context by specifying 'storage position' (implies contract storage slot), distinguishing it from transaction position or memory position. Also implies 'address' is an Ethereum contract address.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the value from a storage position at a given address'—a specific read operation on Ethereum contract storage slots. It distinguishes from siblings like eth_getcode (bytecode) and eth_call (execution) by specifying 'storage position', though it could explicitly mention this targets smart contract state.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Only provides an experimental status warning ('may have potential issues'). No guidance on when to prefer this over eth_call for reading state, no mention of block tag implications ('latest' vs 'earliest'), and no prerequisites (like address formatting requirements).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
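The review's open question about whether 'position' expects padded hex can be handled defensively on the caller's side. A sketch that formats a slot index as 0x-prefixed, 32-byte zero-padded hex, the width of an EVM storage word (the helper and the padding assumption are ours, not documented by this endpoint):

```python
def to_storage_position(slot: int) -> str:
    """Format an integer storage slot index as 0x-prefixed hex, zero-padded
    to 32 bytes (the width of an EVM storage word). Padding assumption is
    illustrative; confirm against the endpoint's accepted formats."""
    if slot < 0:
        raise ValueError("slot must be non-negative")
    return "0x" + format(slot, "064x")
```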

eth_gettransactionbyblocknumberandindex (Grade C)

eth_getTransactionByBlockNumberAndIndex — Returns information about a transaction by block number and transaction index position.

Parameters (JSON Schema)
- tag (string, required): query parameter: tag
- index (string, required): query parameter: index

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only states it 'Returns information' without detailing what specific transaction fields are returned, whether the operation is read-only (safe), or what happens when the block/index is invalid (null result vs error).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with no redundant explanations. However, it leads with the method name (which is already the tool name), making the first clause somewhat repetitive rather than front-loading the action description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is insufficient. It should describe the return structure (transaction object with hash, value, gas, etc.) and mention error conditions (e.g., invalid block number or index out of range returns null).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, setting a baseline of 3. While the description does not add semantic clarification beyond the schema, the schema technically contains descriptions for both parameters. The description missed an opportunity to clarify that 'tag' accepts block numbers or keywords like 'latest', and 'index' is the transaction's position in the block.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns transaction information using the specific lookup method (block number + index). However, it does not explicitly distinguish from similar siblings like `eth_gettransactionbyhash` (lookup by hash) or `eth_getunclebyblocknumberandindex` (lookup for uncle blocks), which could cause selection confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus retrieving transactions by hash (`eth_gettransactionbyhash`) or retrieving receipts (`eth_gettransactionreceipt`). No mention of valid block tag formats (hex, 'latest', 'pending') for the tag parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_gettransactionbyhash (Grade C)

eth_getTransactionByHash — Returns the information about a transaction requested by transaction hash.

Parameters (JSON Schema)
- txhash (string, required): query parameter: txhash

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It fails to indicate this is read-only, does not describe the return structure (full transaction object with fields like nonce, gasPrice, value), and omits error behavior (e.g., null return for unknown hash).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence format with method name prefix is standard for Ethereum RPC documentation. No redundant words or phrases; every element conveys specific scope (transaction object, not receipt, identified by hash).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet description fails to specify what 'information' includes (e.g., sender, recipient, value, gas, signature). Critical gap given sibling `eth_gettransactionreceipt` returns different data for the same input, and users need to know which tool returns the actual transaction payload versus execution results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with adequate parameter documentation. Description adds minimal semantic value beyond the schema's 'query parameter: txhash' text, merely restating that the hash is used to request the transaction. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns transaction information and specifies lookup by hash, distinguishing it from block-index lookups. However, it fails to differentiate from sibling tool `eth_gettransactionreceipt` which also takes a transaction hash but returns different data (receipt vs transaction object).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus `eth_gettransactionreceipt` or other transaction-fetching siblings. No mention of prerequisites like needing the transaction hash from a previous call, or what to do if the transaction is pending.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
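The payload-versus-receipt ambiguity described above can be made concrete with a routing table. A sketch (the field sets and helper are illustrative, not an official mapping; fields shared by both objects, like blockNumber, are deliberately excluded):

```python
# Illustrative mapping, not an official one: fields unique to the
# transaction object vs. fields unique to the receipt.
PAYLOAD_FIELDS = {"nonce", "value", "input", "gas", "gasPrice"}
RECEIPT_FIELDS = {"status", "gasUsed", "cumulativeGasUsed", "logs", "contractAddress"}

def method_for_field(field: str) -> str:
    """Route a lookup by which field the caller needs: the submitted
    payload comes from eth_getTransactionByHash, execution results
    come from eth_getTransactionReceipt."""
    if field in PAYLOAD_FIELDS:
        return "eth_getTransactionByHash"
    if field in RECEIPT_FIELDS:
        return "eth_getTransactionReceipt"
    raise KeyError(f"field present in both objects or unknown: {field}")
```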

eth_gettransactioncount (Grade C)

eth_getTransactionCount — Returns the number of transactions performed by an address.

Parameters (JSON Schema)
- tag ("earliest" | "latest" | "pending", required): query parameter: tag
- address (string, required): query parameter: address

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full disclosure burden but provides minimal behavioral context. It does not explain that this returns the nonce value (critical for transaction signing), what 'earliest'/'latest'/'pending' mean for an account's historical state, or whether failed transactions are included in the count.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately brief, but wastes leading space repeating the method name 'eth_getTransactionCount —' which duplicates the tool name field. The structure is front-loaded but the em-dash formatting is unnecessary noise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an Ethereum RPC method, the description omits critical context: that this returns the transaction nonce (next sequence index), that it only counts outgoing transactions, and how the tag parameter affects historical state queries. Without output schema, the return type/value format is also undocumented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions including examples for address and enum values for tag. The description adds no additional parameter semantics beyond keyword repetition ('address'), meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic operation (returns transaction count for an address) but fails to distinguish from sibling `eth_getblocktransactioncountbynumber` (block-level vs address-level) or clarify whether 'performed' means sent, received, or both. In Ethereum context, this ambiguity is significant as the method actually returns the nonce (outgoing count only).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus `get_a_list_of_normal_transactions_by_address` (which retrieves full transaction data) or `eth_getblocktransactioncountbynumber`. No mention of prerequisites like address format or when 'pending' tag is appropriate versus 'latest'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
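The nonce semantics the review keeps returning to are worth one line of code. A minimal sketch (hypothetical helper) converting the hex result into the value a signer actually needs:

```python
def next_nonce(tx_count_hex: str) -> int:
    """Parse the hex count returned by eth_getTransactionCount. The count
    of transactions *sent* by the account is exactly the nonce to use for
    its next transaction."""
    return int(tx_count_hex, 16)
```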

eth_gettransactionreceipt (Grade C)

eth_getTransactionReceipt — Returns the receipt of a transaction by transaction hash.

Parameters (JSON Schema)
- txhash (string, required): query parameter: txhash

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to mention that receipts only exist after block inclusion, that the method returns null for pending/unmined transactions, or what data the receipt contains (gasUsed, logs, status code). No error conditions or rate limiting context is provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy or filler. However, the leading method name 'eth_getTransactionReceipt' before the em-dash consumes space without adding descriptive value, and the extreme brevity sacrifices necessary context for an Ethereum RPC method.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without output schema or annotations, the description should explain the return value structure (receipt object fields) and lifecycle behavior (null for pending). For a blockchain query tool with nuanced state-dependent behavior, this is materially incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the single txhash parameter, including a valid example format. The description adds no additional semantic context about the parameter (e.g., that it must be a 66-character hex string with 0x prefix), but baseline 3 is appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic function (returns receipt) and lookup method (by transaction hash), but is essentially a restatement of the tool name with minimal elaboration. It fails to distinguish from sibling tools like eth_gettransactionbyhash (which returns the transaction object, not the receipt) or clarify what distinguishes a 'receipt' from a 'transaction' in Ethereum semantics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like eth_gettransactionbyhash or check_transaction_receipt_status. No mention of prerequisites (e.g., transaction must be mined) or that it returns null for pending transactions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
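The null-for-pending lifecycle the review describes usually implies client-side polling. A sketch assuming a caller-supplied `fetch_receipt(txhash)` callable that returns a receipt dict, or `None` while the transaction is unmined (all names here are hypothetical):

```python
import time

def wait_for_receipt(fetch_receipt, txhash, timeout=60.0, interval=2.0,
                     sleep=time.sleep):
    """Poll until a receipt exists. fetch_receipt(txhash) is any callable
    returning the receipt dict, or None while the transaction is pending.
    sleep is injectable so the loop can be tested without real delays."""
    deadline = time.monotonic() + timeout
    while True:
        receipt = fetch_receipt(txhash)
        if receipt is not None:
            return receipt
        if time.monotonic() >= deadline:
            raise TimeoutError(f"no receipt for {txhash} after {timeout}s")
        sleep(interval)
```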

eth_getunclebyblocknumberandindex (Grade C)

eth_getUncleByBlockNumberAndIndex — Returns information about a uncle by block number.

Parameters (JSON Schema)
- tag (string, required): query parameter: tag
- index (string, required): query parameter: index

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but discloses almost nothing about behavior. It doesn't specify the return format, what fields are included in 'information about a uncle', error conditions (e.g., invalid block), or whether this requires specific node permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is not verbose, though the leading em-dash and the method-name repetition (already in the tool name) are slightly redundant. The critical missing information is due to under-specification rather than structural bloat.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a blockchain tool with no output schema. The description fails to explain Ethereum-specific terminology ('uncle'), doesn't describe return values, and doesn't clarify parameter semantics despite the complex domain involving block numbers and indices.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (per context signals), establishing a baseline of 3. While the description mentions 'block number' (hinting at the 'tag' parameter's purpose), it doesn't explain Ethereum block tags ('latest', 'pending', etc.) or clarify that 'index' refers to the uncle's index position within the block.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the resource ('uncle') and action ('Returns information'), but creates confusion by stating 'by block number' while the actual parameters are 'tag' and 'index'. It fails to explain that 'tag' represents the block number or that 'index' is the uncle's position within the block.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like eth_getblockbynumber or eth_gettransactionbyblocknumberandindex. No mention of Ethereum-specific contexts where uncles (orphaned blocks) are relevant.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_sendrawtransaction (Grade A)

eth_sendRawTransaction — Submits a pre-signed transaction for broadcast to the Ethereum network.

Parameters (JSON Schema)
- hex (string, required): query parameter: hex

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While it mentions 'broadcast', it fails to disclose critical behavioral traits: that this is an irreversible, destructive operation consuming gas/ETH, that it submits to the mempool (not guaranteeing finality), or what the return value is (transaction hash). For a financial transaction tool, this is a significant safety gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, efficient sentence with no redundancy. The leading method name 'eth_sendRawTransaction' serves as standard Ethereum JSON-RPC identification (not a tautology, since the tool name is lowercased), immediately followed by the action verb. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (irreversible blockchain transactions) and zero annotations/output schema, the description is minimally viable. It identifies the standard Ethereum method adequately but lacks safety warnings (gas costs, irreversibility) or return value documentation that would be necessary for an AI agent to use this tool responsibly without external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by contextualizing the 'hex' parameter as containing a 'pre-signed transaction', giving semantic meaning beyond the schema's generic 'query parameter: hex (string)' label. It could further clarify the expected format (RLP-encoded signed transaction), but the core meaning is conveyed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Submits') and resource ('pre-signed transaction') with clear scope ('Ethereum network'). It distinguishes from read-only siblings (like eth_call) by emphasizing 'broadcast' and 'submits', and specifies 'pre-signed' to indicate it requires a signed transaction blob rather than constructing one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a prerequisite by specifying 'pre-signed transaction', hinting that signing must occur externally first. However, it lacks explicit guidance on when NOT to use this (e.g., for read-only operations) or alternatives for unsigned transactions, though no signing tools exist in the sibling list to contrast against.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
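The external-signing prerequisite the review highlights is visible in the request shape itself. A sketch of the standard JSON-RPC 2.0 envelope for this method (the helper name is hypothetical; the hex argument must already be a signed, RLP-encoded transaction, and broadcasting it is irreversible):

```python
import json

def raw_tx_payload(signed_hex: str, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 envelope for eth_sendRawTransaction.
    signed_hex must be a fully signed, RLP-encoded transaction; the node
    broadcasts it irreversibly and returns the transaction hash."""
    if not signed_hex.startswith("0x"):
        raise ValueError("signed transaction must be 0x-prefixed hex")
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_sendRawTransaction",
        "params": [signed_hex],
        "id": request_id,
    })
```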

get_address_erc20_token_holding (Grade A)

Get Address ERC20 Token Holding — Returns the ERC-20 tokens and amount held by an address. Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier.

Parameters (JSON Schema)
- page (number, optional): query parameter: page
- offset (number, optional): query parameter: offset
- address (string, required): query parameter: address

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the rate limit constraint. It could additionally clarify that this is a read-only operation with no side effects, but the rate limiting detail is valuable behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with minimal waste. The first clause 'Get Address ERC20 Token Holding' slightly echoes the tool name, but the em-dash transition to the return value keeps it efficient. The rate limit note is appropriately placed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a data retrieval tool with no output schema, the description adequately covers the return value concept (tokens and amounts). However, given the pagination parameters exist, mentioning that results are paginated would improve completeness. No critical gaps for a simple read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description does not add semantic meaning beyond the schema (e.g., it doesn't explain that page/offset are for pagination, or address format requirements), but the schema's coverage means the description isn't required to compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool returns: 'ERC-20 tokens and amount held by an address'. This distinguishes it from siblings like get_address_erc721_token_holding (different token standard) and get_a_list_of_erc20_token_transfer_events_by_address (transaction history vs. current holdings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes rate limiting guidance ('throttled to 2 calls/second'), which affects usage patterns. However, it lacks explicit guidance on when to use this versus alternatives like get_erc20_token_account_balance_for_tokencontractaddress (which queries a specific token contract) or pagination guidance despite having page/offset parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_address_erc721_token_holding (B)

Get Address ERC721 Token Holding — Returns the ERC-721 tokens and amount held by an address. Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier.

Parameters (JSON Schema)
Name     Required  Description
page     No        query parameter: page (number)
offset   No        query parameter: offset (number)
address  Yes       query parameter: address (string)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries full disclosure burden. It successfully adds the rate limiting note ('throttled to 2 calls/second'), which is critical behavioral context. However, it lacks other essential behavioral details: pagination mechanics (total count availability, max offset limits), response structure, or error handling for invalid addresses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
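The 2 calls/second throttle discussed above is straightforward to respect with a small client-side limiter. A minimal sketch, assuming nothing about the server beyond the stated rate; the `RateLimiter` class and the injectable clock are illustrative, not part of the MCP server:

```python
class RateLimiter:
    """Spaces out calls so at most `rate` happen per second."""

    def __init__(self, rate, clock):
        self.min_interval = 1.0 / rate
        self.clock = clock  # injectable for testing; use time.monotonic in practice
        self.next_allowed = clock()

    def wait_time(self):
        """Seconds the caller should sleep before issuing the next request."""
        now = self.clock()
        delay = max(0.0, self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.min_interval
        return delay
```

With `rate=2`, consecutive calls at the same instant are told to wait 0s, 0.5s, 1.0s, and so on, which keeps a burst of requests under the documented limit.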

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. First sentence establishes purpose immediately; second sentence provides critical operational constraint (throttling). No filler text or redundant explanations. Front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter input structure (flat, no nesting) and lack of output schema, the description adequately explains the high-level return value ('ERC-721 tokens and amount'). However, it should ideally describe the return data structure (e.g., list of contract addresses with token IDs) since no output schema exists to document this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description doesn't add semantic clarifications beyond the schema (e.g., explaining that 'offset' controls results per page, or that 'page' starts at 1). The schema's mechanical descriptions ('query parameter: page (number)') suffice but aren't enriched by the description text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
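The page/offset semantics the schema leaves implicit can be pinned down in a few lines. A sketch under the assumption (drawn from the public Etherscan API docs, not from this tool's schema) that `page` is 1-based and `offset` is the page size:

```python
def record_window(page=1, offset=100):
    """Return the (start, end) record indices a (page, offset) pair covers,
    0-based and end-exclusive. Assumes Etherscan-style pagination: `page`
    starts at 1 and `offset` is the number of results per page."""
    if page < 1 or offset < 1:
        raise ValueError("page and offset must be >= 1")
    start = (page - 1) * offset
    return start, start + offset
```

So `page=3, offset=50` covers records 100 through 149, which is the mapping an agent needs to walk a result set without gaps or overlaps.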

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns the ERC-721 tokens and amount held by an address' with a specific verb and resource type. However, it doesn't explicitly distinguish from sibling tool 'get_address_erc721_token_inventory_by_contract_address' or clarify whether this queries across all token contracts or requires specific contract input.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives like 'get_address_erc721_token_inventory_by_contract_address' or 'get_a_list_of_erc721_token_transfer_events_by_address'. No prerequisites (e.g., address format requirements) or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
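The address format prerequisite mentioned above is cheap to check before spending a call. A minimal pre-flight sketch; the helper name is illustrative, and the check deliberately skips EIP-55 checksum validation:

```python
import re

# Ethereum addresses are '0x' followed by exactly 40 hex characters.
_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_plausible_address(value):
    """Cheap syntactic check for an Ethereum address. Does not verify
    the EIP-55 checksum, only the '0x' + 40-hex-chars shape."""
    return bool(_ADDRESS_RE.match(value))
```

Rejecting malformed input locally avoids burning a throttled API call on a request that can only fail.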

get_address_erc721_token_inventory_by_contract_address (A)

Get Address ERC721 Token Inventory By Contract Address — Returns the ERC-721 token inventory of an address, filtered by contract address. 📝 Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier.

Parameters (JSON Schema)
Name             Required  Description
page             No        query parameter: page (number)
offset           No        query parameter: offset (number)
address          Yes       query parameter: address (string)
contractaddress  Yes       query parameter: contractaddress (string)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral disclosure burden. It successfully notes the rate limiting constraint ('throttled to 2 calls/second regardless of API Pro tier'), but fails to disclose other critical behavioral traits such as whether the operation is read-only, idempotent, or what error conditions to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized at two sentences. The first clause redundantly restates the tool name in title case before the em-dash, which slightly wastes space, but the second sentence delivers the functional definition efficiently with the throttling note appended clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should ideally describe what the 'inventory' return value contains (e.g., token IDs, contract metadata, balances). Although pagination parameters exist in the schema, the description does not explain the pagination contract or return structure, leaving a gap in contextual completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% description coverage (baseline 3), the description adds valuable semantic context by clarifying the relationship between 'address' and 'contractaddress' parameters — specifically that the latter acts as a filter on the former's inventory. This helps the agent understand the query intent beyond raw parameter types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the ERC-721 token inventory of an address, filtered by contract address' — a specific verb (returns), resource (ERC-721 token inventory), and scope (filtered by contract address). This effectively distinguishes it from siblings like 'get_address_erc721_token_holding' which presumably lacks contract filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the phrase 'filtered by contract address,' suggesting when to use this (querying specific NFT collections) versus general holding queries. However, it lacks explicit guidance contrasting with 'get_address_erc721_token_holding' or other sibling transfer/event tools regarding when each is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_a_list_of_erc1155_token_transfer_events_by_address (B)

Get a list of 'ERC1155 - Token Transfer Events' by Address — Returns the list of ERC-1155 ( Multi Token Standard ) tokens transferred by an address, with optional filtering by token contract.

Parameters (JSON Schema)
Name             Required  Description
page             No        query parameter: page (number)
sort             No        query parameter: sort ("asc" | "desc")
offset           No        query parameter: offset (number)
address          Yes       query parameter: address (string)
endblock         Yes       query parameter: endblock (number)
startblock       Yes       query parameter: startblock (number)
contractaddress  Yes       query parameter: contractaddress (string)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only discloses that filtering by token contract is optional. It fails to mention pagination behavior (despite page/offset parameters), whether results include both incoming and outgoing transfers, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
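The incoming-versus-outgoing ambiguity noted above is something a client can resolve after the fact. A sketch that partitions events by direction; the `from`/`to` field names are an assumption about the response shape (this tool documents no output schema), and the comparison is case-insensitive because checksummed and lowercase hex spellings of the same address must match:

```python
def split_by_direction(events, address):
    """Partition transfer events into (incoming, outgoing) relative to
    `address`. Assumes each event dict carries 'from' and 'to' hex
    address fields; compares case-insensitively."""
    addr = address.lower()
    incoming = [e for e in events if e.get("to", "").lower() == addr]
    outgoing = [e for e in events if e.get("from", "").lower() == addr]
    return incoming, outgoing
```

A self-transfer would appear in both lists, which is usually the desired accounting.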

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence but suffers from redundancy ('Get... Returns'). The quoted title 'ERC1155 - Token Transfer Events' wastes space without adding meaning. It is front-loaded with the action but could be more efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 7 parameters, no output schema, and no annotations, the description inadequately describes the return value (only mentioning 'list of... tokens transferred' without specifying fields like transaction hash, token ID, value, or block timestamp). For a blockchain data retrieval tool of this complexity, the behavioral contract is under-specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, merely noting that contractaddress provides 'optional filtering'—information already evident in the parameter descriptions. No additional context provided for block range semantics or pagination patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
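The block-range and sort semantics left undocumented above follow a well-known pattern in the public Etherscan REST API. A sketch of the request this tool plausibly wraps; the `account`/`token1155tx` module and action names come from Etherscan's public docs and are an assumption about this MCP server's backend:

```python
from urllib.parse import urlencode

def build_token_tx_query(address, contractaddress, startblock, endblock,
                         page=1, offset=100, sort="asc", api_key="YOUR_KEY"):
    """Build an Etherscan-style query URL for ERC-1155 transfer events.
    startblock/endblock bound the scan inclusively; sort orders results
    by block number ascending or descending."""
    if sort not in ("asc", "desc"):
        raise ValueError("sort must be 'asc' or 'desc'")
    params = {
        "module": "account",
        "action": "token1155tx",
        "address": address,
        "contractaddress": contractaddress,
        "startblock": startblock,
        "endblock": endblock,
        "page": page,
        "offset": offset,
        "sort": sort,
        "apikey": api_key,
    }
    return "https://api.etherscan.io/api?" + urlencode(params)
```

Validating `sort` locally catches the one enum-typed parameter before the request leaves the client.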

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Get', 'Returns') and clearly identifies the resource (ERC-1155 Token Transfer Events) and entity (address). It explicitly names the 'Multi Token Standard' which distinguishes this tool from siblings like get_a_list_of_erc20_token_transfer_events_by_address and get_a_list_of_erc721_token_transfer_events_by_address.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies the specific token standard (ERC-1155) which implicitly guides selection over ERC20/721 siblings, but provides no explicit when-to-use guidance, prerequisites (e.g., address format), or contrasts with alternatives like normal transactions or internal transactions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_a_list_of_erc20_token_transfer_events_by_address (C)

Get a list of 'ERC20 - Token Transfer Events' by Address — Returns the list of ERC-20 tokens transferred by an address, with optional filtering by token contract.

Parameters (JSON Schema)
Name             Required  Description
page             No        query parameter: page (number)
sort             No        query parameter: sort ("asc" | "desc")
offset           No        query parameter: offset (number)
address          Yes       query parameter: address (string)
endblock         Yes       query parameter: endblock (number)
startblock       Yes       query parameter: startblock (number)
contractaddress  Yes       query parameter: contractaddress (string)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but discloses minimal behavioral traits. It mentions the optional nature of contract filtering but omits critical operational details: pagination behavior (despite page/offset params), whether results include both incoming and outgoing transfers, read-only safety, or block range constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-clause structure ('Get... — Returns...') is appropriately concise and front-loaded with the action. Minor redundancy exists between 'Get' and 'Returns,' but no extraneous information is present. Suitable length for the complexity level.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no output schema, and no annotations, the description is insufficient. It fails to explain the pagination pattern (page vs offset), the meaning of block ranges (startblock/endblock), or what fields are returned in the transfer events. Critical gaps remain for proper invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (mechanical descriptions present), establishing a baseline of 3. The description adds that contractaddress represents 'optional filtering,' though this contradicts the schema where contractaddress is listed as required. It clarifies that 'address' refers to the target address for transfer lookup.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (ERC-20 token transfer events) and action (Get/Returns list). It effectively distinguishes from siblings by specifying 'ERC20' and mentioning the contract filtering capability, differentiating it from ERC721/ERC1155 transfer tools and normal transaction lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it mentions 'optional filtering by token contract' implying when to use that parameter, it fails to explicitly contrast with the numerous sibling tools (e.g., get_a_list_of_erc721_token_transfer_events_by_address, get_a_list_of_normal_transactions_by_address) to guide selection. No when-not-to-use or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_a_list_of_erc721_token_transfer_events_by_address (A)

Get a list of 'ERC721 - Token Transfer Events' by Address — Returns the list of ERC-721 ( NFT ) tokens transferred by an address, with optional filtering by token contract.

Parameters (JSON Schema)
Name             Required  Description
page             No        query parameter: page (number)
sort             No        query parameter: sort ("asc" | "desc")
offset           No        query parameter: offset (number)
address          Yes       query parameter: address (string)
endblock         Yes       query parameter: endblock (number)
startblock       Yes       query parameter: startblock (number)
contractaddress  Yes       query parameter: contractaddress (string)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the optional filtering capability but omits critical details like pagination limits, whether results include both incoming and outgoing transfers, rate limiting, or data freshness. The read-only nature is conveyed only implicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with two clauses separated by an em-dash. It front-loads the action ('Get') and resource. The quoted category name 'ERC721 - Token Transfer Events' is slightly redundant but does not significantly detract from the efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, 4 required fields, and no output schema or annotations, the description meets minimum viability by identifying the resource type and optional filtering. However, it lacks mention of what data points are returned (e.g., token IDs, transaction hashes, timestamps) or pagination behavior, which would be valuable for a list-returning tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds marginal value by clarifying that the contractaddress parameter provides 'optional filtering,' which hints at the query behavior, but does not elaborate on the pagination parameters (page, offset) or block range semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves 'ERC721 - Token Transfer Events' and explicitly mentions 'ERC-721 (NFT) tokens', distinguishing it from the sibling ERC20 and ERC1155 transfer event tools. It specifies the resource (token transfers) and the entity (address).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it mentions 'optional filtering by token contract' (guiding the use of the contractaddress parameter), it fails to distinguish when to use this versus similar siblings like 'get_address_erc721_token_holding' or 'get_address_erc721_token_inventory_by_contract_address'. No explicit when-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_a_list_of_internal_transactions_by_address (A)

Get a list of 'Internal' Transactions by Address — Returns the list of internal transactions performed by an address, with optional pagination. 📝 Note : This API endpoint returns a maximum of 10000 records only.

Parameters (JSON Schema)
Name        Required  Description
page        No        query parameter: page (number)
sort        No        query parameter: sort ("asc" | "desc")
offset      No        query parameter: offset (number)
address     Yes       query parameter: address (string)
endblock    Yes       query parameter: endblock (number)
startblock  Yes       query parameter: startblock (number)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical behavioral traits: optional pagination support and the hard 10,000 record maximum. However, it lacks information on rate limits, authentication requirements, or what constitutes an 'internal' transaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description contains redundancy between the first clause ('Get a list...') and second clause ('Returns the list...'). While the structure separates the main function from the limitation note, the opening tautology wastes space that could clarify usage or parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 6-parameter blockchain query tool with no output schema, the description adequately covers the primary limitation (10k record cap) but leaves gaps regarding the distinction between internal and normal transactions, expected response format, and guidance for traversing large result sets.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description mentions 'by an address' and implies block range filtering, but does not add syntax details or semantic meaning beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'/'Returns') and resource ('Internal Transactions'), and explicitly specifies 'by Address' which distinguishes it from sibling tools like get_internal_transactions_by_block_range and get_a_list_of_normal_transactions_by_address.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the pagination note and 10000-record limit, but does not explicitly state when to use this tool versus alternatives like get_internal_transactions_by_transaction_hash or provide guidance on handling the record limit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
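The guidance missing above, how to handle the 10,000-record cap, is usually solved by windowing the block range: when a query returns exactly the cap, re-query from just past the last block seen. A sketch under assumed conditions: `fetch_page` is a hypothetical callable standing in for the tool, returning ascending-ordered records with a `blockNumber` string field:

```python
def fetch_all(fetch_page, startblock, endblock, cap=10_000):
    """Collect all records across a block range despite a per-query
    record cap. `fetch_page(startblock, endblock)` is assumed to return
    at most `cap` records, ascending by block number."""
    results = []
    lo = startblock
    while lo <= endblock:
        batch = fetch_page(lo, endblock)
        results.extend(batch)
        if len(batch) < cap:
            break  # the window fit in one query; done
        # Cap hit: resume after the last block seen. A production client
        # should re-fetch that boundary block and de-duplicate by hash,
        # since it may have been cut off mid-block; omitted for brevity.
        lo = int(batch[-1]["blockNumber"]) + 1
    return results
```

Advancing past the boundary block keeps the loop terminating even when every window is full.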

get_a_list_of_normal_transactions_by_address (A)

Get a list of 'Normal' Transactions By Address — Returns the list of transactions performed by an address, with optional pagination. ​​ ​ 📝 Note : This API endpoint returns a maximum of 10000 records only.

Parameters (JSON Schema)
Name        Required  Description
page        No        query parameter: page (number)
sort        No        query parameter: sort ("asc" | "desc")
offset      No        query parameter: offset (number)
address     Yes       query parameter: address (string)
endblock    Yes       query parameter: endblock (number)
startblock  Yes       query parameter: startblock (number)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It adds a critical behavioral constraint ('maximum of 10000 records only') and mentions pagination behavior. However, it lacks safety information (read-only status), rate limits, and the error-handling details expected for unannotated tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Appropriately front-loaded, with the purpose followed by a behavioral note. The two-sentence-plus-note format is efficient. Minor deduction for the emoji and stray zero-width spacing artifacts, which add no value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description only states 'Returns the list of transactions' without detailing transaction fields or structure. Adequate for basic invocation, but insufficient for understanding how to use the return value given the complexity (6 parameters, blockchain context).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds semantic grouping by mentioning 'optional pagination', which contextualizes the page/offset/sort parameters, but does not elaborate on address format, block-number semantics, or sort values beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get', the resource 'Normal Transactions', and the scope 'By Address'. The quotes around 'Normal' effectively distinguish this from sibling tools like get_a_list_of_internal_transactions_by_address and the token transfer event tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the 'Normal' distinction and mentions optional pagination, but lacks explicit 'when to use' guidance or named alternatives. It does not clarify when to prefer this over get_a_list_of_internal_transactions_by_address or other transaction-listing tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_beacon_chain_withdrawals_by_address_and_block_range (C)

Get Beacon Chain Withdrawals by Address and Block Range — Returns the beacon chain withdrawals made to an address.

Parameters (JSON Schema)
Name             Required  Description
page             No        query parameter: page (number)
sort             No        query parameter: sort ("asc" | "desc")
offset           No        query parameter: offset (number)
address          Yes       query parameter: address (string)
endblock         Yes       query parameter: endblock (number)
startblock       Yes       query parameter: startblock (number)
contractaddress  Yes       query parameter: contractaddress (string)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It does not mention pagination behavior (despite page/offset parameters), rate limits, maximum block ranges, or the data structure of returned withdrawals. The only behavioral trait disclosed is the read-only nature implied by 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. However, it prioritizes brevity over comprehensiveness, leaving significant gaps in parameter explanation and behavioral context that could have been addressed with additional structured sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 7 parameters (including pagination), no output schema, and no annotations, a one-sentence description is insufficient. The description fails to explain the required 'contractaddress' parameter's role in beacon chain withdrawals, pagination mechanics, or return value structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage, the descriptions are boilerplate ('query parameter: X'). The description adds domain context by identifying this as 'beacon chain' data, but does not explain the semantic relationship between 'address' and 'contractaddress' (both required), nor does it clarify pagination logic or sort options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves beacon chain withdrawals using specific filters (address and block range), and the 'beacon chain' specificity distinguishes it from sibling transaction tools. However, it omits the unusual 'contractaddress' requirement present in the schema, which creates a minor gap in clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_a_list_of_normal_transactions_by_address' or 'get_internal_transactions_by_block_range'. It does not mention prerequisites, data availability limits, or appropriate use cases for beacon chain data specifically.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_block_and_uncle_rewards_by_blockno (C)

Get Block And Uncle Rewards by BlockNo — Returns the block reward and 'Uncle' block rewards.

Parameters (JSON Schema)
Name     Required  Description
blockno  Yes       query parameter: blockno (number)
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full behavioral disclosure burden. It states what data is returned but omits critical behavioral context: whether this is read-only (presumed), response format/structure, rate limits, or an explanation of what 'Uncle' blocks represent in the Ethereum context beyond putting the term in quotes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief (single sentence with em-dash). While the first clause largely restates the function name and the second repeats 'block rewards,' it efficiently conveys the essential purpose without extraneous text. Every word serves a purpose, even if slightly redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool without an output schema, the description minimally covers the return value concept (block and uncle rewards). However, given the domain complexity (Ethereum mining rewards, uncle block mechanisms), it lacks explanation of return value structure, units (wei/ether), or whether multiple uncle rewards are returned as an array.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The schema describes 'blockno' as a number. The description does not add semantic context such as valid ranges, formatting (decimal vs. hex), or the concept of block numbers in the blockchain. It relies entirely on the schema's tautological description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action (Get/Returns) and resource (Block and Uncle Rewards) with the specific identifier (BlockNo). It distinguishes from daily aggregate siblings by specifying 'by BlockNo' and signals this is about rewards rather than general block data. However, it lacks explicit differentiation from similar reward-related tools like 'get_daily_uncle_block_count_and_rewards'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as 'get_daily_block_rewards' (for historical aggregates) or 'eth_getblockbynumber' (for general block data). No prerequisites or constraints mentioned (e.g., valid block number ranges).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_block_number_by_timestamp (Grade: C)

Get Block Number by Timestamp — Returns the block number that was mined at a certain timestamp.

Parameters (JSON Schema)
closest (required): query parameter: closest ("before" | "after")
timestamp (required): query parameter: timestamp (number)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. It fails to clarify that this performs approximate matching (using the 'closest' parameter) rather than exact timestamp matching, which the phrase 'mined at a certain timestamp' misleadingly implies. It also does not state this is a read-only query operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with an em-dash separating the title clause from the definition. While the first clause repeats the tool name, the second sentence delivers the core purpose without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only two parameters with complete schema documentation and no output schema, the description is minimally adequate. However, it should explain the approximate matching behavior (nearest block lookup) given that blockchain timestamps are not continuous.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no additional semantics beyond the schema regarding why one would choose 'before' versus 'after' for the closest parameter, or that this parameter enables nearest-block lookup when exact timestamps don't exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns the block number that was mined at a certain timestamp,' specifying the verb (returns) and resource (block number). However, it does not distinguish from sibling `eth_getblockbynumber`, which is the obvious alternative for block retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus `eth_getblockbynumber` or other block retrieval tools. The critical semantic difference—converting timestamps to block numbers versus retrieving by block number—is not explained.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_contract_abi_for_verified_contract_source_codes (Grade: C)

Get Contract ABI for Verified Contract Source Codes — Returns the Contract Application Binary Interface ( ABI ) of a verified smart contract. Find verified contracts ✅on our Verified Contracts Source Code page.

Parameters (JSON Schema)
address (required): query parameter: address (string)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the 'verified' constraint, which is crucial, but omits whether the operation is read-only, what happens if the address is not a contract or unverified, rate limits, or the return format (JSON ABI vs string).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and reasonably compact. However, the emoji (✅) is unprofessional and unnecessary, and the first clause before the em-dash redundantly restates the tool name rather than adding value. Every sentence earns its place, but the emoji subtracts from professionalism.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with full schema coverage, the description is minimally adequate. However, it lacks error handling context (what happens if the contract is unverified or not found?) and does not describe the output format despite having no output schema. The 'verified' constraint is the only completeness factor addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the 'address' parameter has a description in the schema), establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., that the address must be a contract address, not an EOA, or formatting requirements), but the schema adequately covers the single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns the Contract Application Binary Interface (ABI) of a verified smart contract,' providing specific verb and resource. It distinguishes from the sibling 'get_contract_source_code_for_verified_contract_source_codes' by specifying 'ABI' vs source code, and limits scope to 'verified' contracts. However, the leading clause before the em-dash merely restates the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance—only noting that verified contracts can be found on a specific page. It fails to specify when to use this tool versus siblings like 'get_contract_source_code_for_verified_contract_source_codes' or 'eth_getcode', or what prerequisites exist (e.g., the contract must be verified on the platform).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
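The undocumented failure mode discussed above (what happens for an unverified contract) is the piece an agent most needs. A defensive-parsing sketch, assuming the common Etherscan envelope where an error sets `status` to "0" and puts the message in `result`; the response dicts below are hand-written samples, not captured from a live call:

```python
import json

# Parse an ABI response envelope, distinguishing success from the
# unverified-contract error case.
def parse_abi_response(resp: dict):
    if resp.get("status") != "1":
        # e.g. result == "Contract source code not verified"
        raise ValueError(f"ABI unavailable: {resp.get('result')}")
    # On success, 'result' is the ABI serialized as a JSON string.
    return json.loads(resp["result"])

ok = {"status": "1", "message": "OK",
      "result": '[{"type":"function","name":"transfer","inputs":[]}]'}
abi = parse_abi_response(ok)
print(abi[0]["name"])  # -> transfer
```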

get_contract_creator_and_creation_tx_hash (Grade: B)

Get Contract Creator and Creation Tx Hash — Returns a contract's deployer address and transaction hash it was created, up to 5 at a time.

Parameters (JSON Schema)
contractaddresses (optional): query parameter: contractaddresses (string[])
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the 5-item batch limit and the return values (deployer address and tx hash), but omits read-only status, error-handling behavior, and cache characteristics that an agent would need in order to assess mutation safety and reliability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence (or compound clause) with minimal redundancy. The leading phrase repeats the tool name slightly, but the em-dash structure effectively separates the action from the return value description without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description partially compensates by stating it returns the 'deployer address and transaction hash.' However, it does not describe the return structure (object vs array), field names, or behavior when a contract address is invalid or not found, leaving gaps for a tool with zero annotation coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (contractaddresses is documented as a string array), establishing a baseline of 3. The description adds the 'up to 5 at a time' constraint which provides semantic context for the array length, but does not elaborate on address format validation or the relationship between input order and output order.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a contract's deployer address and creation transaction hash, and specifies the 'up to 5 at a time' batch limit. However, it does not explicitly differentiate from siblings like get_contract_source_code_for_verified_contract_source_codes which handles different contract metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides the constraint 'up to 5 at a time,' implying a batch limit that guides usage. However, it lacks explicit guidance on when to use this versus alternatives (e.g., whether to use this for verified vs unverified contracts) or what to do if more than 5 addresses need processing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
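The "more than 5 addresses" gap noted above implies client-side batching. A sketch of splitting an address list into API-sized chunks; the comma-separated join is an assumption about how the `contractaddresses` array is serialized:

```python
from urllib.parse import urlencode

# Split a list of contract addresses into query strings that respect the
# documented 5-per-call batch limit.
def creation_queries(addresses: list[str], batch: int = 5) -> list[str]:
    queries = []
    for i in range(0, len(addresses), batch):
        chunk = addresses[i:i + batch]
        queries.append(urlencode({
            "module": "contract",
            "action": "getcontractcreation",
            "contractaddresses": ",".join(chunk),  # assumed comma-separated form
        }))
    return queries

addrs = [f"0x{n:040x}" for n in range(7)]  # 7 dummy addresses
print(len(creation_queries(addrs)))  # -> 2
```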

get_contract_source_code_for_verified_contract_source_codes (Grade: B)

Get Contract Source Code for Verified Contract Source Codes — Returns the Solidity source code of a verified smart contract. 📩 Tip : You can also download a CSV list of verified contracts addresses of which the code publishers have provided a corresponding Open Source license for redistribution.

Parameters (JSON Schema)
address (required): query parameter: address (string)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It establishes that the contract must be 'verified' (important behavioral constraint), but omits critical details like error handling when unverified, return format (JSON vs raw text), or whether multiple source files are returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with a tautological restatement of the tool title. The CSV tip, while useful, is tangential to the core function of retrieving source code for a specific address, slightly diluting the focus.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool, the description covers the basic purpose but lacks essential context given the absence of an output schema. It does not describe the return structure, pagination (if applicable), or error states when the address is invalid or unverified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description implies the address parameter should be a verified contract address, but adds no syntax details, validation rules, or format requirements beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Returns the Solidity source code of a verified smart contract,' distinguishing it from siblings like get_contract_abi_for_verified_contract_source_codes (which returns ABI) and verify_source_code (which submits code). However, the opening phrase merely restates the tool name in title case before the em-dash.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like verify_source_code or check_source_code_verification_status. The CSV download tip mentions a related feature but doesn't clarify tool selection criteria or prerequisites (e.g., that the contract must already be verified).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
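The undescribed response shape is the main gap here. In practice the endpoint returns a list of objects with fields such as `ContractName`, `CompilerVersion`, and `SourceCode`; those field names follow Etherscan's published response shape but are hedged here since no output schema is given, and the sample record below is hand-written, not live data:

```python
# Summarize one entry of a source-code response. An empty SourceCode string
# is the conventional signal that the contract is not verified.
def summarize_source(result: list[dict]) -> dict:
    entry = result[0]
    return {
        "name": entry.get("ContractName", ""),
        "compiler": entry.get("CompilerVersion", ""),
        "verified": bool(entry.get("SourceCode")),
    }

sample = [{"ContractName": "DAO", "CompilerVersion": "v0.3.1",
           "SourceCode": "contract DAO { }"}]
print(summarize_source(sample))
```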

get_daily_average_block_size (Grade: B)

Get Daily Average Block Size — Returns the daily average block size within a date range. PRO endpoint.

Parameters (JSON Schema)
sort (optional): query parameter: sort ("asc" | "desc")
enddate (required): query parameter: enddate (string)
startdate (required): query parameter: startdate (string)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It notes the PRO endpoint requirement (indicating access restrictions), but fails to confirm that the operation is read-only (implied by 'Get'), and does not disclose the return data structure, rate limits, or data availability constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with three informative clauses. It is front-loaded with the action, and the 'PRO endpoint' tag provides value. Minor deduction for the first clause partially restating the tool name ('Get Daily Average Block Size').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (3 flat parameters with 100% coverage) and lack of output schema, the description covers the basic operation scope and access tier. However, it omits return value description, pagination behavior, or clarification on how 'daily average' is calculated, leaving gaps for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'date range' generally, which maps to the startdate and enddate parameters, but adds no semantic details about date formats, valid ranges, or the purpose of the sort parameter beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves 'daily average block size' within a date range, using specific verbs and resource identification. It distinguishes itself from sibling tools like get_daily_average_gas_price by specifying the 'block size' metric, though it does not explicitly contrast with other daily average endpoints.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies this as a 'PRO endpoint,' indicating authentication or tier requirements. However, it lacks explicit guidance on when to use this tool versus alternatives like get_daily_block_count_and_rewards or other daily average metrics, and does not mention prerequisites beyond the PRO status.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
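The unstated date format is the kind of detail the description should carry. A sketch of the request, assuming the yyyy-MM-dd format used by Etherscan's stats endpoints; the range check is illustrative client-side validation, not documented behavior:

```python
from datetime import date
from urllib.parse import urlencode

# Build the query for the daily average block size endpoint.
def daily_avg_block_size_query(startdate: date, enddate: date,
                               sort: str = "asc") -> str:
    if startdate > enddate:
        raise ValueError("startdate must not be after enddate")
    return urlencode({
        "module": "stats",
        "action": "dailyavgblocksize",
        "startdate": startdate.isoformat(),  # yyyy-MM-dd (assumed format)
        "enddate": enddate.isoformat(),
        "sort": sort,  # chronological ordering of the daily records
    })

print(daily_avg_block_size_query(date(2019, 2, 1), date(2019, 2, 28)))
```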

get_daily_average_gas_limit (Grade: A)

Get Daily Average Gas Limit — Returns the historical daily average gas limit of the Ethereum network. PRO endpoint.

Parameters (JSON Schema)
sort (optional): query parameter: sort ("asc" | "desc")
enddate (required): query parameter: enddate (string)
startdate (required): query parameter: startdate (string)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentioning 'PRO endpoint' adds necessary behavioral context about access restrictions. However, lacks disclosure on rate limits, pagination behavior, or definitive read-only status confirmation beyond the implicit 'Returns'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient structure using em-dash to combine title-like phrase with function description, plus 'PRO endpoint' tag. No redundancy or filler text; every word earns its place in conveying purpose and access tier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for the tool's complexity: mentions return value (historical daily average gas limit), notes PRO constraint, and relies on complete input schema. Could improve by addressing potential confusion with gas price sibling or describing output structure, but sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with mechanical but complete parameter descriptions (startdate, enddate, sort). The description adds no additional parameter semantics (e.g., date format details, sort behavior), which aligns with baseline 3 when the schema is fully self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

'Returns the historical daily average gas limit of the Ethereum network' provides a specific verb and resource, and naming 'gas limit' distinguishes it from the sibling 'get_daily_average_gas_price'. An explicit warning about the gas limit vs. gas price distinction would remove any remaining ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Notes the 'PRO endpoint' requirement, indicating authentication/subscription constraints necessary for invocation. However, lacks explicit guidance on when to use this versus the similar 'get_daily_average_gas_price' sibling or date range limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_daily_average_gas_price (Grade: B)

Get Daily Average Gas Price — Returns the daily average gas price used on the Ethereum network. PRO endpoint.

Parameters (JSON Schema)
sort (optional): query parameter: sort ("asc" | "desc")
enddate (required): query parameter: enddate (string)
startdate (required): query parameter: startdate (string)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full disclosure burden. It adds valuable context by noting this is a 'PRO endpoint' (indicating authentication/tier requirements) and uses 'Returns' to signal a read-only operation. However, it lacks details on response format, pagination for large date ranges, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two short statements. The information is front-loaded with the specific resource identifier, though the em-dash structure slightly fragments the flow. No redundant or wasted content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, date range query) and lack of output schema, the description adequately covers the core purpose but incompletely describes the return value structure. For a data retrieval tool without output schema, some hint of the returned data format (time-series list vs single value) would be expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no semantic information about the parameters (e.g., expected date formats, that sort controls chronological ordering), but the schema fully documents the three parameters (startdate, enddate, sort).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (daily average gas price) and action (Get/Returns) on the Ethereum network. It implicitly distinguishes from siblings like `eth_gasprice` (current price) and `get_daily_average_gas_limit` by specifying 'gas price' and 'daily average', though it does not explicitly contrast with real-time alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this historical daily average tool versus real-time alternatives like `eth_gasprice` or `get_gas_oracle`. The only usage hint is 'PRO endpoint', indicating access tier requirements but not selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
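The unhinted return format matters in practice because the averages are denominated in wei, while agents usually reason in Gwei (1 Gwei = 10^9 wei). The `avgGasPrice_Wei` field name follows Etherscan's published response shape but is an assumption here, since no output schema is given; the sample record is hand-written, not live data:

```python
from decimal import Decimal

# Convert one daily record's average gas price from wei to Gwei.
def avg_gas_price_gwei(record: dict) -> Decimal:
    return Decimal(record["avgGasPrice_Wei"]) / Decimal(10**9)

sample = {"UTCDate": "2020-02-01", "avgGasPrice_Wei": "15000000000"}
print(avg_gas_price_gwei(sample))  # -> 15
```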

get_daily_average_network_difficulty (Grade: A)

Get Daily Average Network Difficulty — Returns the historical mining difficulty of the Ethereum network. PRO endpoint.

Parameters (JSON Schema)
sort (optional): query parameter: sort ("asc" | "desc")
enddate (required): query parameter: enddate (string)
startdate (required): query parameter: startdate (string)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses this is a 'PRO endpoint' (auth requirement) and 'historical' data (temporal scope), but lacks details on rate limits, date range constraints, data availability, or return format that would help the agent handle errors or pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient segments with zero waste: the function label, the em-dash explanation, and the PRO designation. Information is front-loaded. The fragment 'PRO endpoint.' is acceptably concise, though slightly telegraphic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a straightforward data retrieval tool with fully documented schema (100% coverage) and clear differentiation from 10+ similar daily average metrics, the description is adequate. It identifies the specific metric (network difficulty), domain (Ethereum), and tier requirement (PRO).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting startdate, enddate, and sort parameters with examples. The description mentions 'Daily Average' which reinforces the date-range nature of the query, but does not add semantic details beyond the schema. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' and clearly identifies the resource as 'historical mining difficulty of the Ethereum network'. It effectively distinguishes from numerous siblings like get_daily_average_network_hash_rate, get_daily_average_gas_price, etc. by specifying 'difficulty' specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes 'PRO endpoint' which provides important operational context about authentication requirements. However, it lacks explicit guidance on when to use this tool versus the many other get_daily_average_* siblings (e.g., hash rate vs difficulty), relying instead on the specific resource name to imply usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_daily_average_network_hash_rate (Grade: B)

Get Daily Average Network Hash Rate — Returns the historical measure of processing power of the Ethereum network. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
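The table above amounts to three flat query parameters. As a minimal sketch of client-side validation before dispatching the call (the helper name is hypothetical; the yyyy-MM-dd date format and the asc/desc constraint are taken from the schema examples cited later in this review, e.g. '2019-02-01'):

```python
from datetime import datetime

def build_query(startdate: str, enddate: str, sort: str = "asc") -> dict:
    """Validate the three documented parameters before calling the tool.

    Assumes dates use the yyyy-MM-dd format shown in the schema examples.
    """
    for label, value in (("startdate", startdate), ("enddate", enddate)):
        try:
            datetime.strptime(value, "%Y-%m-%d")
        except ValueError:
            raise ValueError(f"{label} must be yyyy-MM-dd, got {value!r}")
    if sort not in ("asc", "desc"):
        raise ValueError('sort must be "asc" or "desc"')
    return {"startdate": startdate, "enddate": enddate, "sort": sort}
```

Validating locally like this avoids burning a PRO-tier request on a call the API would reject anyway.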
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full disclosure burden. It provides crucial behavioral context by identifying this as a 'PRO endpoint' (indicating authentication tier and potential rate limits) and 'historical' data (indicating temporal scope). However, it lacks details on output format, units (TH/s?), pagination, or error handling patterns given the absence of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise at one sentence plus a label. It is front-loaded with the action (Returns) and key resource identification. The em-dash structure is slightly informal but efficient, and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter schema with no nested objects and no output schema, the description adequately covers the core function. However, it should describe the return value structure (e.g., daily records with timestamps and hash rate values) given the lack of an output schema to guide result handling.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (all three parameters have descriptions), the baseline score is 3. The description text adds no additional parameter semantics (e.g., valid date formats, sort behavior), but the schema adequately covers the mechanical definitions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the historical measure of processing power of the Ethereum network,' providing a specific verb (Returns), resource (network hash rate/processing power), and scope (historical/daily average). It distinguishes itself from siblings like get_daily_average_gas_price or get_daily_average_network_difficulty by explicitly defining hash rate as 'processing power.'

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description only provides the constraint 'PRO endpoint,' implying authentication requirements but offers no explicit guidance on when to use this tool versus alternatives like get_daily_average_network_difficulty or transaction-based tools. It does not mention the required date range parameters or typical use cases for hash rate analysis.

get_daily_average_time_for_a_block_to_be_included_in_the (Grade: B)

Get Daily Average Time for A Block to be Included in the Ethereum Blockchain — Returns the daily average of time needed for a block to be successfully mined. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the 'PRO endpoint' requirement (important behavioral context) and implies a read-only data retrieval operation, but lacks details on data freshness, rate limiting, pagination, or the specific time units returned (seconds, minutes?).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first establishes the domain and action, the second explains the return value and access tier. No redundant words; every element serves a distinct purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the straightforward 3-parameter input schema (100% documented) and lack of output schema, the description adequately explains what data is returned (daily average mining time) and the PRO access requirement. Minor gap: it doesn't specify time units for the returned average.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear parameter descriptions and examples (e.g., '2019-02-01'). Since the schema fully documents the date format and sort options, the description doesn't need to add parameter details, meeting the baseline score for high-coverage schemas.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the 'daily average of time needed for a block to be successfully mined' on Ethereum, distinguishing it from sibling 'get_daily_average_*' tools (e.g., gas price, block size) by specifying the block inclusion time metric. However, it doesn't explicitly clarify when to choose this over related tools like 'get_estimation_of_confirmation_time'.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes this is a 'PRO endpoint,' indicating an access tier requirement, but provides no guidance on when to use this tool versus alternatives. It doesn't explain that this returns historical daily averages versus real-time estimates, or specify date range limitations.

get_daily_block_count_and_rewards (Grade: B)

Get Daily Block Count and Rewards — Returns the number of blocks mined daily and the amount of block rewards. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It identifies this as a 'PRO endpoint' (useful for auth context) but omits data freshness, rate limits, whether date ranges are inclusive/exclusive, and the units for rewards (ETH, Gwei, wei).
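Because the description leaves the reward denomination open, a caller may need to normalize units itself. The Ethereum unit ratios are fixed (1 ETH = 10^9 Gwei = 10^18 wei), so conversion is one line of arithmetic; the helper names below are illustrative:

```python
# Fixed Ethereum denominations: 1 Gwei = 1e9 wei, 1 ETH = 1e18 wei.
WEI_PER_GWEI = 10**9
WEI_PER_ETHER = 10**18

def wei_to_ether(wei: int) -> float:
    """Convert a wei amount to ether."""
    return wei / WEI_PER_ETHER

def gwei_to_ether(gwei: float) -> float:
    """Convert a Gwei amount to ether."""
    return gwei * WEI_PER_GWEI / WEI_PER_ETHER
```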

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact at one sentence plus a tag. However, 'Get Daily Block Count and Rewards' restates the tool name without adding information, slightly weakening the front-loading.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should ideally hint at return structure (array of daily records? units? timestamps?), but it only specifies the two data points returned. Adequate for a simple retrieval tool but leaves operational gaps.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with mechanical descriptions present for all parameters. The description adds no semantic clarifications (e.g., date format standards, that sort defaults to asc), but with complete schema coverage, baseline 3 is appropriate.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns 'number of blocks mined daily and the amount of block rewards' with a specific verb and resources. However, it fails to distinguish from the sibling tool 'get_daily_block_rewards' (which appears to be a subset), leaving tool selection ambiguous.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus the similar 'get_daily_block_rewards' or other daily statistics tools. The 'PRO endpoint' mention hints at access tier but doesn't explain prerequisites or alternatives.

get_daily_block_rewards (Grade: B)

Get Daily Block Rewards — Returns the amount of block rewards distributed to miners daily. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It provides useful behavioral context by identifying this as a 'PRO endpoint' (access tier). However, it lacks details on return structure (array vs object), pagination behavior, or data availability gaps despite having no output schema to provide such context.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose statement. Two distinct information chunks separated by em-dash: the action/title and the behavioral description. 'PRO endpoint' placement at end provides necessary context without clutter. Minor redundancy with tool name in first phrase prevents a 5.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has simple parameter structure (3 flat params, 100% schema coverage) requiring minimal description. However, with no output schema provided, description should ideally detail return format (list of daily records? single aggregated value?) beyond just 'amount of block rewards'. PRO designation noted, but overall completeness is adequate, not exemplary.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with clear parameter documentation. Description adds no additional semantic detail about parameters (e.g., date format expectations, sort field behavior). Baseline score of 3 applies since schema adequately documents parameters without description supplementation.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear verb (Returns) + resource (block rewards) + scope (daily, to miners). Specifies this tracks miner distributions specifically. However, does not explicitly distinguish from sibling 'get_daily_block_count_and_rewards' which returns similar data plus block counts, potentially causing confusion on which to select.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains no guidance on when to use versus alternative tools (e.g., 'get_daily_block_count_and_rewards' or 'get_block_and_uncle_rewards_by_blockno'). While 'PRO endpoint' hints at access requirements, it does not state prerequisites, rate limits, or filtering capabilities.

get_daily_network_transaction_fee (Grade: B)

Get Daily Network Transaction Fee — Returns the amount of transaction fees paid to miners per day. PRO endpoint.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it identifies the data returned, it omits critical details such as the time range returned (current day only?), response format, rate limiting specific to PRO endpoints, or whether this requires additional authentication parameters.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with minimal waste, though the first clause 'Get Daily Network Transaction Fee' largely restates the tool name rather than adding new information. The 'PRO endpoint' tag is appropriately placed at the end.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should ideally describe the return structure (e.g., timestamped values, units in wei/ether). It mentions 'amount' but lacks format details, leaving gaps for a data retrieval tool with rich siblings.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description correctly implies no filtering is needed (returning aggregate daily data), though it could clarify what determines which day's data is returned.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the amount of transaction fees paid to miners per day, using specific verbs and identifying the resource. However, it does not explicitly differentiate this from similar daily metrics siblings like get_daily_average_gas_price or get_ethereum_daily_total_gas_used.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies this as a 'PRO endpoint,' implying authentication or tier requirements, but provides no explicit guidance on when to use this tool versus other daily data tools, nor does it specify prerequisites beyond the endpoint level.

get_daily_network_utilization (Grade: A)

Get Daily Network Utilization — Returns the daily average gas used over gas limit, in percentage. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It successfully communicates the calculation methodology (gas used over gas limit) and the PRO endpoint restriction. However, it omits rate limits, maximum date ranges, or timezone handling that would be useful for a time-series data tool.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: two sentences with zero redundancy. Front-loaded with the return value explanation and ends with the critical 'PRO endpoint' constraint. Every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple data retrieval tool without output schema, the description adequately specifies what is returned (percentage value) and how it's calculated. The PRO endpoint notice provides necessary operational context. Only minor gaps remain regarding date range constraints or response format specifics.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline score applies. The description implies daily granularity aligns with the date parameters but does not augment the schema's parameter documentation with additional context about date formats or sort behavior.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the specific metric returned: 'daily average gas used over gas limit, in percentage.' This distinguishes it from sibling tools like get_daily_average_gas_limit or get_ethereum_daily_total_gas_used by specifying the exact calculation (utilization ratio) rather than raw values.
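The metric the description names is a simple ratio. A sketch of the calculation it implies (the function name is illustrative, not part of the API):

```python
def utilization_pct(gas_used: int, gas_limit: int) -> float:
    """Share of available block gas actually consumed, as a percentage.

    Mirrors the metric described as 'gas used over gas limit, in percentage'.
    """
    if gas_limit <= 0:
        raise ValueError("gas_limit must be positive")
    return gas_used / gas_limit * 100.0
```

For example, a day averaging 15M gas used against a 30M gas limit is 50% utilization.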

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies this as a 'PRO endpoint,' implying access tier requirements, but provides no explicit guidance on when to select this tool versus other daily metric tools (e.g., when network congestion analysis is needed). Usage is implied by the specific metric description but not explicitly guided.

get_daily_new_address_count (Grade: B)

Get Daily New Address Count — Returns the number of new Ethereum addresses created per day. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the 'PRO endpoint' access restriction but fails to address rate limits, data availability windows, whether results are cumulative or daily snapshots, or caching behavior.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with the em-dash header providing immediate context, followed by the function description and access tier. Slightly redundant with tool name in the header, but no wasted words. Three distinct informational units in minimal space.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a 3-parameter tool with full schema coverage, but gaps remain: there is no output schema description, little behavioral context, and no differentiation from the 30+ similar daily metric tools.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all three parameters have type descriptions and examples where relevant). The description itself adds no parameter-specific semantics, but the schema adequately documents startdate, enddate, and sort parameters.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool returns the number of new Ethereum addresses created per day, using specific verb ('Returns') and resource ('new Ethereum addresses'). It effectively distinguishes itself from siblings like get_daily_transaction_count or get_daily_average_gas_price by specifying the 'new address' metric.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides minimal usage guidance. The 'PRO endpoint' mention indicates access tier requirements but lacks explicit when-to-use guidance, prerequisites (date formats), or alternatives among the many sibling get_daily_* tools.

get_daily_transaction_count (Grade: C)

Get Daily Transaction Count — Returns the number of transactions performed on the Ethereum blockchain per day. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It only discloses the 'PRO endpoint' access tier. Missing critical behavioral details: return format (array vs object), rate limits, maximum date range constraints, and error handling.
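With no output schema, an agent has to guess at the response shape. A defensive parsing sketch, assuming the endpoint returns a list of daily records keyed by "UTCDate" and "transactionCount" (these field names are assumptions, not documented by the tool):

```python
from datetime import date

def parse_daily_counts(records):
    """Normalize assumed daily records into (date, count) pairs.

    The "UTCDate" and "transactionCount" keys are guesses about the
    response shape; adjust once the real payload is observed.
    """
    out = []
    for rec in records:
        day = date.fromisoformat(rec["UTCDate"])   # e.g. "2019-02-01"
        out.append((day, int(rec["transactionCount"])))
    return out
```

The explicit KeyError/ValueError surface of this sketch is exactly the risk an output schema would remove.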

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient clauses with zero waste. Front-loaded with the action and resource, followed by the access tier qualification. Every word earns its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should explain the return structure (time series format, fields included). It fails to compensate for the missing output specification or describe the PRO authentication requirements in detail.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for sort, startdate, and enddate including format examples. The description adds no parameter semantics beyond the schema, meeting the baseline of 3 for high-coverage schemas.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Returns') and specific resource ('number of transactions performed on the Ethereum blockchain per day'). The description is specific enough to distinguish from siblings like get_daily_block_count or get_daily_gas_price, though it doesn't explicitly name alternatives.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Only provides the 'PRO endpoint' constraint hinting at access requirements, but lacks explicit when-to-use guidance, prerequisites (e.g., API key requirements for PRO), or alternatives for different granularities.

get_daily_uncle_block_count_and_rewards (Grade: A)

Get Daily Uncle Block Count and Rewards — Returns the number of 'Uncle' blocks mined daily and the amount of 'Uncle' block rewards. PRO endpoint.

Parameters (JSON Schema)
Name | Required | Description
sort | No | query parameter: sort ("asc" | "desc")
enddate | Yes | query parameter: enddate (string)
startdate | Yes | query parameter: startdate (string)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the PRO endpoint restriction (auth requirement) and explains what data is returned (count and rewards). It lacks explicit mention of idempotency or error states, but covers the essential behavioral context for a read-only historical data endpoint.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first sentence front-loads the core function, while the second provides critical access tier information. Every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% input schema coverage and absence of an output schema, the description is appropriately complete. It explains the domain-specific concept of 'Uncle' blocks (count + rewards) and the PRO requirement, providing sufficient context for an agent to understand the tool's scope within the Ethereum data domain.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds conceptual context by stating the tool returns 'daily' data, which semantically aligns with the 'startdate' and 'enddate' parameters, but does not add syntax details, format specifications, or behavioral constraints beyond what the schema provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Returns the number of 'Uncle' blocks mined daily and the amount of 'Uncle' block rewards,' using specific verbs and resources. It clearly distinguishes itself from the sibling tool 'get_daily_block_count_and_rewards' by specifying 'Uncle' blocks, and from 'get_block_and_uncle_rewards_by_blockno' by emphasizing the daily aggregation.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes this is a 'PRO endpoint,' indicating an access tier prerequisite, but lacks explicit guidance on when to use this daily aggregate tool versus the block-specific alternative ('get_block_and_uncle_rewards_by_blockno') or other sibling tools. Usage must be inferred from the 'daily' keyword.

get_erc20_token_account_balance_for_tokencontractaddressCInspect

Get ERC20-Token Account Balance for TokenContractAddress — Returns the current balance of an ERC-20 token of an address.

Parameters (JSON Schema)
Name             Required  Description                                 Default
address          Yes       query parameter: address (string)
contractaddress  Yes       query parameter: contractaddress (string)
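
For context on the two required parameters, a minimal sketch of the upstream request this tool presumably wraps, assuming the conventional Etherscan `account/tokenbalance` action (the action name and base URL are assumptions, not confirmed by this listing):

```python
from urllib.parse import urlencode

# Assumed upstream base URL; the MCP server's actual transport is not shown here.
BASE = "https://api.etherscan.io/api"

def token_balance_url(address: str, contractaddress: str, api_key: str) -> str:
    """Build the query URL for an ERC-20 balance lookup.

    The response's `result` field is a raw integer string in the token's
    smallest unit, i.e. not adjusted for the token's `decimals`.
    """
    params = {
        "module": "account",
        "action": "tokenbalance",
        "contractaddress": contractaddress,
        "address": address,
        "tag": "latest",
        "apikey": api_key,
    }
    return f"{BASE}?{urlencode(params)}"
```
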
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds 'current' to indicate real-time data (distinguishing from historical), but omits critical behavioral details like error handling for invalid addresses, decimal precision in returns, rate limits, or what constitutes a valid ERC-20 contract.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two brief sentences with minimal redundancy. The first sentence echoes the tool name somewhat ('Get ERC20-Token Account Balance for TokenContractAddress'), but the em-dash structure separates the action from the explanation efficiently. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple balance retrieval with two well-documented parameters. However, lacking output schema and annotations, the description should ideally clarify return format (e.g., raw wei vs formatted) or decimal handling. Sufficient but not comprehensive.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'of an address' which loosely maps to the address parameter, but provides no additional semantic context beyond the schema's basic 'query parameter' labels. No compensation needed for missing schema docs.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the 'current balance of an ERC-20 token of an address,' providing specific verb (Get/Returns) and resource. However, it does not explicitly distinguish from siblings like `get_address_erc20_token_holding` (all tokens) or `get_historical_erc20_token_account_balance_for` (historical data).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as `get_historical_erc20_token_account_balance_for` for past balances or `get_address_erc20_token_holding` for retrieving all token holdings. No prerequisites or conditions mentioned.

get_erc20_token_totalsupply_by_contractaddress (C)

Get ERC20-Token TotalSupply by ContractAddress — Returns the current amount of an ERC-20 token in circulation.

Parameters (JSON Schema)
Name             Required  Description                                 Default
contractaddress  Yes       query parameter: contractaddress (string)
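
A sketch of the query this tool would send upstream, assuming the standard Etherscan `stats/tokensupply` action (an assumption, not confirmed by this listing):

```python
def token_supply_params(contractaddress: str) -> dict:
    """Query parameters for a total-supply lookup.

    The `result` field is the raw total supply in the token's smallest
    unit (not decimal-adjusted), per the usual Etherscan convention.
    """
    return {
        "module": "stats",
        "action": "tokensupply",
        "contractaddress": contractaddress,
    }
```
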
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it notes the operation returns the 'current amount', it fails to specify critical behavioral details such as the return format (e.g., raw integer, string, decimal-adjusted), units (wei vs. token units), or whether the operation is read-only and idempotent.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with two efficient clauses. The first mirrors the tool name/action, and the second defines the return value concept ('amount in circulation'). There is no redundant or filler text, though the em-dash structure slightly reduces clarity compared to distinct sentences.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter) and complete schema coverage, the description adequately explains the core concept. However, without an output schema or annotations, the omission of return value format and units prevents it from being fully complete for a financial data retrieval tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score is 3. The description mentions 'by ContractAddress' which aligns conceptually with the 'contractaddress' parameter, but it adds no additional semantic context (e.g., explaining this must be a valid ERC-20 contract address) beyond what the schema's mechanical description already provides.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the total supply of an ERC-20 token using a contract address, using specific verbs ('Get', 'Returns') and resource terminology ('TotalSupply', 'in circulation'). It implicitly distinguishes from siblings like 'get_erc20_token_account_balance' (total vs. individual holdings) and 'get_historical_erc20_token_totalsupply' (current vs. historical) via the phrase 'current amount'.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_erc20_token_account_balance_for_tokencontractaddress' (for wallet balances) or 'get_historical_erc20_token_totalsupply_by_contractaddress_and' (for past data). There are no prerequisites mentioned (e.g., requiring a valid ERC-20 contract address) or exclusions.

get_estimated_block_countdown_time_by_blockno (B)

Get Estimated Block Countdown Time by BlockNo — Returns the estimated time remaining, in seconds, until a certain block is mined.

Parameters (JSON Schema)
Name     Required  Description                         Default
blockno  Yes       query parameter: blockno (number)
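
A sketch of the upstream call, assuming the standard Etherscan `block/getblockcountdown` action (the action name and the past-block error behavior are assumptions, not confirmed by this listing):

```python
from urllib.parse import urlencode

def block_countdown_url(blockno: int, api_key: str) -> str:
    """Build the countdown query for a future block number.

    For a block that has already been mined, the upstream API is assumed
    to return an error payload rather than a countdown, which is exactly
    the edge case the review above flags as undocumented.
    """
    params = {
        "module": "block",
        "action": "getblockcountdown",
        "blockno": blockno,
        "apikey": api_key,
    }
    return "https://api.etherscan.io/api?" + urlencode(params)
```
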
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies the return unit (seconds) and that the value is an estimate, but lacks critical context: behavior when the block number is in the past, how the estimation is calculated, or what indicates an already-mined block.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact at one sentence. However, the leading clause 'Get Estimated Block Countdown Time by BlockNo' largely restates the tool name rather than front-loading unique value. The em-dash construction is slightly formal but acceptable.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool without output schema or annotations, the description adequately explains the return value format. However, it is incomplete regarding edge cases (past blocks, invalid block numbers) and lacks behavioral context that annotations would typically provide, leaving gaps for a production blockchain tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, documenting 'blockno' as a number query parameter. The description references 'BlockNo' but adds no semantic meaning beyond the schema (e.g., whether it accepts hex, decimal, or future-only values). This meets the baseline for high schema coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns an estimated time remaining in seconds until a specific block is mined. It uses a specific verb-resource pattern ('Get...Block Countdown Time'), though it could better distinguish from the sibling tool 'get_estimation_of_confirmation_time' which handles transaction confirmation rather than block mining.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_estimation_of_confirmation_time' or 'eth_getblockbynumber'. It does not mention prerequisites (e.g., that the block number should be in the future) or when not to use this tool.

get_estimation_of_confirmation_time (B)

Get Estimation of Confirmation Time — Returns the estimated time, in seconds, for a transaction to be confirmed on the blockchain.

Parameters (JSON Schema)
Name      Required  Description                          Default
gasprice  Yes       query parameter: gasprice (number)
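
The gas-price unit the review flags as undocumented can be made explicit in a wrapper; a sketch assuming the standard Etherscan `gastracker/gasestimate` action, where the price is given in wei (both assumptions, not confirmed by this listing):

```python
def confirmation_time_params(gasprice_wei: int) -> dict:
    """Query parameters for a confirmation-time estimate.

    `gasprice` is assumed to be denominated in wei; the response's
    `result` is the estimated confirmation time in seconds.
    """
    return {
        "module": "gastracker",
        "action": "gasestimate",
        "gasprice": str(gasprice_wei),
    }
```
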
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden of behavioral disclosure. It specifies the return unit (seconds) and the concept being estimated, but omits safety characteristics (read-only vs write), rate limits, or whether the estimation is based on real-time network conditions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded, efficiently communicating the return value. The em-dash structure is slightly awkward but does not significantly impede clarity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description adequately explains what is returned, but lacks operational context (e.g., API endpoint nature) that would be necessary given the absence of annotations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'gasprice' parameter, the baseline is met. However, the tool description itself adds no semantic context about the parameter (e.g., what units the gas price should be in, or why it affects confirmation time).

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns an estimated confirmation time in seconds for blockchain transactions, adding specificity with the unit and context. However, it does not explicitly differentiate this time-estimation function from sibling tools like eth_estimategas (which estimates gas limits).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as eth_estimategas or get_gas_oracle, nor does it mention prerequisites like obtaining a gas price value first.

get_ether_balance_for_a_single_address (B)

Get Ether Balance for a Single Address — Returns the Ether balance of a given address.

Parameters (JSON Schema)
Name     Required  Description                                                Default
tag      Yes       query parameter: tag ("earliest" | "latest" | "pending")
address  Yes       query parameter: address (string)
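
The undocumented `tag` semantics and wei-denominated return value flagged below can be sketched in a wrapper, assuming the standard Etherscan `account/balance` action (an assumption, not confirmed by this listing):

```python
def balance_params(address: str, tag: str = "latest") -> dict:
    """Query parameters for a single-address Ether balance lookup.

    `tag` selects the block state to read: "earliest", "latest", or
    "pending".
    """
    if tag not in ("earliest", "latest", "pending"):
        raise ValueError("tag must be 'earliest', 'latest', or 'pending'")
    return {"module": "account", "action": "balance",
            "address": address, "tag": tag}

def wei_to_ether(wei: str) -> float:
    # The API is assumed to return the balance as a wei string; convert
    # for display (this is the return-format gap the review notes).
    return int(wei) / 10**18
```
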
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only states that it 'Returns the Ether balance' without clarifying what the 'tag' parameter does (block state selection), the return format (wei/ether), or whether the operation is read-only.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured and front-loaded, though slightly redundant as it essentially repeats the tool name's meaning in sentence form ('Get Ether Balance... Returns the Ether balance').

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only two parameters with complete schema coverage, the description is minimally adequate, but gaps remain regarding the return value format (critical since no output schema exists) and the behavioral impact of the 'tag' parameter.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no semantic meaning for the 'address' or 'tag' parameters beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get/Returns) and resource (Ether Balance), and explicitly scopes it to 'Single Address' which distinguishes it from the sibling tool 'get_ether_balance_for_multiple_addresses_in_a_single_call'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Single Address' scope provides implicit differentiation from the batch balance tool, but there is no explicit guidance on when to use this versus the multiple-address alternative, nor explanation of the 'tag' parameter's use for querying historical vs. latest state.

get_ether_balance_for_multiple_addresses_in_a_single_call (B)

Get Ether Balance for Multiple Addresses in a Single Call — Returns the balance of the accounts from a list of addresses.

Parameters (JSON Schema)
Name     Required  Description                                                Default
tag      Yes       query parameter: tag ("earliest" | "latest" | "pending")
address  No        query parameter: address (string[])
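
A sketch of how the address array would be flattened for the upstream call, assuming the standard Etherscan `account/balancemulti` action (the action name and the per-call address cap are assumptions, not confirmed by this listing):

```python
def balancemulti_params(addresses: list[str], tag: str = "latest") -> dict:
    """Query parameters for a batch Ether balance lookup.

    Addresses are comma-joined into a single query value; Etherscan's
    docs commonly cite a cap of 20 addresses per call (assumption).
    """
    if not addresses:
        raise ValueError("at least one address is required")
    return {
        "module": "account",
        "action": "balancemulti",
        "address": ",".join(addresses),
        "tag": tag,
    }
```
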
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description fails to disclose critical behavioral traits such as whether this is a read-only operation, the units of returned balances (wei vs ether), the return data structure, or rate limiting. It only states that it 'Returns the balance' without elaboration.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with the action front-loaded. However, it is slightly redundant: the second clause ('Returns the balance... from a list of addresses') largely restates the first clause ('Get Ether Balance for Multiple Addresses').

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema, the description lacks details about the return format (e.g., mapping structure, units). While the schema covers parameters well, the absence of behavioral context (read-only status, data format) leaves gaps for an agent trying to understand the full contract of the tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both the 'tag' (with enum values) and 'address' parameters. The description mentions 'list of addresses' which aligns with the schema but does not add significant semantic meaning beyond what the schema already provides. Baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get Ether Balance), resource (accounts/addresses), and scope (Multiple Addresses in a Single Call). The 'Multiple Addresses' phrasing effectively distinguishes it from the sibling tool 'get_ether_balance_for_a_single_address'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the phrase 'in a Single Call' implies efficiency benefits for batch operations, there is no explicit guidance on when to use this versus the single address alternative, nor are prerequisites or exclusions stated. The usage is implied but not directive.

get_ethereum_daily_total_gas_used (C)

Get Ethereum Daily Total Gas Used — Returns the total amount of gas used daily for transctions on the Ethereum network. PRO endpoint.

Parameters (JSON Schema)
Name       Required  Description                                  Default
sort       No        query parameter: sort ("asc" | "desc")
enddate    Yes       query parameter: enddate (string)
startdate  Yes       query parameter: startdate (string)
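
The date-range handling the review below flags as undocumented can be made explicit in a wrapper; a sketch assuming the standard Etherscan `stats/dailygasused` action and yyyy-MM-dd dates (both assumptions, not confirmed by this listing):

```python
from datetime import date

def daily_gas_used_params(start: date, end: date, sort: str = "asc") -> dict:
    """Query parameters for the PRO daily-gas-used endpoint."""
    if start > end:
        raise ValueError("startdate must not be after enddate")
    return {
        "module": "stats",
        "action": "dailygasused",
        "startdate": start.isoformat(),  # yyyy-MM-dd
        "enddate": end.isoformat(),
        "sort": sort,
    }
```
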
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only discloses the 'PRO endpoint' access tier. It fails to mention rate limits, data availability/historical range, timezone handling, aggregation methodology, or error behaviors (e.g., invalid date ranges).

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief at two fragments, but the repetition of the tool name before the em-dash wastes space, and the typo 'transctions' detracts from professionalism. The 'PRO endpoint' note is valuable and well-placed.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a data retrieval tool with no output schema, the description inadequately covers what constitutes 'daily' totals (UTC boundaries?), return value structure, or pagination. Given the existence of 40+ sibling daily metric tools, additional context on data granularity would be expected.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The tool description adds no parameter-specific guidance (e.g., date format details, sort behavior), but the schema adequately documents the three parameters with basic type info and examples.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (daily total gas used on Ethereum network) and action (Returns), and distinguishes it from siblings like get_daily_average_gas_price by specifying 'total amount of gas used'. However, it contains a typo ('transctions') and the leading phrase 'Get Ethereum Daily Total Gas Used' redundantly restates the tool name before the em-dash.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Only implicit guidance is provided via 'PRO endpoint', hinting at authentication requirements. There is no explicit guidance on when to use this versus other daily metrics (like get_daily_network_transaction_fee), no mention of date range limits, and no prerequisites beyond the PRO status.

get_ethereum_nodes_size (C)

Get Ethereum Nodes Size — Returns the size of the Ethereum blockchain, in bytes, over a date range.

Parameters (JSON Schema)
Name        Required  Description                                                            Default
sort        No        query parameter: sort ("asc" | "desc")
enddate     Yes       query parameter: enddate (string)
syncmode    Yes       query parameter: syncmode ("default" | "Default" | "archive" | "Archive")
startdate   Yes       query parameter: startdate (string)
clienttype  Yes       query parameter: clienttype ("geth" | "Geth" | "parity" | "Parity")
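
The mixed-case enum values in the schema suggest a normalizing wrapper; a sketch assuming the standard Etherscan `stats/chainsize` action (an assumption, not confirmed by this listing):

```python
def chainsize_params(startdate: str, enddate: str,
                     clienttype: str = "geth", syncmode: str = "default",
                     sort: str = "asc") -> dict:
    """Query parameters for the node-size (chain size) endpoint.

    The schema accepts both capitalizations of clienttype and syncmode;
    normalizing to lowercase keeps calls consistent.
    """
    return {
        "module": "stats",
        "action": "chainsize",
        "startdate": startdate,
        "enddate": enddate,
        "clienttype": clienttype.lower(),
        "syncmode": syncmode.lower(),
        "sort": sort,
    }
```
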
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states the return value (size in bytes) and date range scoping. Missing critical context: whether the size is cumulative or daily, how archive vs default sync modes affect the returned data, rate limits, or data granularity.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses an em-dash structure with a title-case prefix that duplicates the tool name, followed by a functional description. While brief at one sentence, the redundancy and lack of structured formatting (bolding, line breaks) make it slightly less scannable than it could be.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should further explain what the returned 'size' represents (e.g., disk usage per node type vs raw blockchain data). It omits the significance of the clienttype/syncmode discrimination, which is central to this tool's utility.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with clear enum values for clienttype and syncmode. The description mentions 'date range' which loosely contextualizes startdate and enddate together, but adds no semantic value for the sort parameter or the significance of filtering by different client implementations.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns blockchain size in bytes over a date range, using specific verb 'Returns'. Distinguishes from sibling get_total_nodes_count (which counts nodes) by specifying it returns size data in bytes. However, the 'Get Ethereum Nodes Size' prefix is redundant with the tool name.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings like get_daily_average_block_size or get_total_nodes_count. Doesn't explain that clienttype and syncmode parameters are required filters for querying historical node storage metrics, leaving the user to infer usage from the schema alone.

get_ether_historical_daily_market_cap (C)

Get Ether Historical Daily Market Cap — Returns the historical Ether daily market capitalization. PRO endpoint.

Parameters (JSON Schema)
Name       Required  Description                                  Default
sort       No        query parameter: sort ("asc" | "desc")
enddate    Yes       query parameter: enddate (string)
startdate  Yes       query parameter: startdate (string)
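
A sketch of the query parameters this PRO endpoint would take, assuming the standard Etherscan `stats/ethdailymarketcap` action (an assumption, not confirmed by this listing):

```python
def market_cap_params(startdate: str, enddate: str, sort: str = "asc") -> dict:
    """Query parameters for historical daily ETH market cap.

    Dates are assumed to use the yyyy-MM-dd format implied by the
    schema examples; `sort` orders the daily series chronologically.
    """
    return {
        "module": "stats",
        "action": "ethdailymarketcap",
        "startdate": startdate,
        "enddate": enddate,
        "sort": sort,
    }
```
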
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. 'PRO endpoint' hints at access restrictions, but lacks rate limits, max date ranges, data availability, or return format details expected for a data retrieval tool without output schema.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short fragments, efficiently structured and front-loaded with purpose. Slight redundancy between the title-case header and the 'Returns...' clause, but 'PRO endpoint' adds distinct value without verbosity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter read-only tool with 100% schema coverage. 'PRO endpoint' acknowledgment partially compensates for missing annotations, but lacks output description or date range constraints given no output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with examples (date formats) and valid values (sort asc/desc). Description adds no parameter-specific semantics, but baseline 3 is appropriate since schema fully documents inputs.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Returns') and specific resource ('historical Ether daily market capitalization'). Distinguishes from sibling get_ether_historical_price by specifying 'Market Cap' metric. However, 'Get Ether Historical Daily Market Cap' clause largely restates the tool name.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions 'PRO endpoint' indicating access tier restrictions, but provides no guidance on when to use this vs. sibling alternatives (e.g., get_ether_historical_price) or prerequisites like API key requirements.

get_ether_historical_price (C)

Get Ether Historical Price — Returns the historical price of 1 ETH. PRO endpoint.

Parameters (JSON Schema)
Name       Required  Description                                  Default
sort       No        query parameter: sort ("asc" | "desc")
enddate    Yes       query parameter: enddate (string)
startdate  Yes       query parameter: startdate (string)
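
A sketch of the upstream call for this PRO endpoint, assuming the standard Etherscan `stats/ethdailyprice` action (the action name and base URL are assumptions, not confirmed by this listing):

```python
from urllib.parse import urlencode

def eth_daily_price_url(startdate: str, enddate: str, sort: str = "asc",
                        api_key: str = "YourApiKeyToken") -> str:
    """Build the historical daily ETH price query.

    The currency denomination of the result (presumably USD) is not
    stated in this listing, which is the gap the review below notes.
    """
    params = {
        "module": "stats",
        "action": "ethdailyprice",
        "startdate": startdate,
        "enddate": enddate,
        "sort": sort,
        "apikey": api_key,
    }
    return "https://api.etherscan.io/api?" + urlencode(params)
```
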
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It mentions 'PRO endpoint' which reveals an access/auth requirement, but fails to describe the return structure (daily price points? JSON format?), currency denomination (USD?), rate limits, or error behaviors. With zero annotation coverage, this is insufficient behavioral disclosure for a data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is terse and front-loaded with the action ('Returns'). However, the lead phrase 'Get Ether Historical Price' largely restates the tool name before the em-dash, slightly wasting space. 'PRO endpoint' is appropriately placed as a constraint tag at the end.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a simple flat schema with three parameters and no nested objects, the description is minimally adequate. It correctly identifies the PRO status which is critical context. However, it omits expectations about the return value structure and timeframe granularity that would complete the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with all parameters documented. The description adds no additional semantic context about the date format (YYYY-MM-DD implied by examples but not stated) or that 'sort' applies to chronological ordering. Baseline score of 3 is appropriate given the schema does the descriptive work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns the historical price of 1 ETH', using specific verb and resource. It distinguishes from sibling 'get_ether_last_price' by using 'historical' versus 'last', and from 'get_ether_historical_daily_market_cap' by focusing on price rather than market cap. However, it misses the crucial detail that this requires a date range and likely returns a time series, not a single value.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Only provides 'PRO endpoint' as a usage constraint, hinting at access requirements. It fails to specify when to use this versus 'get_ether_last_price' (current vs historical) or clarify the required date range parameters that define the query window. No guidance on date format or valid ranges is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
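The date-range query described above can be sketched as a URL builder an agent might use against the underlying Etherscan API. The `stats`/`ethdailyprice` module/action pair and the YYYY-MM-DD date format are assumptions drawn from Etherscan's public API conventions, not stated by this tool's description:

```python
from urllib.parse import urlencode

BASE = "https://api.etherscan.io/api"

def eth_daily_price_url(startdate: str, enddate: str, sort: str = "asc",
                        apikey: str = "YourApiKeyToken") -> str:
    """Build the query URL for the historical ETH price endpoint.

    Dates use YYYY-MM-DD (implied by the schema examples, never stated
    in the description); `sort` orders the series chronologically.
    """
    if sort not in ("asc", "desc"):
        raise ValueError("sort must be 'asc' or 'desc'")
    params = {
        "module": "stats",
        "action": "ethdailyprice",  # assumed PRO-tier action name
        "startdate": startdate,
        "enddate": enddate,
        "sort": sort,
        "apikey": apikey,
    }
    return f"{BASE}?{urlencode(params)}"
```

Validating `sort` client-side avoids wasting a call against a PRO-metered endpoint.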

get_ether_last_price (grade: B)

Get Ether Last Price — Returns the latest price of 1 ETH.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden but fails to specify the quote currency (USD, BTC, etc.), data source, price staleness tolerance, or availability guarantees. These behavioral traits are essential for a price-feed tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief for a simple parameterless tool, using a front-loaded structure (action followed by explanation). The em-dash format is slightly informal but efficient with no redundant content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While adequate for basic invocation, the description lacks critical domain context for financial data: the currency pair (ETH/USD implied but unspecified), precision, and whether the price is an average, spot, or weighted value. Given the tool's simplicity, this is a gap but not a critical failure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description correctly implies no filtering or configuration is needed to retrieve the global ETH price, consistent with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the latest price of 1 ETH using specific verbs (Get/Returns) and identifies the resource. However, it does not explicitly distinguish from the sibling tool 'get_ether_historical_price', leaving potential ambiguity about whether 'last' means current/latest or previous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus 'get_ether_historical_price' or other pricing tools. No prerequisites, rate limit warnings, or context about data freshness are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
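The unspecified quote currency noted above is the main trap for a consumer of this tool. A minimal parsing sketch, assuming the field names (`ethusd`, `ethbtc`) Etherscan's public `ethprice` action is documented to return:

```python
def parse_eth_price(result: dict) -> dict:
    """Extract quote fields from an `ethprice`-style payload.

    Etherscan's public docs list both a USD and a BTC quote; the tool
    description itself never states which currency is returned.
    """
    return {
        "eth_usd": float(result["ethusd"]),
        "eth_btc": float(result["ethbtc"]),
    }
```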

get_event_logs_by_address (grade: B)

Get Event Logs by Address — Returns the event logs from an address, with optional filtering by block range.

Parameters (JSON Schema)
- page (optional): query parameter: page (number)
- offset (optional): query parameter: offset (number)
- address (required): query parameter: address (string)
- toBlock (required): query parameter: toBlock (number)
- fromBlock (required): query parameter: fromBlock (number)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It indicates a read operation ('Returns'), implying safety, but fails to disclose pagination behavior, maximum block range limits, rate limiting, or the structure/format of the returned event logs. It meets the minimum by indicating data retrieval but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief (one sentence). The em-dash structure leads with a title-like fragment that slightly repeats the tool name, but the content is efficient with no redundant phrases. Every clause conveys necessary scope (address targeting, block range optionality).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 primitive parameters with full schema coverage and no output schema, the description adequately covers the core function but has clear gaps. It should clarify the output format (since no output schema exists) and explicitly state this retrieves logs without topic filtering to differentiate from siblings. The lack of pagination guidance is a minor gap given the schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds semantic value by grouping fromBlock and toBlock as a 'block range,' explaining their relationship. It does not add meaning for page/offset beyond the schema, nor does it clarify address format beyond the schema's example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves (Returns) event logs associated with a specific address and mentions block range filtering. However, it fails to distinguish from the closely named sibling `get_event_logs_by_address_filtered_by_topics`, leaving ambiguity about whether this tool returns all logs or only unfiltered ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes that block range filtering is 'optional,' providing implied guidance for the fromBlock/toBlock parameters. However, it lacks explicit guidance on when to use pagination (page/offset) versus retrieving full ranges, and critically omits when to prefer this tool over `get_event_logs_by_address_filtered_by_topics` or `get_event_logs_by_topics`.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
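The fromBlock/toBlock relationship the schema leaves implicit can be enforced client-side. A sketch of the underlying query, assuming Etherscan's public `logs`/`getLogs` module/action mapping (not confirmed by the tool description):

```python
from urllib.parse import urlencode

BASE = "https://api.etherscan.io/api"

def get_logs_url(address: str, from_block: int, to_block: int,
                 page: int = 1, offset: int = 1000,
                 apikey: str = "YourApiKeyToken") -> str:
    """Build a getLogs query for one address over a block range.

    Rejects inverted ranges before spending an API call; page/offset
    control pagination of the result set.
    """
    if from_block > to_block:
        raise ValueError("fromBlock must not exceed toBlock")
    params = {
        "module": "logs",
        "action": "getLogs",
        "address": address,
        "fromBlock": from_block,
        "toBlock": to_block,
        "page": page,
        "offset": offset,
        "apikey": apikey,
    }
    return f"{BASE}?{urlencode(params)}"
```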

get_event_logs_by_address_filtered_by_topics (grade: C)

Get Event Logs by Address filtered by Topics — Returns the event logs from an address, filtered by topics and block range.

Parameters (JSON Schema)
- page (optional): query parameter: page (number)
- topic (required): query parameter: topic ("topic0" | "topic1" | "topic2" | "topic3")
- offset (optional): query parameter: offset (number)
- address (required): query parameter: address (string)
- toBlock (required): query parameter: toBlock (number)
- fromBlock (required): query parameter: fromBlock (number)
- topicOperator (optional): query parameter: topicOperator ("and" | "or")
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to confirm if this is a read-only operation, does not explain the pagination behavior (despite `page` and `offset` parameters), and does not describe the structure of the returned event logs or whether the operation is rate-limited.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured as a single sentence with an em-dash, front-loading the action and resource. It avoids redundancy despite the tool name being descriptive, and every word serves to clarify the scope (address + topics + block range).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately identifies the return type (event logs) but should specify that this queries historical blockchain data and mention the pagination support implied by the parameters. It meets minimum viability but leaves significant gaps for a 7-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all 7 parameters including the `topic` indices (0-3) and `topicOperator` ('and'/'or'). The description mentions 'filtered by topics and block range' which aligns with the schema, but adds no additional semantic context (e.g., that `offset` controls pagination size) beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (event logs) and the dual filtering mechanism (by address and by topics), distinguishing it from siblings like `get_event_logs_by_address` or `get_event_logs_by_topics` which likely filter by only one criterion. However, it does not explicitly state this distinction for sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools `get_event_logs_by_address` or `get_event_logs_by_topics`, nor does it mention prerequisites like knowing the contract address or topic hashes. It simply states what the tool does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
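The `topic`/`topicOperator` interaction the schema leaves unexplained can be sketched as a parameter expander. Etherscan's raw getLogs endpoint takes `topic0`..`topic3` plus pairwise operator fields such as `topic0_1_opr`; this MCP tool exposes a single `topic` and `topicOperator` instead, so the exact mapping below is an assumption:

```python
def topic_filter_params(topics: dict, operator: str = "and") -> dict:
    """Expand topic filters into raw getLogs query parameters.

    `topics` maps positions ("topic0".."topic3") to 32-byte hex
    values; with two positions, a pairwise operator field is added.
    """
    if operator not in ("and", "or"):
        raise ValueError("topicOperator must be 'and' or 'or'")
    allowed = {"topic0", "topic1", "topic2", "topic3"}
    unknown = set(topics) - allowed
    if unknown:
        raise ValueError(f"unknown topic positions: {unknown}")
    params = dict(topics)
    positions = sorted(k[-1] for k in topics)  # e.g. ['0', '1']
    if len(positions) == 2:
        params[f"topic{positions[0]}_{positions[1]}_opr"] = operator
    return params
```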

get_event_logs_by_topics (grade: C)

Get Event Logs by Topics — Returns the events log in a block range, filtered by topics.

Parameters (JSON Schema)
- page (optional): query parameter: page (number)
- topic (required): query parameter: topic ("topic0" | "topic1" | "topic2" | "topic3")
- offset (optional): query parameter: offset (number)
- toBlock (required): query parameter: toBlock (number)
- fromBlock (required): query parameter: fromBlock (number)
- topicOperator (optional): query parameter: topicOperator ("and" | "or")
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not indicate whether this is a read-only operation, what the return format looks like (ABI-decoded vs raw), pagination behavior despite having page/offset parameters, or constraints on maximum block range queries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with two clauses, though the first clause essentially restates the tool name. No redundant or wasted sentences, but could be more information-dense given the lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description fails to compensate by describing the return structure (event log format), Ethereum-specific context, or behavioral constraints. For a 6-parameter blockchain data tool, this leaves significant gaps in the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing mechanical documentation for all 6 parameters. The description adds minimal semantic value beyond the schema, merely noting 'filtered by topics' without explaining the topic0-3 enumeration or how topicOperator logic (AND/OR) affects the query.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear action (Returns/Get) with specific resource (events log) and scope (block range, filtered by topics). However, it fails to distinguish from similar sibling tools like `get_event_logs_by_address_filtered_by_topics` or `get_event_logs_by_address`, leaving ambiguity about which event log tool to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the 15+ sibling transaction/log tools, particularly `get_event_logs_by_address_filtered_by_topics` which appears functionally similar. No prerequisites, rate limit warnings, or block range constraints are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_gas_oracle (grade: B)

Get Gas Oracle — Returns the current Safe, Proposed and Fast gas prices. Post EIP-1559 🔥 changes :Safe/Proposed/Fast gas price recommendations are now modeled as Priority Fees.New field suggestBaseFee , the baseFee of the next pending blockNew field gasUsedRatio, to estimate how busy the network isLearn more about the gas changes in EIP-1559.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It successfully explains the semantic meaning of return fields (modeling Safe/Proposed/Fast as Priority Fees, defining suggestBaseFee and gasUsedRatio), but fails to disclose read-only safety, error conditions, or rate limiting behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the length is appropriate, the formatting contains errors that reduce clarity: missing spaces ('changes :Safe', 'blockNew'), inconsistent punctuation, and run-on sentences without proper breaks between concepts (e.g., 'Priority Fees.New field').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description partially compensates by listing return fields (Safe/Proposed/Fast, suggestBaseFee, gasUsedRatio) and their meanings. However, it lacks data types, response structure details, or error response documentation that would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, triggering the baseline score of 4 per evaluation rules. The description correctly does not invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear action ('Returns') and specific resource ('Safe, Proposed and Fast gas prices') alongside EIP-1559-specific fields (suggestBaseFee, gasUsedRatio). However, it does not explicitly differentiate this from the sibling eth_gasprice tool, though the EIP-1559 details provide implicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives like eth_gasprice or eth_estimategas. It mentions 'Learn more about the gas changes' but provides no criteria for selecting this oracle over other gas price methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
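The response fields the description names (suggestBaseFee, gasUsedRatio) can be normalized client-side. A parsing sketch: the Safe/Propose/Fast field names and the comma-separated form of `gasUsedRatio` are assumptions based on Etherscan's documented gas oracle payload, not on this tool's description:

```python
def parse_gas_oracle(result: dict) -> dict:
    """Normalize a gas-oracle 'result' payload into numbers.

    Post EIP-1559, the Safe/Propose/Fast values are priority-fee
    recommendations; gasUsedRatio arrives as a comma-separated
    string of recent block utilizations.
    """
    return {
        "safe": float(result["SafeGasPrice"]),
        "propose": float(result["ProposeGasPrice"]),
        "fast": float(result["FastGasPrice"]),
        "base_fee": float(result["suggestBaseFee"]),
        "gas_used_ratio": [float(x) for x in result["gasUsedRatio"].split(",")],
    }
```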

get_historical_erc20_token_account_balance_for (grade: A)

Get Historical ERC20-Token Account Balance for TokenContractAddress by BlockNo — Returns the balance of an ERC-20 token of an address at a certain block height. 📝 Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier. PRO endpoint.

Parameters (JSON Schema)
- address (required): query parameter: address (string)
- blockno (required): query parameter: blockno (number)
- contractaddress (required): query parameter: contractaddress (string)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical operational constraints: the 2 calls/second rate limit and PRO endpoint requirement. However, it omits details on read-only safety confirmation, pagination, or invalid block number behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with two efficient sentences: the first states purpose and return value, the second front-loads the critical rate limit constraint. No redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with no output schema, the description adequately covers the return value ('Returns the balance'), required parameters, and tier restrictions. Minor gap in not describing the balance return format (wei, ether, etc.).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100%, the baseline descriptions ('query parameter: X') are tautological. The description adds semantic value by mapping 'contractaddress' to 'TokenContractAddress' and 'blockno' to historical block height context, clarifying the relationship between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('Historical ERC20-Token Account Balance'), clearly distinguishing from siblings like 'get_erc20_token_account_balance_for_tokencontractaddress' (current balance) and 'get_historical_ether_balance_for_a_single_address_by_blockno' (ETH vs ERC20) via explicit mention of 'Historical' and 'by BlockNo'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the use case (historical queries at a specific block height) through 'by BlockNo' and 'at a certain block height', but lacks explicit guidance on when to prefer this over the current-balance sibling or error handling guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
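The disclosed 2 calls/second throttle is the kind of constraint a caller should enforce locally rather than discover via errors. A minimal client-side throttle sketch (injectable clock/sleep are for testability, not part of any Etherscan API):

```python
import time

class Throttle:
    """Space successive calls to respect a calls-per-second cap."""

    def __init__(self, calls_per_second: float,
                 clock=time.monotonic, sleep=time.sleep):
        self.interval = 1.0 / calls_per_second
        self.clock = clock
        self.sleep = sleep
        self._last = None

    def wait(self):
        """Block until at least `interval` has passed since the
        previous call; first call proceeds immediately."""
        now = self.clock()
        if self._last is not None:
            remaining = self.interval - (now - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

Calling `throttle.wait()` before each request keeps an agent under the documented 2 calls/second ceiling regardless of Pro tier.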

get_historical_erc20_token_totalsupply_by_contractaddress_and (grade: A)

Get Historical ERC20-Token TotalSupply by ContractAddress & BlockNo — Returns the amount of an ERC-20 token in circulation at a certain block height. 📝 Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier. PRO endpoint.

Parameters (JSON Schema)
- blockno (required): query parameter: blockno (number)
- contractaddress (required): query parameter: contractaddress (string)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully reveals the rate limiting (2 calls/second) and tier requirements (PRO endpoint). It also explains the return value conceptually ('amount of an ERC-20 token in circulation'). It lacks detail on error conditions or maximum queryable block depth, but covers the critical operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently front-loaded with the main action and scope, followed by a clear em-dash explanation, then a note section for operational constraints. The only structural awkwardness is the trailing 'PRO endpoint' fragment, which could be integrated into the note sentence for smoother flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately explains what is returned ('amount...in circulation'). The schema is fully documented (100% coverage), and the description supplements this with critical operational context (throttling, PRO tier). It sufficiently covers a read-only data retrieval endpoint without overstating unknown behaviors.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters (contractaddress and blockno), establishing a baseline of 3. The description mentions 'ContractAddress & BlockNo' which reinforces the parameter purposes, but does not add significant semantic details like address format requirements or whether blockno accepts 'latest' vs strictly historical numbers beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get Historical ERC20-Token TotalSupply by ContractAddress & BlockNo' with specific verb (Get) and resource (Historical ERC20-Token TotalSupply). It clearly distinguishes from sibling get_erc20_token_totalsupply_by_contractaddress by emphasizing 'Historical' and 'BlockNo', and from get_historical_erc20_token_account_balance_for by specifying 'TotalSupply' rather than account balance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies this as a 'PRO endpoint' and discloses the 2 calls/second throttling limit, which aids in usage decisions. However, it lacks explicit guidance on when to use this versus the non-historical get_erc20_token_totalsupply_by_contractaddress alternative (i.e., when historical block-specific data is required vs current state).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_historical_ether_balance_for_a_single_address_by_blockno (grade: A)

Get Historical Ether Balance for a Single Address By BlockNo — Returns the balance of an address at a certain block height. 📝 Note : This endpoint is throttled to 2 calls/second regardless of API Pro tier. PRO endpoint.

Parameters (JSON Schema)
- address (required): query parameter: address (string)
- blockno (required): query parameter: blockno (number)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical behavioral constraints: the rate limit ('throttled to 2 calls/second') and access tier requirements ('PRO endpoint'). However, it omits details about return value format (wei vs ether) or error behavior for future blocks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with the purpose front-loaded in the first clause, followed by a clearly demarcated note section for operational constraints. No sentences are wasted; every element serves either functional documentation or critical usage constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the two straightforward parameters and lack of output schema, the description adequately covers operational context through the rate limiting and PRO tier disclosures. A minor gap remains regarding the return value format (e.g., whether balance is returned in wei or ether).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage, the descriptions are tautological ('query parameter: address'). The description adds valuable semantic context by mapping 'blockno' to the concept of 'block height', aiding agent understanding of what the number represents in the blockchain context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Returns the balance') and resource ('address at a certain block height'), and effectively distinguishes this from the sibling tool 'get_ether_balance_for_a_single_address' by emphasizing 'Historical' and 'By BlockNo' functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the name and 'Historical' label imply this is for past state lookups versus current balance, there is no explicit guidance on when to use this tool versus the sibling 'get_ether_balance_for_a_single_address', nor any mention of prerequisites like knowing the specific block number in advance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_internal_transactions_by_block_range (B)

Get "Internal Transactions" by Block Range — Returns the list of internal transactions performed within a block range, with optional pagination. 📝 Note: This API endpoint returns a maximum of 10,000 records only.

Parameters (JSON Schema)
page (optional): query parameter: page (number)
sort (optional): query parameter: sort ("asc" | "desc")
offset (optional): query parameter: offset (number)
endblock (required): query parameter: endblock (number)
startblock (required): query parameter: startblock (number)
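As a hypothetical sketch, a paginated request to this endpoint could be assembled as follows. The base URL and the `module=account` / `action=txlistinternal` pair follow the public Etherscan convention but are assumptions here, and `YourApiKeyToken` is a placeholder:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.etherscan.io/api"  # assumed Etherscan-style base URL


def internal_txs_by_block_range_url(startblock: int, endblock: int,
                                    page: int = 1, offset: int = 100,
                                    sort: str = "asc",
                                    apikey: str = "YourApiKeyToken") -> str:
    """Build the query URL. The endpoint caps results at 10,000 records,
    so page * offset should stay at or below that ceiling."""
    if sort not in ("asc", "desc"):
        raise ValueError("sort must be 'asc' or 'desc'")
    params = {
        "module": "account", "action": "txlistinternal",
        "startblock": startblock, "endblock": endblock,
        "page": page, "offset": offset, "sort": sort, "apikey": apikey,
    }
    return f"{BASE_URL}?{urlencode(params)}"
```

The helper only constructs the URL; fetching and rate-limit handling are left to the caller.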
Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses the 10,000 record limit and pagination behavior, but fails to mention other critical behavioral traits like read-only nature, rate limiting, or the specific structure/content of internal transaction records.

Conciseness: 4/5

The description is appropriately front-loaded with the core action, followed by pagination context and a clearly marked note about the record limit. The structure is efficient with minimal redundancy, though the emoji formatting is optional.

Completeness: 3/5

Given the absence of an output schema, the description adequately mentions that it returns a 'list of internal transactions' and notes the pagination limit. However, it lacks detail on what constitutes an internal transaction or what fields the returned records contain.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'optional pagination' which loosely corresponds to the page/offset parameters, and 'block range' covering startblock/endblock, but does not add significant semantic detail beyond the schema's generic 'query parameter' descriptions.

Purpose: 4/5

The description clearly states the tool retrieves internal transactions by block range using specific verbs ('Get', 'Returns'). It distinguishes from siblings like 'get_a_list_of_internal_transactions_by_address' by explicitly scoping the operation to 'Block Range', though it does not explicitly name the alternative tools.

Usage Guidelines: 2/5

The description mentions 'optional pagination' which implies handling large result sets, but provides no explicit guidance on when to use this block-range query versus the address-based or transaction-hash-based sibling tools. No 'when-not' or prerequisite guidance is included.

get_internal_transactions_by_transaction_hash (B)

Get 'Internal Transactions' by Transaction Hash — Returns the list of internal transactions performed within a transaction. 📝 Note : This API endpoint returns a maximum of 10000 records only.

Parameters (JSON Schema)
txhash (required): query parameter: txhash (string)
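By analogy with the block-range variant, the hash-keyed lookup takes only a transaction hash. A hypothetical sketch; the `module`/`action` names follow the public Etherscan convention and the 0x-prefixed 32-byte hash format is an assumption:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.etherscan.io/api"  # assumed Etherscan-style base URL


def internal_txs_by_hash_url(txhash: str,
                             apikey: str = "YourApiKeyToken") -> str:
    """Build the lookup URL for internal transactions within one transaction."""
    if not (txhash.startswith("0x") and len(txhash) == 66):
        raise ValueError("txhash must be a 0x-prefixed 32-byte hex string")
    params = {"module": "account", "action": "txlistinternal",
              "txhash": txhash, "apikey": apikey}
    return f"{BASE_URL}?{urlencode(params)}"
```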
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure and successfully notes the 10,000 record maximum limit. It states that the tool returns a list of internal transactions performed within the specified transaction. However, it omits details about pagination, error handling for invalid hashes, or the structure of the returned records.

Conciseness: 5/5

The description efficiently conveys the tool's purpose and limitation in two sentences without redundant wording. The emoji note marker draws attention to the record limit without cluttering the text. Every sentence serves a distinct purpose: the first defines the operation and return value, while the second specifies the constraint.

Completeness: 4/5

For a single-parameter retrieval tool without output schema, the description adequately covers the operation scope and result limitations. It appropriately omits output field details since no output schema exists to constrain them. The description could be improved by briefly defining internal transactions to distinguish them from regular transaction data available through sibling tools.

Parameters: 3/5

The input schema has 100% description coverage with the txhash parameter fully documented including an example value. The description does not add semantic details about the parameter format or validation rules beyond what the schema provides. Given the complete schema coverage, the description appropriately focuses on behavioral aspects rather than repeating parameter documentation.

Purpose: 4/5

The description states the tool retrieves internal transactions using a transaction hash as the lookup key, distinguishing it from sibling tools like get_a_list_of_internal_transactions_by_address. It identifies the specific resource (internal transactions) and the access method (by transaction hash). However, it assumes familiarity with Ethereum internal transactions rather than defining them as contract-to-contract message calls.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives such as get_a_list_of_internal_transactions_by_address or get_internal_transactions_by_block_range. It lacks prerequisites or conditions that would help an agent select this endpoint over other transaction-related tools. No comparison to eth_gettransactionbyhash is provided to clarify the difference between regular and internal transactions.

get_list_of_blocks_validated_by_address (C)

Get list of Blocks Validated by Address — Returns the list of blocks validated by an address.

Parameters (JSON Schema)
page (optional): query parameter: page (number)
offset (optional): query parameter: offset (number)
address (required): query parameter: address (string)
blocktype (required): query parameter: blocktype ("blocks" | "uncles")
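Since the description gives no guidance on the `blocktype` enum, a caller would have to enforce it client-side. A hypothetical sketch; the `action=getminedblocks` name follows the public Etherscan convention but is an assumption here:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.etherscan.io/api"  # assumed Etherscan-style base URL


def blocks_validated_url(address: str, blocktype: str = "blocks",
                         page: int = 1, offset: int = 10,
                         apikey: str = "YourApiKeyToken") -> str:
    """blocktype selects canonical blocks ('blocks') or uncle blocks ('uncles')."""
    if blocktype not in ("blocks", "uncles"):
        raise ValueError("blocktype must be 'blocks' or 'uncles'")
    params = {"module": "account", "action": "getminedblocks",
              "address": address, "blocktype": blocktype,
              "page": page, "offset": offset, "apikey": apikey}
    return f"{BASE_URL}?{urlencode(params)}"
```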
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It does not explain pagination behavior (despite having `page` and `offset` parameters), rate limits, authentication requirements, or what the return structure looks like (array of block hashes vs. full objects).

Conciseness: 3/5

The description is brief but structurally redundant, using an em-dash to separate a title fragment from a sentence that essentially repeats the same information ('Get list...' vs 'Returns the list...'). While not verbose, the redundancy means not every clause earns its place.

Completeness: 2/5

Given the lack of output schema and annotations, the description should compensate by explaining the return format and key behavioral constraints. With four parameters including pagination controls, the tool likely returns large datasets, yet the description provides no warning about result size limits or data structure, leaving significant gaps.

Parameters: 3/5

The input schema has 100% description coverage, establishing a baseline score. The description mentions 'by an address' which aligns with the required `address` parameter, but adds no context about the pagination parameters (`page`, `offset`) or how `blocktype` filters the results beyond the schema's own description.

Purpose: 3/5

The description states the tool retrieves blocks validated by an address, but it largely restates the tool name with minimal elaboration. It does not distinguish this from sibling tools like `eth_getblockbynumber` or `get_block_number_by_timestamp`, nor does it clarify what 'validated' means in context (e.g., mining vs. staking).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., address format requirements) or suggest when `blocktype` should be 'blocks' vs 'uncles', leaving the agent to infer usage from parameter names alone.

get_token_holder_list_by_contract_address (B)

Get Token Holder List by Contract Address — Returns the current ERC20 token holders and the number of tokens held.

Parameters (JSON Schema)
page (optional): query parameter: page (number)
offset (optional): query parameter: offset (number)
contractaddress (required): query parameter: contractaddress (string)
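Because holder lists can be large and sibling endpoints on this server cap results at 10,000 records, a caller would typically page through results. A hypothetical helper that plans the page/offset pairs up to an assumed ceiling (the 10,000 cap is not stated for this tool; it is borrowed from the sibling endpoints as an assumption):

```python
def holder_pages(total_wanted: int, offset: int = 1000, cap: int = 10_000):
    """Yield (page, offset) pairs for paging through a holder list,
    stopping once total_wanted or the assumed record ceiling is reached."""
    fetched = 0
    page = 1
    while fetched < min(total_wanted, cap):
        yield page, offset
        fetched += offset
        page += 1
```

Each yielded pair maps directly onto the tool's `page` and `offset` query parameters.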
Behavior: 3/5

With no annotations provided, the description must carry the full behavioral disclosure burden. It successfully indicates the data returned (current holders and token quantities) but fails to explain operational behaviors like pagination limits, sorting order, or how non-ERC20 contracts are handled. The mention of 'current' provides some temporal context.

Conciseness: 4/5

The description efficiently delivers the essential information in two clauses without excessive verbosity, though the repetition between 'Get Token Holder List' and 'Return... token holders' is slightly redundant. The structure appropriately front-loads the key resource identifier.

Completeness: 3/5

Given the lack of output schema and annotations, the description adequately covers the return data conceptually (holders and balances) but omits practical details essential for correct invocation, such as pagination behavior or the typical scale of results. It meets minimum viability but leaves operational gaps.

Parameters: 3/5

The input schema has 100% description coverage (even if minimal/tautological), establishing a baseline score of 3. The description adds no supplemental semantic information about the pagination parameters (page/offset) or the contract address format expectations beyond what the schema provides.

Purpose: 4/5

The description clearly specifies the operation (retrieve/return) and resource (ERC20 token holders and their balances), including the 'ERC20' standard to distinguish from NFT token tools. However, it misses the opportunity to explicitly differentiate from the inverse sibling operation `get_address_erc20_token_holding`, which could cause confusion about whether this queries token holdings for an address or holders of a token contract.

Usage Guidelines: 2/5

The description provides no guidance on when to select this tool versus alternatives like `get_erc20_token_totalsupply_by_contractaddress` or `get_address_erc20_token_holding`, nor does it mention prerequisites such as needing a verified contract address or handling pagination for large holder lists.

get_token_info_by_contractaddress (A)

Get Token Info by ContractAddress — Returns project information and social media links of an ERC20/ERC721/ERC1155 token. 📝 Note: This is a PRO endpoint, throttled to 2 calls/second regardless of API Pro tier.

Parameters (JSON Schema)
contractaddress (required): query parameter: contractaddress (string)
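Given the disclosed 2 calls/second throttle, a well-behaved client would pace its own requests. A minimal client-side limiter sketch (production code would also handle HTTP 429 responses from the server, which this page does not document):

```python
import time


class RateLimiter:
    """Client-side pacing for a documented calls-per-second ceiling."""

    def __init__(self, calls_per_second: float = 2.0):
        self.min_interval = 1.0 / calls_per_second
        self._last = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the minimum call interval."""
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Call `wait()` before each request to the throttled endpoint.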
Behavior: 4/5

With no annotations provided, the description carries full behavioral burden. It successfully discloses critical runtime constraints: 'throttled to 2 calls/second regardless of API Pro tier' and 'PRO endpoint' access tier. Missing only error-handling behavior or data freshness details.

Conciseness: 5/5

Efficient two-sentence structure: the first sentence front-loads purpose and return value categories; the second delivers critical operational constraints (throttling). No redundant text or filler content.

Completeness: 4/5

Despite lacking output schema and annotations, the description compensates by outlining return value categories ('project information and social media links') and operational constraints (rate limits, PRO tier). The only minor gap is the lack of error scenario documentation.

Parameters: 3/5

Schema coverage is 100% with the parameter described as 'query parameter: contractaddress (string)'. The description implies the address refers to ERC20/ERC721/ERC1155 contracts, adding token-standard context. With high schema coverage, the baseline of 3 is appropriate as the schema adequately documents the input format.

Purpose: 5/5

The description explicitly states it 'Returns project information and social media links' for ERC20/ERC721/ERC1155 tokens. The specific resource (project info/social links) clearly distinguishes it from siblings like get_erc20_token_totalsupply_by_contractaddress (supply data) or get_contract_source_code_for_verified_contract_source_codes (technical code).

Usage Guidelines: 3/5

Specifies applicable token standards (ERC20/ERC721/ERC1155) and notes it's a 'PRO endpoint,' implying access requirements. However, it lacks explicit guidance on when to use this versus get_contract_creator_and_creation_tx_hash or other contract metadata tools, and doesn't state exclusion criteria.

get_total_nodes_count (B)

Get Total Nodes Count — Returns the total number of discoverable Ethereum nodes.

Parameters (JSON Schema)
No parameters

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify the return format (integer, string, object?), the definition of 'discoverable,' caching behavior, or whether the count is estimated or exact.

Conciseness: 4/5

The single-sentence structure is efficient and front-loaded. Minor deduction for tautologically including the tool name at the start rather than providing additional semantic value.

Completeness: 3/5

While the description identifies what quantity is returned (total count), it lacks the output format specification that would be helpful given the absence of an output schema. Adequate but minimal for a simple read-only endpoint.

Parameters: 4/5

The input schema contains zero parameters, triggering the baseline score of 4 per evaluation rules. The description correctly implies no filtering or input is required.

Purpose: 4/5

The description clearly states it returns the count of discoverable Ethereum nodes, distinguishing it from sibling tools like `get_ethereum_nodes_size` (which implies storage metrics). However, it wastefully restates the tool name at the start ('Get Total Nodes Count') rather than diving straight into the specific resource being accessed.

Usage Guidelines: 2/5

No guidance is provided on when to use this versus querying specific node details, or whether this reflects real-time data versus cached estimates. No mention of rate limits or network specificity (mainnet vs testnet).

get_total_supply_of_ether (A)

Get Total Supply of Ether — Returns the current amount of Ether in circulation excluding ETH2 Staking rewards and EIP1559 burnt fees.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the calculation scope and exclusions but lacks details on return value format, units (wei vs. ether), caching behavior, or rate limiting typical for blockchain APIs.

Conciseness: 5/5

A single efficient sentence front-loaded with the action ('Get Total Supply') and immediately qualified with the return value description and exclusions. No redundant or wasted text.

Completeness: 4/5

For a zero-parameter supply query tool, the description provides adequate domain context regarding what the metric represents. Minor gap: it does not specify return value format or units, though this is somewhat mitigated by the specific numerical context provided.

Parameters: 4/5

The input schema contains zero properties, meeting baseline expectations. The description correctly implies this is a parameterless retrieval operation requiring no user inputs.

Purpose: 5/5

The description clearly identifies the resource (Ether in circulation) and specific calculation methodology (excluding ETH2 staking rewards and EIP1559 burnt fees), which effectively distinguishes this from the sibling tool 'get_total_supply_of_ether_2'.

Usage Guidelines: 3/5

The specification of exclusions (ETH2 rewards and burnt fees) provides implicit guidance on when to use this endpoint versus alternatives, but lacks explicit 'when to use' statements or direct comparison to the _2 sibling variant.

get_total_supply_of_ether_2 (A)

Get Total Supply of Ether 2 — Returns the current amount of Ether in circulation, ETH2 Staking rewards, EIP1559 burnt fees, and total withdrawn ETH from the beacon chain.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses what data is returned (circulation, rewards, fees, withdrawals) but lacks operational details like caching behavior, data freshness, or error conditions.

Conciseness: 5/5

A single sentence with em-dash structure efficiently separates the action from the specific return components. Every clause provides specific value with no redundancy.

Completeness: 4/5

No output schema exists, but the description compensates by enumerating the four specific data components returned. Adequate completeness for a parameterless data retrieval tool.

Parameters: 4/5

The input schema has zero parameters, establishing a baseline score of 4. The description does not need to elaborate on parameters.

Purpose: 5/5

Specific verb 'Returns' and clear resource identification. It explicitly lists ETH2 components (staking rewards, EIP1559 burnt fees, beacon chain withdrawals) that distinguish it from the sibling get_total_supply_of_ether.

Usage Guidelines: 3/5

Implies usage for ETH2-era supply data through component listing (EIP1559, beacon chain), but does not explicitly state when to choose this over the sibling get_total_supply_of_ether or provide explicit selection criteria.

verify_source_code (A)

Verify Source Code — Submits a contract source code to an Etherscan-like explorer for verification. 🌐 Tutorial: A full walk-through of submitting multichain contract verification. 📝 Note: This endpoint is limited to 100 verifications/day, regardless of API PRO tier.

Parameters (JSON Schema)
No parameters
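The sibling check_source_code_verification_status tool implies a submit-then-poll workflow. A hypothetical polling loop written against an injected status callable, so the HTTP transport stays out of scope; the GUID handle and the 'Pending in queue' / 'Pass - Verified' status strings follow common Etherscan behavior but are assumptions here:

```python
import time
from typing import Callable


def poll_verification(check_status: Callable[[str], str], guid: str,
                      attempts: int = 10, delay: float = 1.0) -> str:
    """Poll a status callable until verification resolves or attempts run out.
    'Pending in queue' is treated as in-progress; any other status is final."""
    for _ in range(attempts):
        status = check_status(guid)
        if status != "Pending in queue":
            return status
        time.sleep(delay)
    raise TimeoutError(f"verification {guid} still pending after {attempts} checks")
```

Keeping the transport behind a callable makes the loop trivially testable with a stubbed status sequence.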

Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It successfully communicates rate limiting and API tier behavior, but omits critical mutation details such as whether the operation is asynchronous (implied by the existence of the status-check sibling), what the response contains, or required authentication scopes.

Conciseness: 4/5

The description is appropriately sized and front-loaded with the core purpose. The emoji-prefixed notes (Tutorial and Note) provide structured supplemental information without excessive verbosity, though the tutorial reference is vague and the emoji formatting slightly unconventional.

Completeness: 3/5

Given the high complexity of multichain contract verification and the absence of output schema, parameters, and annotations, the description provides minimal viable context (rate limits, domain reference). However, it lacks essential operational details like expected inputs, return format, or the async nature of the verification process.

Parameters: 4/5

The input schema contains zero parameters (empty properties object), which establishes a baseline score of 4 per evaluation rules. The description mentions 'contract source code' submission conceptually but does not compensate for the empty schema by explaining input requirements or payload structure.

Purpose: 5/5

The description clearly states the specific action ('Submits') and resource ('contract source code') with context ('Etherscan-like explorer for verification'). It effectively distinguishes from the sibling 'check_source_code_verification_status' by emphasizing submission vs. checking status.

Usage Guidelines: 3/5

While it provides crucial rate limit guidance ('limited to 100 verifications/day'), it fails to explicitly state when to use this tool versus the 'check_source_code_verification_status' sibling, or clarify that verification is typically asynchronous and requires subsequent status checks.
