Server Details

Ethereum MCP: Chainlink feeds, gas, ERC-20, ENS, ABI, contract calls, ERC-8004, EAS.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tenzro/tenzro-network
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 3.8/5 across 16 of 16 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose, covering different aspects of Ethereum interaction such as balances, transactions, gas, ENS, contracts, and agent registry. No significant overlap exists.

Naming Consistency: 5/5

All tools follow a consistent 'eth_verb_noun' pattern with underscores, such as 'eth_get_balance', 'eth_encode_function', and 'eth_lookup_ens'. The naming is uniform and predictable.

Tool Count: 5/5

The 16 tools cover a broad range of Ethereum operations without being overwhelming. The scope is well-suited for a general-purpose Ethereum MCP server.

Completeness: 3/5

While the set covers many read operations and building calldata for agent registration, it lacks a tool to actually send signed transactions (e.g., eth_sendRawTransaction), creating a dead end for state-changing actions. Log retrieval (e.g., eth_getLogs) is also missing.

Available Tools (16 tools)
eth_call_contract (Grade: A)

Execute a read-only eth_call against a smart contract. Params: to (contract address), data (hex-encoded calldata), block (default 'latest'). Returns the raw hex result. Use eth_encode_function to build calldata from a function signature and arguments.

Parameters (JSON Schema)
- to (required): Contract address (hex, with or without 0x prefix)
- data (required): Hex-encoded calldata (with or without 0x prefix)
- block (optional): Block parameter ('latest', 'earliest', 'pending', or hex block number). Default: 'latest'
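As a sketch of what such a read-only call looks like at the JSON-RPC level, the request body the server would send can be built like this. The helper name and the totalSupply() example are illustrative, not part of this server; no network I/O happens here.

```python
import json

def build_eth_call_payload(to: str, data: str, block: str = "latest") -> str:
    """Build a JSON-RPC eth_call request body (no network I/O)."""
    # Normalize address and calldata to a 0x prefix, as the tool accepts both forms.
    to = to if to.startswith("0x") else "0x" + to
    data = data if data.startswith("0x") else "0x" + data
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": to, "data": data}, block],
    })

# Example: calling totalSupply() (selector 0x18160ddd) on an illustrative contract address.
payload = build_eth_call_payload("d8dA6BF26964aF9D7eEd9e03E53415D37aA96045", "0x18160ddd")
```

The payload would then be POSTed to an RPC endpoint; the hex string in the response's `result` field is what this tool returns.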
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must fully convey behavioral traits. While it notes the call is read-only, it omits details on error handling, rate limits, authentication needs, or whether gas is consumed. For a tool with no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences cover purpose, parameters, and usage hint. No unnecessary words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions 'Returns the raw hex result,' which is adequate. However, it lacks details on possible errors, block parameter constraints, or that eth_call is free. It is minimally complete for a simple tool but could be more thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and each parameter is well-documented in schema. The description merely paraphrases these (e.g., 'Params: to (contract address)'). It adds no new semantic information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Execute a read-only eth_call against a smart contract' and lists key parameters (to, data, block). It also references a sibling tool (eth_encode_function) for building calldata, clearly distinguishing its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description advises using eth_encode_function to build calldata, indicating when this tool is appropriate. It could be more explicit about when not to use it (e.g., for state-changing calls), but the guidance is clear and relevant.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_encode_function (Grade: A)

ABI-encode a function call. Computes the 4-byte selector from the canonical function signature via Keccak-256, then left-pads each argument to 32 bytes. Returns the complete hex-encoded calldata ready for eth_call or a transaction. Example: function_sig='transfer(address,uint256)', args=['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xde0b6b3a7640000'].

Parameters (JSON Schema)
- args (required): Arguments as JSON array of hex-encoded values (each will be left-padded to 32 bytes). E.g. ['0xRecipientAddress', '0xde0b6b3a7640000']
- function_sig (required): Canonical function signature (e.g. 'transfer(address,uint256)', 'approve(address,uint256)', 'balanceOf(address)')
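The encoding the description outlines can be sketched as follows. Since Keccak-256 is not in the Python standard library, this sketch hardcodes the well-known selector a9059cbb for transfer(address,uint256) instead of computing it; the helper name is illustrative.

```python
def encode_call(selector_hex: str, args: list[str]) -> str:
    """Concatenate a 4-byte selector with each argument left-padded to 32 bytes."""
    out = bytes.fromhex(selector_hex)
    for arg in args:
        h = arg.removeprefix("0x")
        if len(h) % 2:  # bytes.fromhex needs an even-length hex string
            h = "0" + h
        out += bytes.fromhex(h).rjust(32, b"\x00")  # left-pad to one 32-byte ABI word
    return "0x" + out.hex()

# transfer(address,uint256): selector = keccak256("transfer(address,uint256)")[:4] = a9059cbb
calldata = encode_call(
    "a9059cbb",
    ["0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", "0xde0b6b3a7640000"],
)
```

The result (4 selector bytes plus two 32-byte words, 138 hex characters with the 0x prefix) is exactly the shape eth_call_contract expects in its data parameter.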
Behavior: 5/5

With no annotations, the description fully discloses behavior: it describes Keccak-256 hashing, left-padding of arguments to 32 bytes, and return of hex-encoded calldata. An example is provided. No contradictions.

Conciseness: 5/5

The description is three sentences: purpose, process, and example. It is concise and front-loaded with the main action. Every sentence adds value, no redundancy.

Completeness: 5/5

Despite no output schema, the description states the return value (hex-encoded calldata). It also explains the process sufficiently for an agent to understand inputs and output. The tool has only 2 parameters and no nested objects, so the description covers all needed context.

Parameters: 3/5

Schema coverage is 100%, so the schema already describes both parameters. The description adds an example but does not provide additional semantics beyond what the schema offers. Baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states that it will 'ABI-encode a function call' and explains the process: computing the 4-byte selector, padding arguments, and returning hex-encoded calldata. This distinguishes it from sibling tools like eth_call_contract, which executes the call rather than encoding it.

Usage Guidelines: 3/5

The description does not explicitly state when to use this tool versus alternatives. While it implies usage for preparing calldata for eth_call or a transaction, it lacks guidance on when not to use it or mention of sibling tools.

eth_estimate_gas (Grade: A)

Estimate gas required for a transaction via eth_estimateGas. Params: to (required), from/data/value (optional). Returns estimated gas in decimal and hex.

Parameters (JSON Schema)
- to (required): Recipient/contract address (hex, with or without 0x prefix)
- data (optional): Hex-encoded calldata (with or without 0x prefix)
- from (optional): Sender address (hex, with or without 0x prefix)
- value (optional): Value in wei (hex string, e.g. '0xde0b6b3a7640000' for 1 ETH)
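Converting the hex result to the decimal form the tool reports is a one-liner. The safety margin shown here is a common client-side convention, not something the tool itself applies; the helper name is illustrative.

```python
def decode_gas_estimate(hex_quantity: str, margin: float = 1.2) -> dict:
    """Convert an eth_estimateGas hex result to decimal, with an optional safety margin."""
    gas = int(hex_quantity, 16)
    return {"gas": gas, "gas_hex": hex(gas), "gas_with_margin": int(gas * margin)}

est = decode_gas_estimate("0x5208")  # 0x5208 = 21000, the fixed cost of a plain ETH transfer
```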
Behavior: 3/5

The description mentions the return format (decimal and hex), which adds value beyond the input schema. However, with no annotations provided, it lacks explicit statements about safety (e.g., read-only nature) or potential side effects. The word 'estimate' implies no state change, but direct disclosure is absent.

Conciseness: 5/5

The description is exceptionally concise, using two sentences to cover purpose, parameters, and return format. It is front-loaded with the core action, making it efficient for an AI agent to parse.

Completeness: 4/5

Despite lacking an output schema, the description explains the return format (decimal and hex). It covers the required parameter and optional ones adequately. It could mention that the estimate is approximate, but overall it provides sufficient context for a simple estimation tool.

Parameters: 3/5

Schema coverage is 100%, so the description contributes minimal new meaning beyond restating which parameters are required or optional. While it summarizes effectively, it does not elaborate on parameter interactions or format nuances beyond what the schema already provides.

Purpose: 5/5

The description clearly states the verb 'estimate' and the resource 'gas required for a transaction', directly naming the underlying JSON-RPC method. It also lists parameters, distinguishing it from siblings like eth_get_gas_price or eth_get_fee_history, which serve different purposes.

Usage Guidelines: 3/5

While the description implies usage for estimating gas before a transaction, it does not explicitly state when to use this tool versus alternatives like eth_get_gas_price or provide context such as prerequisites or typical workflows. A more explicit guideline would improve clarity.

eth_get_attestation (Grade: A)

Query an attestation from Ethereum Attestation Service (EAS) by UID. Posts a GraphQL query to the EAS indexer at easscan.org. Returns attester, recipient, schema, data, timestamp, revocation status, and decoded data when available.

Parameters (JSON Schema)
- uid (required): Attestation UID (bytes32 hex, with or without 0x prefix)
Behavior: 4/5

Discloses that it posts a GraphQL query to an external site (easscan.org) and lists return fields (attester, recipient, schema, etc.). No annotations provided, but description compensates with implementation details. Lacks mention of any side effects or permissions, but for a read query this is sufficient.

Conciseness: 5/5

Two concise sentences: the first states the purpose, the second adds implementation details and return fields. No unnecessary words, front-loaded with key information.

Completeness: 5/5

A simple tool with one parameter and no output schema. The description covers purpose, input format, implementation (GraphQL to the EAS indexer), and key return fields. Complete for agent invocation.

Parameters: 3/5

Only one parameter (uid), with the schema description specifying the format (bytes32 hex, with or without 0x). The tool description does not add further parameter context. Since schema coverage is 100%, the baseline of 3 is appropriate.

Purpose: 5/5

Clearly states that it queries an attestation by UID, specifying the exact action and resource. This distinguishes it from sibling tools (e.g., eth_get_balance, eth_get_transaction), which deal with different blockchain data.

Usage Guidelines: 3/5

Describes what the tool does and mentions that it queries an external indexer, but does not explicitly state when to use or avoid it, nor mention alternatives. No siblings directly overlap, so implicit differentiation is acceptable but not explicit.

eth_get_balance (Grade: B)

Get the ETH balance of an address via eth_getBalance. Returns balance in wei, Gwei, and ETH.

Parameters (JSON Schema)
- block (optional): Block parameter ('latest', 'earliest', 'pending', or hex block number). Default: 'latest'
- address (required): Ethereum address (hex, with or without 0x prefix)
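The three-unit conversion the tool reports can be reproduced exactly (1 ETH = 10^18 wei, 1 Gwei = 10^9 wei). The helper name is illustrative; Decimal avoids float rounding for large wei values.

```python
from decimal import Decimal

def format_balance(wei: int) -> dict:
    """Express a wei balance in the three units the tool reports."""
    return {
        "wei": wei,
        "gwei": Decimal(wei) / 10**9,
        "eth": Decimal(wei) / 10**18,
    }

bal = format_balance(1_500_000_000_000_000_000)  # 1.5 ETH expressed in wei
```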
Behavior: 3/5

No annotations are provided, so the description carries the burden. It discloses the return format (wei, Gwei, ETH) but no other behavioral traits. Adequate for a simple read-only operation.

Conciseness: 5/5

Two sentences, front-loaded with key information, no wasted words.

Completeness: 4/5

No output schema, but the description explains the return format in three units. Enough for a simple balance check; no missing information.

Parameters: 3/5

Schema description coverage is 100%, so the schema already explains the parameters. The description adds no additional meaning beyond the schema.

Purpose: 4/5

Clearly states 'Get the ETH balance of an address' with specific return units. However, it does not distinguish itself from sibling tools like eth_get_token_balance.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, or on any prerequisites. It only implies usage for ETH balance checks.

eth_get_block (Grade: A)

Get block by number via eth_getBlockByNumber. Params: block_number (hex or 'latest'), full_transactions (bool, default false). Returns block header, transactions, and metadata.

Parameters (JSON Schema)
- block_number (optional): Block number as hex string (e.g. '0x10d4f') or tag ('latest', 'earliest', 'pending'). Default: 'latest'
- full_transactions (optional): If true, return full transaction objects instead of just hashes. Default: false
Behavior: 2/5

No annotations are provided, so the description must fully disclose behavioral traits. It only states that the tool 'returns block header, transactions, and metadata' without mentioning safety, idempotency, or potential side effects. As a read operation it is safe, but this is not explicitly stated.

Conciseness: 5/5

The description is concise: two sentences that front-load the purpose. Every sentence adds information without waste.

Completeness: 3/5

The description covers parameters and return values at a high level but lacks detail on the structure of the returned block header and metadata. Since there is no output schema, more detail would be beneficial.

Parameters: 4/5

The input schema has 100% description coverage, but the description adds value by clarifying parameter types (e.g., 'hex or latest') and defaults. It complements the schema without redundancy.

Purpose: 5/5

The description clearly states the tool's purpose: 'Get block by number via eth_getBlockByNumber.' It specifies the resource (block) and action (get), distinguishing it from sibling tools like eth_get_transaction or eth_get_balance.

Usage Guidelines: 3/5

The description implies usage by listing parameters but does not provide explicit guidance on when to use this tool versus alternatives. No 'when not to use' or comparison with siblings is given.

eth_get_fee_history (Grade: A)

Get fee history for recent blocks via eth_feeHistory. Returns base fees per gas, gas used ratios, and reward percentiles for EIP-1559 gas estimation.

Parameters (JSON Schema)
- block_count (optional): Number of blocks to return (default '5')
- newest_block (optional): Newest block ('latest' by default, or hex block number)
- reward_percentiles (optional): Reward percentiles as JSON array of floats (e.g. [25.0, 50.0, 75.0])
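One way the returned data feeds into EIP-1559 estimation, sketched with illustrative numbers and a simple doubling heuristic for base-fee headroom. This is a common client-side pattern, not necessarily what any particular wallet does; the helper name is made up.

```python
def suggest_fees(base_fees, rewards, percentile_idx=1):
    """Derive an EIP-1559 fee suggestion (in wei) from eth_feeHistory-shaped data."""
    # Average the chosen reward-percentile column across the sampled blocks.
    tips = [block[percentile_idx] for block in rewards]
    priority = sum(tips) // len(tips)
    # Double the latest base fee as headroom against base-fee growth, then add the tip.
    max_fee = 2 * base_fees[-1] + priority
    return {"maxPriorityFeePerGas": priority, "maxFeePerGas": max_fee}

# Illustrative numbers: 3 blocks of base fees, rewards at the [25th, 50th, 75th] percentiles.
fees = suggest_fees(
    base_fees=[30_000_000_000, 31_000_000_000, 32_000_000_000],
    rewards=[
        [1_000_000_000, 2_000_000_000, 3_000_000_000],
        [1_000_000_000, 2_000_000_000, 3_000_000_000],
        [1_000_000_000, 2_000_000_000, 3_000_000_000],
    ],
)
```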
Behavior: 2/5

No annotations are present, so the description must carry full disclosure. It mentions the returned data types but does not disclose whether the operation is read-only, any authentication requirements, rate limits, or error conditions. For a tool that likely reads blockchain data, the description is insufficient on behavioral traits.

Conciseness: 5/5

Two concise sentences that front-load the purpose and include key details. No extraneous information; every sentence adds value.

Completeness: 3/5

The description lists the three return elements (base fees, gas used ratios, reward percentiles) but does not detail their structure or formatting. Given that there are three parameters and all are optional, the description is moderately complete but could explain defaults or the output format more clearly.

Parameters: 3/5

Schema coverage is 100%, with each parameter having a description. The tool description does not add meaning beyond what the schema already provides. It mentions reward percentiles, but that is already in the schema. The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states 'Get fee history for recent blocks', identifies the underlying method 'eth_feeHistory', and lists the specific return data (base fees, gas used ratios, reward percentiles). This distinguishes it from sibling tools like eth_get_gas_price (current price) and eth_estimate_gas (estimate for a transaction).

Usage Guidelines: 3/5

The description implies usage for 'EIP-1559 gas estimation' but does not explicitly state when to use this tool versus alternatives, nor does it mention any exclusions or prerequisites. The context provides siblings, but the description lacks direct comparative guidance.

eth_get_gas_price (Grade: A)

Get the current gas price from the Ethereum network via eth_gasPrice JSON-RPC. Returns the gas price in wei, Gwei, and hex.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

No annotations are provided, but the description explains the output format. It does not disclose potential errors or clarify that this is a read-only operation, assuming that is obvious for a gas price getter.

Conciseness: 5/5

Two concise sentences with no unnecessary information. Perfectly sized for a simple getter.

Completeness: 4/5

For a parameterless tool with no output schema, the description is fairly complete: it explains the input (none) and the output format. A minor omission: no mention of potential errors or of single versus repeated calls.

Parameters: 4/5

No parameters; schema coverage is 100%. The description adds value by detailing the return values in three units, which goes beyond the empty schema.

Purpose: 4/5

Clearly states that it gets the current gas price via eth_gasPrice and returns it in wei, Gwei, and hex. It could better differentiate itself from siblings like eth_get_fee_history but is specific enough.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like eth_get_fee_history or eth_get_price. It implies it is for the current gas price only, but lacks explicit context.

eth_get_price (Grade: A)

Get token price from a Chainlink AggregatorV3Interface data feed via eth_call to latestRoundData(). Default feed: ETH/USD on mainnet (0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419). Returns price with 8 decimal precision, round ID, and update timestamps.

Parameters (JSON Schema)
- chain_id (optional): Chain ID (1 for mainnet, default); used to select RPC endpoint
- feed_address (optional): Chainlink AggregatorV3Interface data feed address (default: ETH/USD 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419)
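The 8-decimal scaling the description mentions works like this. The raw value shown is made up for illustration; Chainlink USD feeds conventionally use 8 decimals, and the helper name is not part of the server.

```python
from decimal import Decimal

def decode_chainlink_answer(answer: int, decimals: int = 8) -> Decimal:
    """Scale a raw latestRoundData() answer; Chainlink USD feeds use 8 decimals."""
    return Decimal(answer) / 10**decimals

price = decode_chainlink_answer(250012345678)  # illustrative raw value -> 2500.12345678 USD
```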
Behavior: 4/5

Since no annotations exist, the description carries the full burden. It explains the call to latestRoundData(), the return format (price with 8 decimals, round ID, timestamps), and the default feed. However, it does not explicitly state that this is a read-only operation.

Conciseness: 5/5

Two concise sentences cover purpose, mechanism, default feed, and return format. No redundant information; front-loaded with the key action.

Completeness: 4/5

For a simple price getter without an output schema, the description adequately covers inputs, the default, and output fields. It could mention error cases or gas costs, but is overall complete for the tool's complexity.

Parameters: 3/5

Schema coverage is 100%, with descriptions for both parameters. The description adds value by explaining the feed_address default and that chain_id selects the RPC endpoint, but this is a minimal enhancement over the schema.

Purpose: 5/5

The description clearly states 'Get token price' using a specific Chainlink data feed via latestRoundData(). It distinguishes the tool from siblings like eth_call_contract by specifying the exact use case and default feed.

Usage Guidelines: 3/5

Usage is implied but not explicit. The description says what the tool does but does not mention when to use it over alternatives such as eth_call_contract for custom calls. No when-not-to-use guidance is provided.

eth_get_token_balance (Grade: A)

Get ERC-20 token balance for an address via eth_call to balanceOf(address). Params: token_address (ERC-20 contract), owner_address. Returns raw balance (caller must divide by 10^decimals for human-readable amount).

Parameters (JSON Schema)
- owner_address (required): Owner address to check balance for (hex, with or without 0x prefix)
- token_address (required): ERC-20 token contract address (hex, with or without 0x prefix)
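The caller-side division the description asks for can be done with Decimal to avoid float rounding. USDC's 6 decimals are used as a known example; the helper name is illustrative, and in practice the decimals value would be read from the token's decimals() function.

```python
from decimal import Decimal

def to_human_amount(raw_balance: int, decimals: int) -> Decimal:
    """Convert a raw balanceOf() result using the token's decimals."""
    return Decimal(raw_balance) / 10**decimals

# USDC uses 6 decimals, so a raw balance of 12_345_678 is 12.345678 tokens.
amount = to_human_amount(12_345_678, 6)
```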
Behavior: 3/5

With no annotations, the description must cover behavior fully. It mentions the raw balance return and the need to divide by decimals, but does not disclose that it is a read-only call, potential gas costs, or error handling. Adequate but not comprehensive.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences (roughly 30 words), efficiently conveying purpose, parameters, and output handling. No filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations or output schema, the description covers the main functionality, parameters, and post-processing. Missing error conditions or how to obtain decimals, but sufficient for a straightforward tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already documented. The description adds value by specifying that token_address is an ERC-20 contract and owner_address is the holder, and by noting the output must be divided by decimals. This goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves an ERC-20 token balance via eth_call to balanceOf, specifying both parameters. It is distinct from siblings like eth_get_balance (ETH) and eth_call_contract (generic).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for ERC-20 token balances but does not explicitly state when to use it over alternatives (e.g., eth_get_balance for ETH or eth_call_contract for custom calls). No when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_get_transaction (A)

Get transaction details by hash via eth_getTransactionByHash. Returns sender, recipient, value, gas, input data, block number, and nonce.

Parameters (JSON Schema)
Name | Required | Description | Default
tx_hash | Yes | Transaction hash (hex, with or without 0x prefix) | -
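The value field in the returned transaction is a hex-encoded wei amount; converting it to ETH by dividing by 10^18 is a common follow-up step. A minimal sketch:

```python
def wei_hex_to_eth(value_hex: str) -> float:
    """Convert a hex-encoded wei value (the transaction's `value` field) to ETH."""
    return int(value_hex, 16) / 10 ** 18

wei_hex_to_eth("0xde0b6b3a7640000")  # → 1.0 (exactly 10^18 wei)
```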
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full burden for behavioral disclosure. It lists return fields but does not declare read-only status, error handling (e.g., null for missing tx), or rate limits. Basic transparency is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first identifies action and method, second lists key return fields. No redundant information. Highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, no output schema), the description covers the primary return fields. It could mention edge cases like null returns, but is otherwise adequate for a straightforward lookup. Annotation absence lowers completeness slightly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter tx_hash, which already describes the format (hex with/without 0x). The tool description adds no further parameter guidance beyond restating the parameter's purpose. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get transaction details by hash') and specifies the JSON-RPC method. It lists the returned fields (sender, recipient, etc.), distinguishing it from siblings like eth_get_transaction_receipt. The verb+resource is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. The description implies usage for retrieving transaction details by hash, but lacks exclusionary context or alternatives. Agents must infer from sibling names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_get_transaction_receipt (A)

Get transaction receipt by hash via eth_getTransactionReceipt. Returns status (0x0=failure, 0x1=success), gas used, logs, contract address (if deployment), and block info.

Parameters (JSON Schema)
Name | Required | Description | Default
tx_hash | Yes | Transaction hash (hex, with or without 0x prefix) | -
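The status encoding noted in the description (0x0 failure, 0x1 success) decodes directly:

```python
def tx_succeeded(receipt_status_hex: str) -> bool:
    """Interpret the receipt's `status` field: 0x1 is success, 0x0 is failure."""
    return int(receipt_status_hex, 16) == 1

tx_succeeded("0x1")  # → True
tx_succeeded("0x0")  # → False
```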
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the receipt fields including status encoding and contract address condition. However, it does not mention that the receipt may not be available immediately after submission (requires mining) or any other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Purpose is front-loaded, and return fields are listed efficiently. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with high schema coverage and no output schema, the description adequately covers purpose, parameters, and return fields. It could mention that the transaction must be mined, but overall it is complete enough for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter tx_hash, which is fully documented in the schema. The description does not add additional meaning beyond the schema (e.g., format or constraints). Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get transaction receipt by hash'), references the standard Ethereum method, and lists the key return fields (status, gas used, logs, contract address, block info). This distinguishes it from siblings like eth_get_transaction which returns transaction details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a transaction receipt is needed, but provides no explicit guidance on when to use this tool versus alternatives like eth_get_transaction (which also uses tx_hash). No exclusions, prerequisites, or sibling comparisons are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_lookup_agent_8004 (A)

Look up an AI agent in the ERC-8004 Agent Registry by agent ID (uint256) or owner address. Builds the calldata for getAgent(uint256) or getAgentsByOwner(address) that can be used with eth_call_contract against a deployed ERC-8004 registry.

Parameters (JSON Schema)
Name | Required | Description | Default
agent_id_or_address | Yes | Agent ID (uint256 hex) or owner address (hex) to look up | -
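Whether the input is a uint256 agent ID or a 20-byte address, the ABI encodes it as a single left-padded 32-byte word in the calldata, which is why one parameter can serve both lookups. A minimal sketch of that padding (illustrative only, not necessarily the server's actual logic):

```python
def encode_lookup_arg(agent_id_or_address: str) -> str:
    """Left-pad a hex uint256 ID or 20-byte address to the 32-byte word (64 hex
    chars) that eth_call calldata expects. Both types pad the same way."""
    value = agent_id_or_address.lower().removeprefix("0x")
    return value.rjust(64, "0")

encode_lookup_arg("0x2a")  # uint256 agent ID 42 → 62 zeros followed by "2a"
```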
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool builds calldata rather than executing a query, and specifies two underlying functions (getAgent(uint256), getAgentsByOwner(address)), providing useful behavioral context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose and clear mechanism; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter tool and the context of sibling tools (eth_call_contract mentioned), the description is sufficiently complete, though it could explicitly state that it returns hex-encoded calldata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes the parameter with 100% coverage, so the description adds minimal extra meaning; baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool looks up an AI agent by ID or owner address in the ERC-8004 registry, distinguishing it from siblings like eth_call_contract and eth_resolve_ens.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage as a calldata builder for eth_call_contract but does not explicitly state when to use it versus alternatives or provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_lookup_ens (A)

Reverse-lookup an Ethereum address to its ENS name via the Universal Resolver on-chain. Constructs the <address>.addr.reverse name and calls resolve(). Falls back to the OnchainKit ENS API. Params: address (hex).

Parameters (JSON Schema)
Name | Required | Description | Default
address | Yes | Ethereum address to reverse-lookup (hex, with or without 0x prefix) | -
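The reverse record the description refers to is formed from the lowercase hex address (without the 0x prefix) plus the .addr.reverse suffix. A minimal sketch of that construction; note that resolving it on-chain additionally requires the ENS namehash, which uses keccak-256 and is beyond this sketch:

```python
def reverse_record_name(address: str) -> str:
    """Build the ENS reverse-record name: <lowercase-address-without-0x>.addr.reverse."""
    return address.lower().removeprefix("0x") + ".addr.reverse"

reverse_record_name("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045")
# → "d8da6bf26964af9d7eed9e03e53415d37aa96045.addr.reverse"
```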
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains the on-chain lookup via Universal Resolver, construction of reverse record, and fallback to API. While it doesn't mention failure modes (e.g., no ENS name set), the read-only nature is clear from the description.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short, efficient sentences: the first states the core function, the next explain the underlying mechanism and fallback. No extraneous information, well-structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only lookup with one parameter, the description is largely complete. It explains the process and fallback. However, without an output schema, mentioning the return value (ENS name or null) would improve completeness, though the tool's simplicity makes this a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema description already specifies the parameter format (hex, with/without 0x). The tool description adds only 'Params: address (hex)', which is redundant. No significant additional meaning beyond the schema is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs a reverse-lookup of an Ethereum address to its ENS name, specifying the mechanism (Universal Resolver) and fallback API. This distinguishes it from siblings like eth_resolve_ens (forward lookup) and eth_lookup_agent_8004.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for reverse resolution (address to name) by naming and explaining the process, but it does not explicitly state when to use versus alternatives. However, the context and sibling names provide sufficient implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_register_agent_8004 (A)

Build transaction data for registering an AI agent via ERC-8004 Agent Registry. ERC-8004 defines an on-chain registry for autonomous AI agents with capabilities, metadata URI, and owner tracking. Returns the ABI-encoded function selector and parameter breakdown for registerAgent(string,string[],string). The caller must sign and submit the transaction to the registry contract.

Parameters (JSON Schema)
Name | Required | Description | Default
agent_name | Yes | Human-readable agent name | -
capabilities | Yes | List of capability strings (e.g. ['nlp', 'code-generation', 'web-search']) | -
metadata_uri | Yes | IPFS or HTTPS URI pointing to full agent metadata JSON | -
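For context on the "ABI-encoded function selector and parameter breakdown" the description mentions: each dynamic parameter of registerAgent(string,string[],string) is encoded as a 32-byte length word followed by padded data, referenced by offset from a head section. A minimal sketch of encoding one dynamic string (the capability value here is illustrative):

```python
def abi_encode_string(s: str) -> bytes:
    """ABI-encode one dynamic `string`: a 32-byte big-endian length word, then
    UTF-8 bytes right-padded with zeros to a 32-byte boundary. Dynamic types in
    registerAgent(string,string[],string) are pointed to by offsets in the head,
    with tail encodings shaped like this one."""
    data = s.encode("utf-8")
    padded = data + b"\x00" * (-len(data) % 32)
    return len(data).to_bytes(32, "big") + padded

abi_encode_string("nlp")  # 64 bytes: length word (3), then "nlp" + 29 zero bytes
```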
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full transparency burden. It discloses that the tool returns ABI-encoded function selector and parameter breakdown, and that the caller must sign and submit. This covers the key behavioral traits (non-destructive, output format, required action) but could be more explicit about permissions or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with no extraneous information: purpose and standard, ERC-8004 context, output format, and the required follow-up action. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters and no output schema, the description covers purpose, function signature, and next step. It does not specify the registry contract address or how to obtain it, but overall it's sufficient for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so baseline is 3. The description adds value by explaining the context (ERC-8004 standard, registerAgent function signature) and confirming that agent_name, capabilities, metadata_uri map to the function parameters. This enriches the schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool builds transaction data for registering an AI agent via ERC-8004. Specifies the verb ('Build transaction data'), resource ('AI agent'), and standard ('ERC-8004 Agent Registry'). Differentiates from siblings like eth_call_contract and eth_encode_function by being specific to the ERC-8004 registration process.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for registration but provides no explicit guidance on when to use this tool vs alternatives. It mentions the caller must sign and submit, which is a next step, but lacks context about prerequisites or when not to use. A clear comparison with sibling tools would improve this.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

eth_resolve_ens (B)

Resolve an ENS name to an Ethereum address. Tries the ENS Universal Resolver on-chain via eth_call (resolve(bytes,bytes) at 0xc0497E381f536Be9ce14B0dD3817cBcAe57d2F62). Falls back to the OnchainKit ENS API as a secondary source. Params: name (e.g. 'vitalik.eth').

Parameters (JSON Schema)
Name | Required | Description | Default
name | Yes | ENS name to resolve (e.g. 'vitalik.eth') | -
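The on-chain path the description outlines amounts to a JSON-RPC eth_call against the quoted Universal Resolver address. A hedged sketch of the request shape; the calldata argument is a hypothetical placeholder, since producing real resolve(bytes,bytes) arguments requires keccak-256 hashing and DNS-encoding beyond this sketch:

```python
UNIVERSAL_RESOLVER = "0xc0497E381f536Be9ce14B0dD3817cBcAe57d2F62"

def build_resolve_call(calldata_hex: str) -> dict:
    """Build the JSON-RPC eth_call request implied by the description.
    `calldata_hex` stands in for the ABI-encoded resolve(bytes,bytes) arguments."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": UNIVERSAL_RESOLVER, "data": calldata_hex}, "latest"],
    }

payload = build_resolve_call("0xabcdef")  # hypothetical calldata for illustration
```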
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals the resolution method (on-chain eth_call with fallback API) and parameter format, but lacks details on output (return value), error handling, or potential costs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (four short sentences) and front-loaded with the main purpose. Every sentence adds value, though the mechanism and fallback sentences could be merged.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no output schema and no annotations, the description fails to specify the return format or behavior on failure (e.g., unresolved name). This leaves the agent without critical usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so the description adds minimal value beyond repeating the parameter description. The example 'vitalik.eth' is helpful but not substantive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool resolves an ENS name to an Ethereum address, specifying the method (on-chain via contract with fallback to API). It distinguishes from siblings like eth_lookup_ens by detailing the resolution approach.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternative siblings such as eth_lookup_ens. The description does not specify prerequisites or context for appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
