mcp-server
Server Details
MCP server for Blockscout
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: blockscout/mcp-server
- GitHub Stars: 36
- Server Listing: Blockscout MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 16 of 16 tools scored. Lowest: 3.4/5.
Most tools have clearly distinct purposes targeting specific blockchain data types (addresses, blocks, transactions, tokens, contracts). However, some overlap exists between get_address_info and other tools like get_tokens_by_address or nft_tokens_by_address, which could cause confusion about where to find specific address-related data. The descriptions help clarify boundaries, but an agent might need to read carefully to choose correctly.
The naming follows a consistent verb_noun pattern with snake_case throughout (e.g., get_address_info, get_block_info, read_contract). The only deviation is direct_api_call (which uses 'call' rather than 'get') and __unlock_blockchain_analysis__ (which uses double underscores and 'unlock' verb). These minor inconsistencies don't significantly hinder readability but break perfect consistency.
With 16 tools, the count is well-scoped for a comprehensive blockchain analysis server. Each tool serves a distinct purpose in the domain (address analysis, block/transaction queries, token/NFT data, contract interactions, and chain management). The tools cover the essential surface area without being overwhelming or sparse.
The tool set provides complete coverage for blockchain data analysis. It includes core operations for addresses, blocks, transactions, tokens (ERC-20 and NFTs), contracts (ABI, code inspection, read functions), and chain management. The direct_api_call tool also allows for advanced or chain-specific queries, ensuring no gaps in the surface for the server's purpose.
Available Tools
16 tools

direct_api_call: Direct Blockscout API Call (Read-only)
Call a raw Blockscout API endpoint for advanced or chain-specific data.
Do not include query strings in ``endpoint_path``; pass all query parameters via
``query_params`` to avoid double-encoding.
**SUPPORTS PAGINATION**: If response includes 'pagination' field,
use the provided next_call to get additional pages.
Returns:
ToolResponse[Any]: Must return ToolResponse[Any] (not ToolResponse[BaseModel])
because specialized handlers can return lists or other types that don't inherit
from BaseModel. The dispatcher system supports flexible data structures.

| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | The pagination cursor from a previous response to get the next page of results. | |
| chain_id | Yes | The ID of the blockchain | |
| query_params | No | Optional query parameters forwarded to the Blockscout API. | |
| endpoint_path | Yes | The Blockscout API path to call (e.g., '/api/v2/stats'); do not include query strings. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
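The pagination contract above (a 'pagination' field whose cursor requests the next page) can be followed with a loop along these lines. Here `call_tool` stands in for whatever MCP client invocation your agent framework provides, and the `"cursor"` field name inside the pagination payload is an assumption about its shape:

```python
def fetch_all_pages(call_tool, chain_id: str, endpoint_path: str) -> list:
    """Collect every page of a paginated direct_api_call response.

    call_tool(name, arguments) is assumed to return the tool's response
    as a dict; each page's pagination cursor feeds the next request.
    """
    pages = []
    args = {"chain_id": chain_id, "endpoint_path": endpoint_path}
    while True:
        response = call_tool("direct_api_call", args)
        pages.append(response["data"])
        pagination = response.get("pagination")
        if not pagination:
            break  # last page: no pagination field present
        args = {**args, "cursor": pagination["cursor"]}
    return pages
```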
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations. Annotations indicate read-only, non-destructive, and open-world characteristics. The description supplements this with pagination support details ('If response includes 'pagination' field, use the provided next_call'), query parameter handling guidance ('Do not include query strings in endpoint_path'), and return type specifics about ToolResponse[Any].
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose statement, parameter usage guidance, pagination support, and return type explanation. Every sentence serves a distinct purpose with zero wasted words, and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity as a raw API caller, the description provides excellent contextual completeness. It covers purpose, parameter usage patterns, pagination behavior, and return type considerations. With annotations covering safety aspects and an output schema existing, the description focuses appropriately on usage guidance rather than repeating structured information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some parameter guidance by explaining how to use 'endpoint_path' and 'query_params' together ('pass all query parameters via query_params to avoid double-encoding'), but doesn't provide additional meaning for 'chain_id' or 'cursor' beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Call a raw Blockscout API endpoint for advanced or chain-specific data.' It specifies the verb ('call') and resource ('Blockscout API endpoint'), but doesn't explicitly differentiate it from sibling tools like 'get_address_info' or 'get_block_info' that likely provide more structured access to specific endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'for advanced or chain-specific data.' It implies this is for cases where specialized sibling tools don't cover the needed functionality, though it doesn't explicitly name alternatives or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_by_ens_name: Get Address by ENS Name (Read-only)
Useful when you need to convert an ENS domain name (e.g., "blockscout.eth") to its corresponding Ethereum address.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ENS domain name to resolve | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate it's read-only, non-destructive, and open-world, so the description adds minimal behavioral context beyond the conversion action. It doesn't disclose additional traits like rate limits, error handling, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose with an example, containing no redundant information and earning its place clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, rich annotations (read-only, open-world), and the presence of an output schema, the description is complete enough. It succinctly covers the core functionality without needing to explain parameters or return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents the 'name' parameter. The description adds an example ('blockscout.eth') but no additional semantic meaning beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('convert') and resource ('ENS domain name'), and it distinguishes from siblings by focusing on ENS-to-address resolution, unlike other tools that handle blocks, transactions, contracts, or tokens.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('when you need to convert an ENS domain name'), but it does not explicitly mention when not to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_info: Get Address Information (Read-only)
Get comprehensive information about an address, including:
- Address existence check
- Native token (ETH) balance (provided as is, without adjusting by decimals)
- First transaction details (block number and timestamp) for age calculation
- ENS name association (if any)
- Contract status (whether the address is a contract, whether it is verified)
- Proxy contract information (if applicable): determines if a smart contract is a proxy contract (which forwards calls to implementation contracts), including proxy type and implementation addresses
- Token details (if the contract is a token): name, symbol, decimals, total supply, etc.
Essential for address analysis, contract investigation, token research, and DeFi protocol analysis.

| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Address to get information about | |
| chain_id | Yes | The ID of the blockchain | |
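Because the native balance comes back "as is, without adjusting by decimals", converting it for display is the caller's job. A minimal sketch, assuming the raw balance arrives as an integer string and ETH's 18 decimals:

```python
from decimal import Decimal

def to_display_units(raw_balance: str, decimals: int = 18) -> Decimal:
    """Convert a raw, unadjusted integer balance string into
    human-readable units by shifting the decimal point left."""
    return Decimal(raw_balance) / (Decimal(10) ** decimals)

# 1.5 ETH expressed in wei:
balance = to_display_units("1500000000000000000")
```

Using `Decimal` rather than `float` avoids precision loss on large on-chain balances.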
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior. The description adds valuable context beyond annotations: it specifies that the ETH balance is provided 'as is, without adjusting by decimals' and details what proxy contract information includes (type and implementation addresses). It also clarifies the scope of 'comprehensive information' with a bulleted list, though it doesn't mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear opening sentence followed by a bulleted list of information returned and a concluding usage context sentence. It is appropriately sized and front-loaded, though the bulleted list is somewhat detailed; every sentence earns its place by specifying the tool's comprehensive nature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (providing multiple types of address information), rich annotations (readOnlyHint, openWorldHint, destructiveHint), and the presence of an output schema, the description is complete enough. It thoroughly outlines what information is returned without needing to explain return values (handled by output schema) or safety aspects (covered by annotations).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the two parameters (address and chain_id). The description does not add any meaning beyond what the schema provides—it doesn't explain parameter formats, constraints, or examples. Baseline 3 is appropriate when the schema handles all parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get comprehensive information about an address') and enumerates the exact types of information returned (existence check, balance, transaction details, ENS name, contract status, proxy info, token details). It distinguishes itself from sibling tools like 'get_address_by_ens_name' or 'get_transactions_by_address' by providing a holistic overview rather than focused queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Essential for address analysis, contract investigation, token research, and DeFi protocol analysis'), providing clear context. However, it does not explicitly mention when NOT to use it or name specific alternatives among sibling tools (e.g., using 'get_transactions_by_address' for transaction history instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block_info: Get Block Information (Read-only)
Get block information like timestamp, gas used, burnt fees, transaction count etc. Can optionally include the list of transaction hashes contained in the block. Transaction hashes are omitted by default; request them only when you truly need them, because on high-traffic chains the list may exhaust the context.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| number_or_hash | Yes | Block number or hash | |
| include_transactions | No | If true, includes a list of transaction hashes from the block. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe read operation. The description adds valuable behavioral context about the performance implications of including transaction hashes ('may exhaust the context on high-traffic chains'), which goes beyond what annotations provide. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two focused sentences. The first sentence establishes the core purpose, and the second provides crucial usage guidance for the optional parameter. Every word earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that annotations cover safety aspects (read-only, non-destructive), schema coverage is 100%, and an output schema exists (so return values don't need description), the description provides exactly what's needed: clear purpose, important behavioral caveats about transaction hashes, and usage guidance. No gaps remain for this type of read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all three parameters. The description adds some context about the include_transactions parameter ('omitted by default', 'request them only when you truly need them'), but doesn't provide additional semantic meaning beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('Get') and resource ('block information'), listing specific data fields like timestamp, gas used, burnt fees, and transaction count. It distinguishes from sibling tools like get_block_number (which only gets block number) and get_transaction_info (which focuses on individual transactions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the optional transaction hashes feature ('request them only when you truly need them') and when not to use it ('omitted by default', 'may exhaust the context on high-traffic chains'). It also implicitly distinguishes from get_transaction_info by focusing on block-level data rather than transaction details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block_number: Get Block Number (Read-only)
Retrieves the block number and timestamp for a specific date/time or the latest block. Use when you need a block height for a specific point in time (e.g., "block at 2024-01-01") or the current chain tip. If `datetime` is provided, finds the block immediately preceding that time. If omitted, returns the latest indexed block.

| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| datetime | No | The date and time (ISO 8601 format, e.g. 2025-05-22T23:00:00.00Z) to find the block for. If omitted, returns the latest block. | |
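The `datetime` parameter expects an ISO 8601 UTC string like the schema's example. One way to produce it from a Python datetime (a sketch; the tool-call dict shape is illustrative):

```python
from datetime import datetime, timezone

def to_iso8601_utc(dt: datetime) -> str:
    """Format a timezone-aware datetime as the ISO 8601 UTC string
    the 'datetime' parameter expects, e.g. 2024-01-01T00:00:00Z."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# "block at 2024-01-01" from the description above:
args = {
    "chain_id": "1",
    "datetime": to_iso8601_utc(datetime(2024, 1, 1, tzinfo=timezone.utc)),
}
```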
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable behavioral context: 'finds the block immediately preceding that time' clarifies the lookup logic, and 'latest indexed block' indicates potential indexing delays. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first states purpose, second gives usage guidelines, third explains parameter behavior. Front-loaded with key information, appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a read-only lookup tool with good annotations and an output schema. The description covers purpose, usage, and parameter behavior adequately. With annotations covering safety and output schema handling return values, no significant gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds minor semantic context: it explains the effect of providing vs. omitting datetime, but doesn't add syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieves' and the resource 'block number and timestamp', specifying it works for a specific date/time or the latest block. It distinguishes from siblings like 'get_block_info' by focusing on block height/timestamp rather than full block details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use when you need a block height for a specific point in time... or the current chain tip.' It provides clear alternatives: if datetime is provided vs. omitted, and distinguishes from other block-related tools by its specific use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chains_list: Get List of Chains (Read-only)
Get the list of known blockchain chains with their IDs. Useful for getting a chain ID when the chain name is known. This information can be used in other tools that require a chain ID to request information.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, openWorldHint=true, and destructiveHint=false, indicating safe, read-only access with open-world assumptions. The description adds value by explaining the tool's output use ('chain ID when the chain name is known' and for 'other tools that require a chain ID'), but does not disclose additional behavioral traits like rate limits, auth needs, or data freshness. With annotations covering core safety, this is adequate but not rich in extra context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured in two sentences: the first states the purpose, and the second explains usage. Every sentence adds value without waste, making it front-loaded and efficient for an AI agent to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, annotations covering safety, and an output schema present), the description is complete enough. It explains what the tool does and how its output is used, without needing to detail return values or behavioral nuances, as those are handled by structured fields. This suffices for a simple read-only list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and usage. This aligns with the baseline for zero parameters, where the description compensates by providing clear semantic context without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the list of known blockchain chains with their IDs.' It specifies the verb ('Get') and resource ('list of known blockchain chains'), but does not explicitly differentiate it from sibling tools like 'direct_api_call' or 'get_block_info', which might also involve blockchain data retrieval. This makes it clear but not fully sibling-distinctive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Useful for getting a chain ID when the chain name is known. This information can be used in other tools that require a chain ID to request information.' This indicates when to use the tool (to obtain chain IDs for other operations) but does not specify when not to use it or name explicit alternatives among siblings, such as distinguishing it from 'get_block_info' for chain-specific data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contract_abi: Get Contract ABI (Read-only)
Get smart contract ABI (Application Binary Interface). An ABI defines all functions, events, their parameters, and return types. The ABI is required to format function calls or interpret contract data.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Smart contract address | |
| chain_id | Yes | The ID of the blockchain | |
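Since the ABI enumerates every function with its parameters and mutability, a returned ABI can be scanned for the read functions that are safe to call with a read-only tool such as read_contract. A sketch over standard ABI JSON (the sample entries are illustrative):

```python
def list_read_functions(abi: list[dict]) -> list[str]:
    """Names of view/pure functions in an ABI — the entries that
    read state without sending a transaction."""
    return [
        entry["name"]
        for entry in abi
        if entry.get("type") == "function"
        and entry.get("stateMutability") in ("view", "pure")
    ]

# Abbreviated ERC-20-style ABI fragment:
abi = [
    {"type": "function", "name": "balanceOf", "stateMutability": "view"},
    {"type": "function", "name": "transfer", "stateMutability": "nonpayable"},
    {"type": "event", "name": "Transfer"},
]
readable = list_read_functions(abi)
```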
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, read-only operation with potential for unknown data. The description adds valuable context by explaining what an ABI is and its purpose in formatting calls and interpreting data, which helps the agent understand the tool's role beyond basic safety. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: the first states the purpose, the second defines ABI, and the third explains its utility. Each sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters), rich annotations (readOnlyHint, openWorldHint, destructiveHint), and the presence of an output schema, the description is complete enough. It explains the tool's purpose and the ABI's role, which, combined with structured data, provides sufficient context for an agent to use the tool effectively without needing details on return values or advanced behaviors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters: 'Smart contract address' and 'The ID of the blockchain.' The description doesn't add any parameter-specific information beyond the schema, but it provides general context about ABI retrieval. Given the high schema coverage, a baseline score of 3 is appropriate as the description doesn't compensate but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get smart contract ABI (Application Binary Interface)' and explains what an ABI is. It distinguishes from siblings by focusing specifically on ABI retrieval rather than other blockchain data like transactions, blocks, or tokens. However, it doesn't explicitly differentiate from 'inspect_contract_code' which might be a related sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'The ABI is required to format function calls or interpret contract data,' suggesting this tool should be used when needing contract interface details. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'read_contract' or 'inspect_contract_code,' nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tokens_by_address (Get Tokens by Address) — A · Read-only
Get comprehensive ERC20 token holdings for an address with enriched metadata and market data.
Returns detailed token information including contract details (name, symbol, decimals), market metrics (exchange rate, market cap, volume), holders count, and actual balance (provided as is, without adjusting by decimals).
Essential for portfolio analysis, wallet auditing, and DeFi position tracking.
**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.

| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | The pagination cursor from a previous response to get the next page of results. | |
| address | Yes | Wallet address | |
| chain_id | Yes | The ID of the blockchain | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
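Since responses may carry a `pagination` field, a client typically drains pages by feeding each cursor back in. A minimal sketch, where `call_tool` stands in for whatever tool-call function your MCP client exposes and the exact field path inside `pagination.next_call` is an assumption:

```python
def fetch_all_token_pages(call_tool, address, chain_id):
    """Accumulate every page of get_tokens_by_address results."""
    items, cursor = [], None
    while True:
        args = {"address": address, "chain_id": chain_id}
        if cursor is not None:
            args["cursor"] = cursor
        resp = call_tool("get_tokens_by_address", args)
        items.extend(resp["data"])
        page = resp.get("pagination")
        if page is None:
            break  # no pagination field: this was the last (or only) page
        cursor = page["next_call"]["params"]["cursor"]  # field path is an assumption

    return items

# Exercise the loop against canned two-page responses.
_pages = {
    None: {"data": [{"symbol": "AAA"}],
           "pagination": {"next_call": {"params": {"cursor": "p2"}}}},
    "p2": {"data": [{"symbol": "BBB"}]},
}
def _fake_call(name, args):
    return _pages[args.get("cursor")]

tokens = fetch_all_token_pages(_fake_call, "0xabc", "1")
print([t["symbol"] for t in tokens])  # → ['AAA', 'BBB']
```

The same loop shape applies to the other paginated tools in this server.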
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior. The description adds valuable context beyond annotations: it explains pagination behavior ('SUPPORTS PAGINATION' with instructions), clarifies that balances are provided 'as is, without adjusting by decimals', and mentions the inclusion of enriched metadata and market data. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four sentences: the core purpose, return details, usage context, and pagination note. Each sentence adds distinct value without redundancy, and key information (like pagination support) is emphasized. It is appropriately sized and front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (ERC20 token analysis with metadata), rich annotations (read-only, open-world), and the presence of an output schema, the description is complete enough. It covers purpose, return data types, usage scenarios, and pagination behavior, without needing to detail return values since an output schema exists. It adequately complements the structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the three parameters (chain_id, address, cursor). The description does not add any parameter-specific semantics beyond what the schema provides, such as format examples or constraints. It mentions pagination generally but does not elaborate on the cursor parameter beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get comprehensive ERC20 token holdings for an address') and distinguishes it from siblings like 'nft_tokens_by_address' (which handles NFTs) and 'get_token_transfers_by_address' (which handles transfers rather than holdings). It specifies the resource (ERC20 tokens) and scope (by address with enriched metadata).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Essential for portfolio analysis, wallet auditing, and DeFi position tracking'), but does not explicitly state when not to use it or name specific alternatives among siblings. It implies usage for ERC20 token holdings analysis rather than other token types or data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_transfers_by_address (Get Token Transfers by Address) — A · Read-only
Get ERC-20 token transfers for an address within a specific time range.
Use cases:
- `get_token_transfers_by_address(address, age_from)` - get all transfers of any ERC-20 token to/from the address since the given date up to the current time
- `get_token_transfers_by_address(address, age_from, age_to)` - get all transfers of any ERC-20 token to/from the address between the given dates
- `get_token_transfers_by_address(address, age_from, age_to, token)` - get all transfers of the given ERC-20 token to/from the address between the given dates
**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.

| Name | Required | Description | Default |
|---|---|---|---|
| token | No | An ERC-20 token contract address to filter transfers by a specific token. If omitted, returns transfers of all tokens. | |
| age_to | No | End date and time (e.g. 2025-05-22T22:30:00.00Z). Can be omitted to get all transfers up to the current time. | |
| cursor | No | The pagination cursor from a previous response to get the next page of results. | |
| address | Yes | Address that is either the transfer initiator or the transfer receiver | |
| age_from | Yes | Start date and time (e.g. 2025-05-22T23:00:00.00Z). | |
| chain_id | Yes | The ID of the blockchain | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
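The three use cases above differ only in which optional arguments are supplied, so a small helper can make the combinations explicit. The address and token values below are placeholders:

```python
def transfer_query(address, chain_id, age_from, age_to=None, token=None):
    """Assemble arguments for get_token_transfers_by_address, omitting unused filters."""
    args = {"address": address, "chain_id": chain_id, "age_from": age_from}
    if age_to is not None:
        args["age_to"] = age_to
    if token is not None:
        args["token"] = token
    return args

addr = "0x" + "ab" * 20  # placeholder wallet address

# Use case 1: everything since a date, up to now.
open_ended = transfer_query(addr, "1", "2025-05-01T00:00:00.00Z")
# Use case 2: everything between two dates.
bounded = transfer_query(addr, "1", "2025-05-01T00:00:00.00Z",
                         age_to="2025-05-22T22:30:00.00Z")
# Use case 3: one specific token between two dates.
one_token = transfer_query(addr, "1", "2025-05-01T00:00:00.00Z",
                           age_to="2025-05-22T22:30:00.00Z",
                           token="0x" + "cd" * 20)  # placeholder token contract
print(sorted(one_token))
```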
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. Annotations indicate read-only, non-destructive, and open-world characteristics, but the description adds: 1) pagination behavior with specific implementation details, 2) that it returns transfers 'to/from' the address (bidirectional), and 3) the time range filtering logic. It doesn't contradict annotations (which correctly describe it as read-only), and it provides practical implementation guidance that annotations alone wouldn't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement followed by specific use cases and pagination guidance. Every sentence earns its place: the first sentence establishes core functionality, the bulleted examples demonstrate parameter combinations, and the pagination note is essential for correct usage. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, time-based filtering, pagination), the description provides complete contextual information. With annotations covering safety (read-only, non-destructive), an output schema presumably defining the response structure, and 100% schema coverage for parameters, the description fills the remaining gaps by explaining usage patterns and pagination behavior. This creates a comprehensive understanding for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it mentions the 'token' parameter filters by specific token and shows usage patterns, but doesn't provide additional meaning for parameters like 'chain_id' or 'cursor' that aren't already clear from the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get ERC-20 token transfers for an address within a specific time range.' It specifies the resource (ERC-20 token transfers), the target (an address), and the scope (time range). It distinguishes from siblings like 'get_transactions_by_address' (which likely includes non-token transactions) and 'get_tokens_by_address' (which likely returns token balances rather than transfers).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage examples with three different parameter combinations, showing when to use optional parameters like 'age_to' and 'token'. It also includes pagination guidance ('If response includes 'pagination' field...'), which helps the agent understand how to handle large result sets. While it doesn't explicitly mention when NOT to use this tool, the examples effectively demonstrate its intended use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_info (Get Transaction Information) — A · Read-only
Get comprehensive transaction information.
Unlike standard eth_getTransactionByHash, this tool returns enriched data including decoded input parameters, detailed token transfers with token metadata, transaction fee breakdown (priority fees, burnt fees) and categorized transaction types.
By default, the raw transaction input is omitted if a decoded version is available to save context; request it with `include_raw_input=True` only when you truly need the raw hex data.
Essential for transaction analysis, debugging smart contract interactions, tracking DeFi operations.

| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| transaction_hash | Yes | Transaction hash | |
| include_raw_input | No | If true, includes the raw transaction input data. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
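In practice an agent would call with the default first and opt into raw input only when the decoded view is insufficient, which keeps context small. A sketch of the two argument shapes (the transaction hash is a placeholder):

```python
def transaction_args(tx_hash, chain_id, need_raw=False):
    """Arguments for get_transaction_info; raw input is opt-in to save context."""
    args = {"chain_id": chain_id, "transaction_hash": tx_hash}
    if need_raw:
        args["include_raw_input"] = True
    return args

tx = "0x" + "00" * 32  # placeholder transaction hash
first_try = transaction_args(tx, "1")                 # decoded view only
retry_raw = transaction_args(tx, "1", need_raw=True)  # fall back to raw hex
print("include_raw_input" in first_try, retry_raw["include_raw_input"])
```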
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and open-world characteristics, the description specifies that raw transaction input is omitted by default to save context, explains when to request it, and details what enriched data is returned (decoded parameters, token metadata, fee breakdown). This provides practical implementation guidance that annotations alone don't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four sentences that each serve distinct purposes: stating the core function, detailing enriched features, explaining parameter behavior, and providing usage context. There's no wasted text, and important information is front-loaded while still including necessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of comprehensive annotations (readOnlyHint, openWorldHint, destructiveHint), 100% schema coverage, and an output schema (implied by context signals), the description provides excellent contextual completeness. It covers the tool's enhanced capabilities, practical usage considerations, and behavioral nuances without needing to repeat what structured fields already communicate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds minimal parameter-specific information beyond the schema - it only mentions the include_raw_input parameter's purpose and default behavior. This meets the baseline expectation when schema coverage is complete, but doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get comprehensive transaction information' with specific details about what it returns (decoded input parameters, token transfers with metadata, fee breakdown, categorized types). It distinguishes itself from 'standard eth_getTransactionByHash' by highlighting enriched data, making its unique value proposition explicit compared to potential alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Essential for transaction analysis, debugging smart contract interactions, tracking DeFi operations') and includes specific usage advice for the include_raw_input parameter ('request it only when you truly need the raw hex data'). It also implicitly distinguishes from sibling tools by focusing on transaction-level analysis rather than address or block-level operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transactions_by_address (Get Transactions by Address) — A · Read-only
Retrieves native currency transfers and smart contract interactions (calls, internal txs) for an address.
**EXCLUDES TOKEN TRANSFERS**: Filters out direct token balance changes (ERC-20, etc.). You'll see calls *to* token contracts, but not the `Transfer` events. For token history, use `get_token_transfers_by_address`.
A single tx can have multiple records from internal calls; use `internal_transaction_index` for execution order.
Requires an `age_from` date to scope results for performance and relevance.
Use cases:
- `get_transactions_by_address(address, age_from)` - get all txs to/from the address since a given date.
- `get_transactions_by_address(address, age_from, age_to)` - get all txs to/from the address between given dates.
- `get_transactions_by_address(address, age_from, age_to, methods)` - get all txs to/from the address between given dates, filtered by method.
**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.

| Name | Required | Description | Default |
|---|---|---|---|
| age_to | No | End date and time (e.g. 2025-05-22T22:30:00.00Z). | |
| cursor | No | The pagination cursor from a previous response to get the next page of results. | |
| address | Yes | Address that is either the sender or the receiver of the transaction | |
| methods | No | A method signature to filter transactions by (e.g. 0x304e6ade) | |
| age_from | Yes | Start date and time (e.g. 2025-05-22T23:00:00.00Z). | |
| chain_id | Yes | The ID of the blockchain | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
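Because one transaction can yield several records, reconstructing execution order means sorting by `internal_transaction_index` within each transaction. A sketch over invented records — `internal_transaction_index` comes from the description above, while the `hash` and `method` field names are assumptions:

```python
records = [
    {"hash": "0xaa", "internal_transaction_index": 1, "method": "0x304e6ade"},
    {"hash": "0xaa", "internal_transaction_index": 0, "method": "0x304e6ade"},
    {"hash": "0xbb", "internal_transaction_index": 0, "method": None},
]

# Group by transaction hash, then order internal calls within each transaction.
ordered = sorted(records, key=lambda r: (r["hash"], r["internal_transaction_index"]))
print([(r["hash"], r["internal_transaction_index"]) for r in ordered])
# → [('0xaa', 0), ('0xaa', 1), ('0xbb', 0)]
```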
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations already declare readOnlyHint=true and destructiveHint=false, the description adds valuable behavioral context: it explains pagination behavior ('SUPPORTS PAGINATION'), clarifies that single transactions can have multiple records, mentions performance considerations ('Requires an age_from date to scope results for performance'), and provides execution order guidance ('use internal_transaction_index for execution order'). This goes significantly beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: core functionality, exclusions, sibling tool reference, execution details, performance rationale, use cases, and pagination. Every sentence adds value - there's no redundant information or fluff. The bold formatting for key points enhances scannability without adding unnecessary length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, read-only operation with open world data), the description provides excellent context. It covers what data is included/excluded, when to use alternatives, pagination behavior, execution order details, performance considerations, and multiple use cases. With annotations covering safety and an output schema presumably documenting return values, this description provides comprehensive contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond what's in the schema - it mentions that age_from is required 'for performance and relevance' and shows example parameter combinations, but doesn't provide additional meaning for individual parameters. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'retrieves native currency transfers and smart contract interactions for an address' with specific exclusions ('EXCLUDES TOKEN TRANSFERS'), distinguishing it from sibling tool 'get_token_transfers_by_address'. It provides a precise verb+resource+scope combination that leaves no ambiguity about what data is returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when NOT to use this tool ('EXCLUDES TOKEN TRANSFERS') and provides a clear alternative ('For token history, use get_token_transfers_by_address'). It also includes multiple concrete use case examples with parameter combinations, giving clear guidance on when different parameter sets are appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
inspect_contract_code (Inspect Contract Code) — B · Read-only
Inspects a verified contract's source code or metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | The address of the smart contract. | |
| chain_id | Yes | The ID of the blockchain. | |
| file_name | No | The name of the source file to inspect. If omitted, returns contract metadata and the list of source files. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
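The `file_name` behavior suggests a two-step flow: call once without it to get the file listing, then fetch each source file by name. A sketch against a stubbed client — `call_tool` stands in for your MCP client's tool-call function, and the `source_files` field name is an assumption:

```python
def fetch_contract_sources(call_tool, chain_id, address):
    """List a verified contract's source files, then pull each one."""
    listing = call_tool("inspect_contract_code",
                        {"chain_id": chain_id, "address": address})
    sources = {}
    for file_name in listing["data"]["source_files"]:  # field name is an assumption
        resp = call_tool("inspect_contract_code",
                         {"chain_id": chain_id, "address": address,
                          "file_name": file_name})
        sources[file_name] = resp["data"]
    return sources

# Stubbed responses to exercise the two-step flow.
def _fake_call(name, args):
    if "file_name" in args:
        return {"data": f"// contents of {args['file_name']}"}
    return {"data": {"source_files": ["Token.sol", "Ownable.sol"]}}

src = fetch_contract_sources(_fake_call, "1", "0x" + "ef" * 20)
print(sorted(src))  # → ['Ownable.sol', 'Token.sol']
```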
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and open-world, so the description adds minimal behavioral context. It does specify that it works on 'verified' contracts, which implies a prerequisite not covered by annotations, but doesn't elaborate on what verification entails or any rate limits/authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary elaboration. Every word contributes directly to understanding the tool's function, making it optimally concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (covering safety) and an output schema (handling return values), the description is reasonably complete for its complexity. It could be improved by clarifying the 'verified' prerequisite or differentiating from siblings, but it adequately complements the structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters. The description adds marginal value by hinting at the optional 'file_name' behavior ('If omitted, returns contract metadata and the list of source files'), but this is largely redundant with the schema's description field.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('inspects') and target ('verified contract's source code or metadata'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_contract_abi' or 'read_contract', which also interact with contracts, so it doesn't reach the highest clarity level.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'get_contract_abi' and 'read_contract' available, there's no indication of when inspecting source code/metadata is preferred over retrieving ABI or reading contract state, leaving usage context ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_token_by_symbol (Lookup Token by Symbol) — A · Read-only
Search for token addresses by symbol or name. Returns multiple potential matches based on symbol or token name similarity. Only the first ``TOKEN_RESULTS_LIMIT`` matches from the Blockscout API are returned.

| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Token symbol or name to search for | |
| chain_id | Yes | The ID of the blockchain | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
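Because the tool returns multiple similarity-based matches, a caller usually needs a disambiguation step, such as preferring an exact symbol match. A sketch over invented results (the result field names are assumptions):

```python
def best_match(matches, query):
    """Prefer an exact (case-insensitive) symbol match; fall back to the first result."""
    exact = [m for m in matches if m["symbol"].lower() == query.lower()]
    if exact:
        return exact[0]
    return matches[0] if matches else None

matches = [  # invented results; field names are assumptions
    {"symbol": "WETH9", "name": "Wrapped Ether v9", "address": "0x" + "01" * 20},
    {"symbol": "WETH", "name": "Wrapped Ether", "address": "0x" + "02" * 20},
]
print(best_match(matches, "weth")["symbol"])  # → WETH
```

Where several exact matches survive, a caller might further rank by holders count or market data from `get_tokens_by_address`-style metadata.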
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and scope. The description adds valuable behavioral context: it discloses that results are limited ('Only the first TOKEN_RESULTS_LIMIT matches'), sourced from a specific API ('Blockscout API'), and based on similarity matching ('potential matches... based on similarity'), which aren't covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by key behavioral details (result nature, limit, API source). It consists of three concise sentences with no wasted words, each adding distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters), rich annotations (covering safety and scope), and the presence of an output schema (which handles return values), the description is complete. It effectively supplements structured data by explaining the search behavior, result limits, and data source, leaving no significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters ('symbol' and 'chain_id'). The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints, so it meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search for token addresses') and resources ('by symbol or name'), distinguishing it from siblings like 'get_tokens_by_address' (which searches by address rather than symbol/name) and 'get_address_by_ens_name' (which deals with ENS names rather than token symbols).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'Returns multiple potential matches based on symbol or token name similarity,' suggesting it's for fuzzy searching. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_tokens_by_address' or provide any exclusions or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nft_tokens_by_address: Get NFT Tokens by Address (Read-only)
Retrieve NFT tokens (ERC-721, ERC-404, ERC-1155) owned by an address, grouped by collection.
Provides collection details (type, address, name, symbol, total supply, holder count) and individual token instance data (ID, name, description, external URL, metadata attributes).
Essential for a detailed overview of an address's digital collectibles and their associated collection data.
**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | The pagination cursor from a previous response to get the next page of results. | |
| address | Yes | NFT owner address | |
| chain_id | Yes | The ID of the blockchain | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
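A sketch of the pagination loop the description calls for. `call_tool` stands in for whatever invoke method your MCP client exposes, and the exact shape of `pagination.next_call` is an assumption here (the listing only says to "use the provided next_call"); adjust the field access to the actual response.

```python
def fetch_all_nft_pages(call_tool, address: str, chain_id: str) -> list:
    """Follow the 'pagination' field across pages until it disappears.
    `call_tool(tool_name, params)` is a hypothetical MCP client invoke."""
    items = []
    params = {"address": address, "chain_id": chain_id}
    while True:
        response = call_tool("nft_tokens_by_address", params)
        items.extend(response["data"])
        pagination = response.get("pagination")
        if not pagination:
            return items  # last page: no pagination block present
        # Assumed shape: pagination.next_call.params carries the full
        # parameter set (including 'cursor') for the next page.
        params = pagination["next_call"]["params"]
```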
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and open-world behavior. The description adds valuable context beyond annotations by specifying the data structure (collection details and token instance data) and explicitly disclosing pagination support with instructions on how to handle it ('use the provided next_call'), which is not covered by annotations and aids in understanding tool behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence stating the core purpose, followed by details on data structure and pagination. Every sentence earns its place by adding specific value, and the bolded pagination note is efficiently highlighted without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (NFT retrieval with grouping), rich annotations (read-only, open-world), and the presence of an output schema, the description is complete enough. It covers purpose, data structure, and pagination behavior, and with the output schema handling return values, no additional explanation is needed for a comprehensive understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters. The description does not add meaning beyond the schema, as it does not explain parameter semantics, usage, or constraints. The baseline score of 3 is appropriate since the schema carries the full burden of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve NFT tokens'), target resource ('owned by an address'), and scope ('grouped by collection'). It distinguishes from sibling tools like 'get_tokens_by_address' by specifying NFT types (ERC-721, ERC-404, ERC-1155) and collection grouping, making the purpose unambiguous and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Essential for a detailed overview of an address's digital collectibles and their associated collection data'), but does not explicitly state when not to use it or name alternatives. It implies usage for NFT-focused queries versus general token queries, though it lacks explicit exclusions or comparisons to siblings like 'get_tokens_by_address'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_contract: Read from Contract (Read-only)
Calls a smart contract function (view/pure, or non-view/pure simulated via eth_call) and returns the
decoded result.
This tool provides a direct way to query the state of a smart contract.
Example:
To check the USDT balance of an address on Ethereum Mainnet, you would use the following arguments:
{
"tool_name": "read_contract",
"params": {
"chain_id": "1",
"address": "0xdAC17F958D2ee523a2206206994597C13D831ec7",
"abi": {
"constant": true,
"inputs": [{"name": "_owner", "type": "address"}],
"name": "balanceOf",
"outputs": [{"name": "balance", "type": "uint256"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
"function_name": "balanceOf",
"args": "["0xF977814e90dA44bFA03b6295A0616a897441aceC"]"
}
}
| Name | Required | Description | Default |
|---|---|---|---|
| abi | Yes | The JSON ABI for the specific function being called. This should be a dictionary that defines the function's name, inputs, and outputs. The function ABI can be obtained using the `get_contract_abi` tool. | |
| args | No | A JSON string containing an array of arguments. Example: "["0xabc..."]" for a single address argument, or "[]" for no arguments. Order and types must match ABI inputs. Addresses: use 0x-prefixed strings; Numbers: prefer integers (not quoted); numeric strings like "1" are also accepted and coerced to integers. Bytes: keep as 0x-hex strings. | [] |
| block | No | The block identifier to read the contract state from. Can be a block number (e.g., 19000000) or a string tag (e.g., 'latest'). Defaults to 'latest'. | latest |
| address | Yes | Smart contract address | |
| chain_id | Yes | The ID of the blockchain | |
| function_name | Yes | The symbolic name of the function to be called. This must match the `name` field in the provided ABI. | |
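Because `args` must be a JSON string (an array encoded inside a string), hand-writing it invites quote-escaping mistakes. A small sketch of building the read_contract parameters programmatically, mirroring the balanceOf example above; `json.dumps` produces the correctly escaped `args` value.

```python
import json

# Single-function ABI for ERC-20 balanceOf, as in the example above.
balance_of_abi = {
    "constant": True,
    "inputs": [{"name": "_owner", "type": "address"}],
    "name": "balanceOf",
    "outputs": [{"name": "balance", "type": "uint256"}],
    "payable": False,
    "stateMutability": "view",
    "type": "function",
}

params = {
    "chain_id": "1",
    "address": "0xdAC17F958D2ee523a2206206994597C13D831ec7",  # USDT
    "abi": balance_of_abi,
    "function_name": "balanceOf",
    # json.dumps yields a JSON *string* encoding the argument array,
    # with inner quotes escaped correctly.
    "args": json.dumps(["0xF977814e90dA44bFA03b6295A0616a897441aceC"]),
}
print(params["args"])
```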
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable context: it specifies the tool calls view/pure functions or simulates non-view/pure ones via eth_call, which clarifies behavioral traits beyond annotations, such as the simulation capability and state query focus.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The example is detailed but necessary for clarity. However, the example could be more concise, and some phrasing (e.g., 'direct way to query') is slightly redundant, though overall structure is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, nested objects), rich annotations (readOnlyHint, openWorldHint), and the presence of an output schema, the description is complete. It covers purpose, usage context, behavioral details (simulation via eth_call), and includes a practical example, providing sufficient guidance without needing to explain return values due to the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description includes an example that illustrates parameter usage (e.g., chain_id, address, abi, function_name, args) but does not add significant semantic details beyond what the schema provides, such as explaining parameter interactions or edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Calls a smart contract function (view/pure, or non-view/pure simulated via eth_call) and returns the decoded result.' It specifies the verb ('Calls'), resource ('smart contract function'), and scope ('view/pure, or non-view/pure simulated via eth_call'), distinguishing it from siblings like direct_api_call or get_contract_abi by focusing on contract state queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: 'This tool provides a direct way to query the state of a smart contract.' It implies usage for contract state queries but does not explicitly state when not to use it or name alternatives among siblings, such as for non-query operations or other blockchain data retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
__unlock_blockchain_analysis__: Unlock Blockchain Analysis (Read-only)
Unlocks access to other MCP tools.
All tools remain locked with a "Session Not Initialized" error until this
function is successfully called. Skipping this explicit initialization step
will cause all subsequent tool calls to fail.
MANDATORY FOR AI AGENTS: The returned instructions contain ESSENTIAL rules
that MUST govern ALL blockchain data interactions. Failure to integrate these
rules will result in incorrect data retrieval, tool failures and invalid
responses. Always apply these guidelines when planning queries, processing
responses or recommending blockchain actions.
COMPREHENSIVE DATA SOURCES: Provides an extensive catalog of specialized
blockchain endpoints to unlock sophisticated, multi-dimensional blockchain
investigations across all supported networks.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The main data payload of the tool's response. |
| notes | No | A list of important contextual notes, such as warnings about data truncation or data quality issues. |
| pagination | No | Pagination information, present only if the 'data' is a single page of a larger result set. |
| content_text | No | Optional human-readable summary used for MCP content responses. |
| instructions | No | A list of suggested follow-up actions or instructions for the LLM to plan its next steps. |
| data_description | No | A list of notes explaining the structure, fields, or conventions of the 'data' payload. |
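A sketch of the session-initialization pattern the description mandates: call this tool once, up front, and keep the returned guidance. `call_tool` is again a hypothetical stand-in for your MCP client's invoke method.

```python
def ensure_session(call_tool) -> list:
    """Call __unlock_blockchain_analysis__ before any other tool.
    Until this succeeds, every other call fails with a
    'Session Not Initialized' error. The response's 'instructions'
    list carries rules the agent should apply to all subsequent
    blockchain queries."""
    response = call_tool("__unlock_blockchain_analysis__", {})
    return response.get("instructions", [])
```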
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, and non-destructive behavior. The description adds valuable context: it explains the initialization requirement, that it returns essential rules for blockchain interactions, and provides a catalog of data sources. This goes beyond what annotations provide without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat verbose with some redundancy (e.g., emphasizing mandatory nature multiple times). However, it's well-structured with clear sections about purpose, requirements, and data sources. Some sentences could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, comprehensive annotations, and an output schema exists, the description provides complete context. It explains the initialization requirement, behavioral expectations, and data scope without needing to cover parameters or return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, maintaining focus on the tool's purpose and behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'unlocks access to other MCP tools' and explains it's required for initialization. It specifies the verb ('unlocks') and resource ('access to other MCP tools'), though it doesn't explicitly differentiate from siblings beyond implying it's a prerequisite for them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it's 'MANDATORY FOR AI AGENTS' and must be called first to avoid 'Session Not Initialized' errors for all other tools. It clearly states when to use it (before any other tool) and the consequences of skipping it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
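A minimal sketch of generating the claim file to serve at `/.well-known/glama.json`; the placeholder email is illustrative and must be replaced with the address on your Glama account.

```python
import json

# Claim-file structure from the listing above; the email is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# Write this to <web root>/.well-known/glama.json on your server's domain.
print(json.dumps(claim, indent=2))
```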
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.