blockscout-mcp-server
Server Details
Provide AI agents and automation tools with contextual access to blockchain data including balance…
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: blockscout/mcp-server
- GitHub Stars: 35
- Server Listing: Blockscout MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
16 tools
direct_api_call (Grade: B)
Call a raw Blockscout API endpoint for advanced or chain-specific data. Supports pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | Pagination cursor | |
| chain_id | Yes | The ID of the blockchain | |
| query_params | No | Query parameters | |
| endpoint_path | Yes | API path (e.g. /api/v2/stats). No query strings. | |
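Since `endpoint_path` must not carry a query string, a caller can validate arguments before invoking the tool. A minimal sketch, assuming nothing beyond the parameter names in the table above (the helper function itself is hypothetical, not part of the server):

```python
# Hypothetical argument builder for the direct_api_call tool. Field names
# match the parameter table; the validation rule enforces the documented
# "No query strings" constraint on endpoint_path.

def build_direct_api_call_args(chain_id, endpoint_path, query_params=None, cursor=None):
    """Assemble the argument dict, rejecting embedded query strings."""
    if "?" in endpoint_path:
        raise ValueError("endpoint_path must not embed a query string; pass query_params instead")
    args = {"chain_id": chain_id, "endpoint_path": endpoint_path}
    if query_params is not None:
        args["query_params"] = query_params
    if cursor is not None:
        args["cursor"] = cursor
    return args

args = build_direct_api_call_args("1", "/api/v2/stats")
```

Optional parameters are simply omitted from the dict when unused, matching the table's required/optional split.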
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden. It mentions pagination support (linking to the cursor parameter) but fails to disclose safety profile (read vs write), return format, error handling patterns, or rate limit implications for raw API access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The primary purpose is front-loaded in the first sentence, while the second sentence adds a specific behavioral note about pagination capability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a raw API tool with no output schema and no annotations, the description is insufficient. It omits critical context such as the response format (JSON), authentication requirements, whether the endpoint is restricted to read-only operations, or guidance on Blockscout API versioning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema—'Supports pagination' reinforces the cursor parameter's purpose, but doesn't elaborate on query parameter formats or endpoint_path construction rules beyond the schema's own example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'Call' and resource 'raw Blockscout API endpoint', and distinguishes itself from specific siblings by targeting 'advanced or chain-specific data'. However, it could more explicitly position this as an escape-hatch tool when specific query tools are insufficient.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'for advanced or chain-specific data' provides implied usage context suggesting when to use this over specific sibling tools like get_address_info. However, it lacks explicit when-not-to-use guidance or named alternatives, leaving the agent to infer the boundary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_by_ens_name (Grade: A)
Convert an ENS domain name to its corresponding Ethereum address.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ENS domain name to resolve | |
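The schema gives no format rules for `name`, so a caller may want a rough pre-check before the lookup. A naive sketch (the helper is hypothetical; canonical ENS normalization per ENSIP-15/UTS-46 handles Unicode cases this deliberately ignores):

```python
def build_ens_lookup_args(name):
    """Trim and lowercase an ENS name before lookup.

    Rough pre-check only: full ENS normalization (ENSIP-15 / UTS-46)
    covers Unicode and confusable characters that this sketch does not.
    """
    cleaned = name.strip().lower()
    if "." not in cleaned:
        raise ValueError("expected a full ENS name such as 'vitalik.eth'")
    return {"name": cleaned}
```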
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the conversion function but omits critical details: failure modes for unregistered names, supported chains (Ethereum mainnet vs. others), whether the operation is read-only, or return format specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is perfectly front-loaded with the action verb, contains zero redundancy, and efficiently communicates the tool's function without extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with full schema coverage, the description is adequate but minimal. It appropriately omits output details (no output schema exists), yet fails to mention error states or chain-specific constraints that would help an agent handle failures.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage ('ENS domain name to resolve'), establishing a baseline of 3. The description reinforces this by mentioning 'ENS domain name' but adds no additional semantic details about format requirements (e.g., .eth suffix) or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Convert') with clear input ('ENS domain name') and output ('Ethereum address'), precisely distinguishing it from sibling tools like get_address_info which work with raw addresses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description clearly indicates the tool transforms ENS names to addresses, it lacks explicit guidance on when to use this versus get_address_info or error handling (e.g., unregistered names). Usage is implied by the clear naming but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_address_info (Grade: A)
Get comprehensive information about an address including balance, first transaction, ENS, contract status, proxy info, and token details.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Address to get information about | |
| chain_id | Yes | The ID of the blockchain | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates well by enumerating the specific data returned (balance, ENS status, contract flags), effectively describing the output behavior. However, it omits explicit read-only safety declarations, rate limiting notes, or authentication requirements that would be essential given the lack of annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb. Every word earns its place: 'comprehensive' signals scope, and the enumerated list (balance, first transaction, ENS, etc.) efficiently conveys the tool's capabilities without redundancy. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description effectively compensates by listing the specific return data fields. For a 2-parameter tool with 100% schema coverage, this provides sufficient context for invocation, though it could enhance completeness by noting error behaviors or data freshness guarantees.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters ('Address to get information about', 'The ID of the blockchain'), establishing a baseline of 3. The description mentions 'address' in context but doesn't add syntactic details (e.g., checksum format, chain_id values) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('address information'), listing exact data points returned (balance, first transaction, ENS, contract status, proxy info, token details). This scope differentiation implicitly distinguishes it from siblings like get_tokens_by_address or get_transactions_by_address, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The term 'comprehensive information' implies this tool is for broad address overviews rather than specific queries, suggesting implied usage patterns. However, it lacks explicit guidance on when to use this versus narrower siblings (e.g., get_tokens_by_address for token-only lookups) or prerequisites like address format requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block_info (Grade: A)
Get block information like timestamp, gas used, burnt fees, transaction count etc. Can optionally include transaction hashes.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| number_or_hash | Yes | Block number or hash | |
| include_transactions | No | Include transaction hashes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully enumerates the data fields returned (timestamp, gas, fees, count), which helps the agent understand the output. However, it fails to explicitly declare the read-only/safe nature of the operation or mention any rate limits, auth requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first front-loads the core functionality and return value examples; the second addresses the optional behavior. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description compensates well by listing specific return fields (timestamp, gas used, etc.). However, it could be improved by explicitly stating this is a read-only query operation and providing brief context on parameter formats (e.g., hex vs decimal for block hash).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (chain_id, number_or_hash, include_transactions are all documented). The description reinforces the include_transactions parameter behavior but does not add semantic value beyond the schema, such as expected formats for chain_id or block hash syntax. Baseline 3 is appropriate given the schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves block information and lists specific data points (timestamp, gas used, burnt fees, transaction count), effectively distinguishing it from sibling tools like get_block_number (which likely returns only the number) and get_transaction_info (which targets transactions). It loses one point for not explicitly contrasting with get_block_number despite the similar naming.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the optional transaction hash inclusion ('Can optionally include transaction hashes'), providing usage guidance for that specific parameter. However, it lacks explicit when-to-use guidance comparing this tool to siblings like direct_api_call or get_block_number, forcing the agent to infer from the listed return fields.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_block_number (Grade: A)
Get the block number and timestamp for a specific date/time or the latest block.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| datetime | No | ISO 8601 date/time to find the block for | If omitted, returns latest block |
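The `datetime` parameter expects ISO 8601, which the Python standard library can produce directly. A sketch of assembling the arguments (the helper name is an assumption; omitting the timestamp requests the latest block, per the table):

```python
from datetime import datetime, timezone

def build_block_number_args(chain_id, when=None):
    """Omit 'when' to request the latest block, per the parameter table."""
    args = {"chain_id": chain_id}
    if when is not None:
        # Normalize to UTC and render as ISO 8601 with a 'Z' suffix.
        args["datetime"] = when.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return args
```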
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context that the tool returns both block number and timestamp (not obvious from the name alone), but omits details about read-only safety, error conditions, or rate limiting that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get') and immediately clarifies the scope. There is no redundant or wasted language; every word serves to define the tool's functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (two primitive parameters, no nested objects) and complete schema coverage, the description is appropriately scoped. It compensates for the missing output schema by disclosing that timestamps are returned alongside block numbers, though it could further clarify the return data structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline score of 3. The description mentions the datetime parameter's optional behavior, but this largely echoes the schema documentation rather than adding new semantic meaning about parameter usage or format constraints beyond the ISO 8601 specification already present.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves block numbers and timestamps using specific verbs and resources. It clarifies the dual functionality (lookup by datetime vs. latest block), though it doesn't explicitly differentiate from sibling `get_block_info` which likely queries by block identifier rather than time.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by explaining that omitting the datetime parameter returns the latest block. However, it lacks explicit guidance on when to use this versus `get_block_info` or other blockchain query tools in the sibling set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chains_list (Grade: A)
Get the list of known blockchain chains with their IDs. Useful for getting a chain ID when the chain name is known.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
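The stated use case is resolving a chain name to its ID. A sketch of the consuming side, with the caveat that the tool publishes no output schema, so the response keys below (`name`, `chain_id`) are illustrative assumptions:

```python
# Assumed response shape for get_chains_list; the tool has no published
# output schema, so these keys are illustrative only.
sample_chains = [
    {"name": "Ethereum", "chain_id": "1"},
    {"name": "Polygon", "chain_id": "137"},
]

def chain_id_by_name(chains, name):
    """Resolve a chain name to its ID (case-insensitive)."""
    index = {c["name"].lower(): c["chain_id"] for c in chains}
    return index[name.lower()]
```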
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds the qualifier 'known' (implying a curated/predefined list of chains) and specifies that IDs are returned with the chains. However, it omits details about the response structure, caching behavior, or whether this requires authentication, which would be helpful given the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two sentences, both earning their place: the first declares the function, the second declares the use case. No redundancy or filler text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter lookup tool, the description is appropriately complete. It explains what is returned (chains with IDs) and why one would use it. The absence of output schema is a minor gap, but the description adequately compensates by mentioning the key data returned (IDs) for this simple utility function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the evaluation rules, zero parameters establishes a baseline score of 4. The description appropriately does not fabricate parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'the list of known blockchain chains with their IDs' using specific verb (Get) and resource. While it doesn't explicitly contrast with siblings like direct_api_call, the specific scope (listing available chains vs querying specific addresses/blocks) is inherently clear from the description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence 'Useful for getting a chain ID when the chain name is known' provides a concrete use case that implies when to use the tool. However, it lacks explicit guidance on when NOT to use it (e.g., when you already have the chain ID) or alternatives for chain discovery.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_contract_abi (Grade: A)
Get smart contract ABI (Application Binary Interface). Required for formatting function calls or interpreting contract data.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Smart contract address | |
| chain_id | Yes | The ID of the blockchain | |
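The description says the ABI is needed for formatting function calls. A sketch of that downstream use, assuming the tool returns entries in the standard Solidity JSON ABI format (the exact response wrapper is not documented here):

```python
# A fragment in the standard Solidity JSON ABI format; how the MCP tool
# wraps it in its response is an assumption.
sample_abi = [
    {"type": "function", "name": "balanceOf",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"type": "event", "name": "Transfer", "inputs": []},
]

def function_signatures(abi):
    """List 'name(type1,type2)' signatures, the form used to encode calls."""
    return [
        entry["name"] + "(" + ",".join(i["type"] for i in entry["inputs"]) + ")"
        for entry in abi
        if entry.get("type") == "function"
    ]
```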
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully explains the semantic purpose and value of the ABI (formatting calls/interpreting data), but omits operational details like return format, whether the contract must be verified, error handling, or network requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. The first defines the resource being fetched; the second explains the utility/value proposition. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a two-parameter retrieval tool, explaining what is fetched and why. However, given the lack of output schema, it could benefit from mentioning the return format (e.g., JSON ABI specification) or validation requirements (e.g., contract verification status).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters ('Smart contract address', 'The ID of the blockchain'), establishing a baseline of 3. The description adds no explicit parameter guidance, but the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a smart contract ABI and explains its utility ('formatting function calls or interpreting contract data'). However, it does not explicitly distinguish from siblings like `read_contract` or `inspect_contract_code`, which could confuse the agent about whether this tool executes calls or returns interface definitions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Required for formatting function calls' provides implied guidance on when to use the tool (as a prerequisite for contract interaction), but lacks explicit when-not-to-use guidance or named alternatives. It does not clarify whether to use this versus `read_contract` for accessing contract data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tokens_by_address (Grade: A)
Get ERC20 token holdings for an address with metadata and market data. Supports pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | Pagination cursor from previous response | |
| address | Yes | Wallet address | |
| chain_id | Yes | The ID of the blockchain | |
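Cursor pagination across the paginated tools follows the same pattern: feed each response's cursor into the next call. A sketch, where `call_tool` stands in for an MCP client callable and the response keys (`items`, `next_cursor`) are assumptions since no output schema is published:

```python
def paginate(call_tool, base_args):
    """Drain a cursor-paginated tool.

    'call_tool' stands in for an MCP client callable; the response keys
    ('items', 'next_cursor') are assumed, since the server documents only
    that a cursor from one response feeds the next call's 'cursor' param.
    """
    cursor = None
    while True:
        args = dict(base_args)
        if cursor is not None:
            args["cursor"] = cursor
        page = call_tool(args)
        yield from page.get("items", [])
        cursor = page.get("next_cursor")
        if cursor is None:
            break
```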
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It successfully indicates the return payload includes 'metadata and market data' and signals pagination behavior. However, it omits safety profile confirmation (though implied by 'Get'), rate limits, cache behavior, or error scenarios for invalid addresses.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core purpose (ERC20 holdings) and return value (metadata/market data). The second sentence adds the pagination capability. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and lack of output schema, the description adequately compensates by specifying the return data types (metadata, market data) and pagination support. It appropriately distinguishes from NFT and transfer tools. Could improve by noting whether zero-balance tokens are returned or if balances include raw/decimal values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all three parameters: chain_id, address, cursor are documented). With complete schema coverage, the baseline score is 3. The description does not add parameter-specific semantics beyond what the schema provides, but none are needed given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get ERC20 token holdings for an address with metadata and market data.' The explicit mention of 'ERC20' distinguishes it from sibling tool 'nft_tokens_by_address', while 'holdings' distinguishes it from 'get_token_transfers_by_address'. The verb 'Get' is specific and the resource scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'Supports pagination,' which provides a usage hint for handling large token sets. However, it lacks explicit when-to-use guidance comparing it to alternatives like 'get_token_transfers_by_address' (balances vs history) or 'get_address_info' (token-specific vs general). The ERC20 specification implicitly guides selection away from the NFT tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_token_transfers_by_address (Grade: B)
Get ERC-20 token transfers for an address within a time range. Supports pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Token contract address filter | |
| age_to | No | End date/time | |
| cursor | No | Pagination cursor | |
| address | Yes | Address to query | |
| age_from | Yes | Start date/time (ISO 8601) | |
| chain_id | Yes | The ID of the blockchain | |
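Note the asymmetry in the table: `age_from` is required while `age_to` and the `token` filter are optional. A sketch of assembling a time-range query under those rules (the helper is illustrative, not part of the server):

```python
from datetime import datetime, timezone

def build_transfer_query(chain_id, address, age_from, age_to=None, token=None):
    """age_from is required; age_to and the token contract filter are optional."""
    def iso(dt):
        # ISO 8601 in UTC with a 'Z' suffix.
        return dt.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")
    args = {"chain_id": chain_id, "address": address, "age_from": iso(age_from)}
    if age_to is not None:
        args["age_to"] = iso(age_to)
    if token is not None:
        args["token"] = token
    return args
```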
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Supports pagination' but lacks critical details: whether it returns both incoming/outgoing transfers, if failed transactions are included, rate limits, maximum time range limits, or the structure of returned data. The agent must infer safety (read-only) from the verb 'Get' alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. The first sentence establishes the core function and constraints; the second notes pagination support. Information is front-loaded with the specific resource type (ERC-20) mentioned immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 100% schema coverage and no output schema, the description covers the basic operation adequately. However, it lacks explanation of return values (crucial without output schema), doesn't clarify the difference between token transfers and native transactions, and omits behavioral constraints. Sufficient for basic selection but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds 'ERC-20' context not explicitly in the schema (which just mentions 'Token contract address filter'), and implies the pagination mechanism. It doesn't add syntax details for the ISO 8601 format or explain cursor construction beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'ERC-20 token transfers' (specific resource) 'for an address within a time range' (scope). The 'ERC-20' specificity helps distinguish from sibling 'nft_tokens_by_address' and 'get_transactions_by_address'. However, it doesn't explicitly differentiate from 'get_tokens_by_address' (likely balances vs. transfers), preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like 'get_transactions_by_address' (native currency) or 'get_tokens_by_address' (token balances). It mentions pagination support but doesn't explain when pagination is necessary or what cursor values look like.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transaction_info (B)
Get comprehensive transaction information including decoded input, token transfers, and fee breakdown.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | Yes | The ID of the blockchain | |
| transaction_hash | Yes | Transaction hash | |
| include_raw_input | No | Include raw transaction input data | |
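Given the parameter table above, a call to this tool would be wrapped in a standard MCP `tools/call` request. The sketch below builds that request as a plain dictionary; the JSON-RPC envelope follows the MCP wire format, and the hash is a placeholder, not a real transaction.

```python
import json

# Hypothetical MCP "tools/call" request for get_transaction_info.
# Argument names come from the parameter table; the envelope is the
# standard JSON-RPC shape used by MCP clients.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_transaction_info",
        "arguments": {
            "chain_id": "1",                      # Ethereum mainnet
            "transaction_hash": "0x" + "ab" * 32, # placeholder hash
            "include_raw_input": False,           # decoded input only
        },
    },
}

payload = json.dumps(request)
```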
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While it discloses what data is returned (decoded input, token transfers, fee breakdown), it fails to state whether this is a safe read-only operation, and says nothing about idempotency, rate limits, or error handling (e.g., behavior on an invalid hash).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence of 12 words. Information is front-loaded with the verb, and every clause earns its place by specifying distinct data categories returned. No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter lookup tool with no output schema, the description partially compensates by listing return value categories (decoded input, transfers, fees). However, lacking annotations and behavioral details (safety, errors), it is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema adequately documents all parameters. The description adds minimal semantic value beyond the schema, though it implicitly clarifies that decoded input is returned by default, with raw input available through the optional include_raw_input parameter. Baseline 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('transaction information') and distinguishes from siblings like 'get_transactions_by_address' by emphasizing specific data types returned (decoded input, token transfers, fee breakdown). However, it lacks explicit differentiation regarding when to use single-transaction lookup versus batch queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_transactions_by_address' (which queries by address) or prerequisites like needing a transaction hash from prior operations. No exclusions or failure modes mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transactions_by_address (A)
Get native currency transfers and smart contract interactions for an address. EXCLUDES token transfers. Requires age_from. Supports pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| age_to | No | End date/time (ISO 8601) | |
| cursor | No | Pagination cursor | |
| address | Yes | Address to query | |
| methods | No | Method signature filter (e.g. 0x304e6ade) | |
| age_from | Yes | Start date/time (ISO 8601) | |
| chain_id | Yes | The ID of the blockchain | |
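Since `age_from` is required and must be ISO 8601, the safest approach is to generate it from a datetime rather than hand-writing the string. A minimal sketch of the arguments, with a placeholder address:

```python
from datetime import datetime, timezone

# Build arguments for get_transactions_by_address. age_from is required
# and must be ISO 8601; age_to, cursor, and methods are optional and
# omitted here.
age_from = datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat()

arguments = {
    "chain_id": "1",              # Ethereum mainnet
    "address": "0x" + "00" * 20,  # placeholder address
    "age_from": age_from,         # e.g. "2024-01-01T00:00:00+00:00"
}
```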
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable behavioral context: 'EXCLUDES token transfers' defines scope limits, 'Supports pagination' indicates cursor behavior. However, lacks disclosure on return format, rate limits, or data freshness given the absence of annotations and output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences with zero waste. Front-loaded with core action ('Get native currency transfers...'), followed by critical exclusions and constraints. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 100% schema coverage and no output schema, description adequately covers scope (native vs contract), exclusions (tokens), and constraints (age_from). Could improve by describing return structure since no output schema exists, but captures essential operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. Description adds emphasis that age_from is required (though schema indicates this) and implies cursor usage via pagination mention, but does not add significant semantic detail beyond the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resources 'native currency transfers and smart contract interactions'. Explicitly states 'EXCLUDES token transfers' which effectively distinguishes it from the sibling tool get_token_transfers_by_address.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-not guidance via 'EXCLUDES token transfers' and notes pagination support. States 'Requires age_from' indicating a prerequisite. Lacks explicit naming of the sibling alternative (get_token_transfers_by_address) to use instead for token data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
inspect_contract_code (A)
Inspect a verified contract's source code or metadata. If file_name is omitted, returns metadata and file list.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Smart contract address | |
| chain_id | Yes | The ID of the blockchain | |
| file_name | No | Source file to inspect. If omitted returns metadata. | |
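The dual-mode behavior (metadata and file list when `file_name` is omitted, source code when it is provided) suggests a two-step usage pattern. The helper below is local to this sketch, not part of the server's API, and `Token.sol` is an assumed file name:

```python
# Two-step usage sketch for inspect_contract_code: first call without
# file_name to get metadata and the file list, then call again with a
# chosen file name to fetch that source file.
def build_args(chain_id, address, file_name=None):
    args = {"chain_id": chain_id, "address": address}
    if file_name is not None:  # omitted -> server returns metadata + file list
        args["file_name"] = file_name
    return args

first_call = build_args("1", "0x" + "11" * 20)               # metadata mode
second_call = build_args("1", "0x" + "11" * 20, "Token.sol")  # source mode
```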
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the critical constraint that only 'verified' contracts can be inspected and explains the dual-mode return behavior (metadata/file list vs. source code). However, it omits rate limits, error handling for unverified contracts, and output format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient, front-loaded sentences with zero redundancy. The first establishes the core purpose, while the second provides essential conditional logic regarding the optional parameter. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and absence of an output schema, the description adequately covers the tool's functionality for a read-only inspection operation. It compensates for missing annotations by noting the 'verified' requirement. Could be improved by describing the output structure or error cases, but sufficient for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by clarifying that omitting file_name returns both 'metadata and file list' (the schema only mentions 'metadata'), providing important semantic context for the optional parameter's behavior that isn't fully captured in the property description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Inspect') and target resource ('verified contract's source code or metadata'), distinguishing it from sibling tools like get_contract_abi (which returns ABI) and read_contract (which reads state). However, it lacks explicit cross-references to these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by explaining the conditional behavior when file_name is omitted versus provided. However, it lacks explicit guidance on when to use this tool versus siblings like get_contract_abi or direct_api_call, and the prerequisite that the contract must be verified is only implied by the adjective 'verified' rather than stated as a constraint with a failure mode.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_token_by_symbol (C)
Search for token addresses by symbol or name. Returns potential matches.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Token symbol or name to search | |
| chain_id | Yes | The ID of the blockchain | |
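Because the tool "Returns potential matches," a popular symbol can yield several candidates that the agent must disambiguate. The sketch below filters for an exact, case-insensitive symbol match; the result fields (`symbol`, `name`, `address`) are assumptions about the response shape, since no output schema is published.

```python
# Hypothetical result list for lookup_token_by_symbol("USDC") with
# assumed field names; addresses are placeholders.
candidates = [
    {"symbol": "USDC", "name": "USD Coin", "address": "0x" + "aa" * 20},
    {"symbol": "USDC.e", "name": "Bridged USDC", "address": "0x" + "bb" * 20},
]

def exact_matches(results, symbol):
    """Keep only results whose symbol matches exactly, ignoring case."""
    return [r for r in results if r["symbol"].lower() == symbol.lower()]

matches = exact_matches(candidates, "usdc")
```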
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It only adds that the tool 'Returns potential matches,' hinting at possible ambiguity/multiple results, but fails to state read-only safety, rate limits, or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences. The first sentence front-loads the core function. The second sentence ('Returns potential matches') earns its place by hinting at result cardinality, though it remains vague on content structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite simple parameters with full schema coverage, the absence of an output schema creates a significant gap. The description only notes 'potential matches' without specifying what data is returned (e.g., contract address, decimals, name), leaving the agent unprepared for the response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing complete documentation for both 'symbol' and 'chain_id' parameters. The description restates the search capability but does not add syntax details, examples, or format constraints (e.g., chain_id format) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool searches for 'token addresses by symbol or name,' specifying both the action and target resource. The phrase 'by symbol or name' implicitly distinguishes it from sibling address-based lookup tools like get_tokens_by_address, though it does not explicitly name them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus alternatives (e.g., 'use when you know the symbol but not the contract address'). The description implies usage through the parameter names but lacks when/when-not criteria or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nft_tokens_by_address (A)
Get NFT tokens (ERC-721, ERC-404, ERC-1155) owned by an address, grouped by collection. Supports pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | Pagination cursor | |
| address | Yes | NFT owner address | |
| chain_id | Yes | The ID of the blockchain | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full disclosure burden. It successfully communicates key behaviors: results are 'grouped by collection' and it 'Supports pagination'. However, it omits read-only safety confirmation, rate limits, error cases (e.g., invalid address), or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads core functionality, standards, and grouping behavior. Second sentence adds pagination capability. Every word earns its place; no redundant phrases or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and absence of output schema, description adequately covers tool purpose and key behavioral traits (grouping, pagination). However, for a blockchain query tool with no annotations, it could improve by briefly noting return structure or error handling patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (chain_id, address, cursor all documented), establishing baseline of 3. Description mentions 'pagination' (aligning with cursor) and 'owned by an address' (aligning with address), but does not add syntax details, example values, or constraints beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with resource 'NFT tokens', explicitly lists token standards (ERC-721, ERC-404, ERC-1155), and specifies 'grouped by collection' output format. This clearly distinguishes it from sibling tool 'get_tokens_by_address' (likely for fungible tokens) by scope and return structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies NFT token standards which implies usage context (use when querying non-fungible assets), but lacks explicit comparison to siblings like 'get_tokens_by_address' or guidance on when to use pagination versus filtering. No prerequisites or error conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_contract (A)
Call a smart contract view/pure function via eth_call and return the result. Requires the function ABI.
| Name | Required | Description | Default |
|---|---|---|---|
| abi | No | JSON ABI for the function being called | |
| args | No | JSON array of arguments | [] |
| block | No | Block number or 'latest' | latest |
| address | Yes | Smart contract address | |
| chain_id | Yes | The ID of the blockchain | |
| function_name | Yes | Function name matching the ABI | |
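The parameter table says `abi` is a JSON ABI and `args` a JSON array, so both are serialized strings in this sketch. The ABI fragment is the well-known ERC-20 `balanceOf` view function; the contract and owner addresses are placeholders.

```python
import json

# Arguments sketch for read_contract using the standard ERC-20
# balanceOf view function.
balance_of_abi = {
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}

arguments = {
    "chain_id": "1",
    "address": "0x" + "cc" * 20,             # token contract (placeholder)
    "function_name": "balanceOf",            # must match the ABI's name
    "abi": json.dumps(balance_of_abi),       # JSON ABI per the table
    "args": json.dumps(["0x" + "dd" * 20]),  # JSON array of arguments
    "block": "latest",                       # default per the table
}
```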
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the read-only nature via 'view/pure' and 'eth_call', and mentions the return of results. However, it lacks details on error handling (e.g., revert behavior), return value formatting (decoded vs raw), or rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence where every clause earns its place: the first clause defines the operation, the second specifies the mechanism and return, and the third states the critical prerequisite. No redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (6 parameters, no annotations, no output schema), the description covers the essential operational contract but leaves gaps. It does not describe the return value structure despite the absence of an output schema, nor does it explain error conditions or the implications of the block parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds semantic value by emphasizing the 'view/pure' constraint on function_name and highlighting the ABI requirement, though it doesn't elaborate on the block parameter's historical query capabilities or args formatting beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Call'), resource ('smart contract view/pure function'), and mechanism ('via eth_call'). It effectively distinguishes this from state-changing operations and sibling tools like get_contract_abi by specifying 'view/pure' and 'eth_call', though it doesn't explicitly contrast with direct_api_call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage constraints by specifying 'view/pure function' (indicating read-only use) and stating the prerequisite 'Requires the function ABI'. However, it lacks explicit guidance on when to use this versus direct_api_call or alternatives for state-changing operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
__unlock_blockchain_analysis__ (A)
Unlocks access to other MCP tools. All tools remain locked with a 'Session Not Initialized' error until this function is called. Returns essential rules for blockchain data interactions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
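The gatekeeper behavior described above implies a strict call ordering: invoke this tool once before anything else, or expect a 'Session Not Initialized' error. The toy session below models that ordering; it is an illustration of the contract, not the server's implementation.

```python
class Session:
    """Toy model of the unlock-first contract described in the listing."""

    def __init__(self):
        self.unlocked = False

    def call(self, tool):
        if tool == "__unlock_blockchain_analysis__":
            self.unlocked = True
            return "essential rules for blockchain data interactions"
        if not self.unlocked:
            # Matches the documented error for calls made before unlocking.
            return "error: Session Not Initialized"
        return f"ok: {tool}"

s = Session()
early = s.call("get_transaction_info")          # before unlocking
s.call("__unlock_blockchain_analysis__")        # initialize the session
late = s.call("get_transaction_info")           # after unlocking
```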
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden. It discloses the side effect (unlocks other tools) and return value content ('Returns essential rules for blockchain data interactions'). Minor gap: does not specify if operation is idempotent or if unlock expires.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second establishes prerequisite constraint and return value. Front-loaded with critical gatekeeper information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a zero-parameter initialization tool lacking output schema; explains the locking mechanism and what the function returns. Could improve by clarifying session persistence or whether tool must be called once or repeatedly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Description correctly implies no arguments required for unlocking. Baseline 4 applies for zero-parameter tools per scoring rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Unlocks access to other MCP tools' with specific verb (unlocks) and resource (access). It clearly distinguishes this initialization/gatekeeper tool from data-retrieval siblings like get_address_info and get_transaction_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisite requirement: 'All tools remain locked with a 'Session Not Initialized' error until this function is called.' This provides clear when-to-use guidance (call first) and the specific error condition that signals the need for this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.