query_blockchain_sql

Query Ethereum blockchain data by combining SQL queries and direct dataset downloads in one step. Supports table references or explicit read_parquet() calls for precise data analysis.

Instructions

Download blockchain data and run SQL query in a single step

CONVENIENCE FUNCTION: This combines query_dataset and query_sql into one call.

You can write SQL queries using either approach:
1. Simple table references: "SELECT * FROM blocks LIMIT 10"
2. Explicit read_parquet: "SELECT * FROM read_parquet('/path/to/file.parquet') LIMIT 10"
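The two styles are interchangeable because the server substitutes the real downloaded file path into the query. As a rough illustration of what such a table-name-to-read_parquet rewrite could look like (this is an assumption about the mechanism, not the server's actual code), a minimal sketch:

```python
import re

def rewrite_table_ref(sql_query: str, table: str, parquet_path: str) -> str:
    """Replace a bare table name in FROM clauses with an explicit
    read_parquet() call. Illustrative only -- the real server may
    rewrite queries differently."""
    pattern = rf"\bFROM\s+{re.escape(table)}\b"
    replacement = f"FROM read_parquet('{parquet_path}')"
    return re.sub(pattern, replacement, sql_query, flags=re.IGNORECASE)

print(rewrite_table_ref("SELECT * FROM blocks LIMIT 10",
                        "blocks", "/data/blocks.parquet"))
# SELECT * FROM read_parquet('/data/blocks.parquet') LIMIT 10
```

Either way, the SQL that actually runs reads from the parquet files downloaded for the requested block range.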

DATASET-SPECIFIC PARAMETERS:
For datasets that require specific address parameters (like 'balances', 'erc20_transfers', etc.),
ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example:

- For 'balances' dataset: Use contract parameter for the address you want balances for
  ```
  query_blockchain_sql(
      sql_query="SELECT * FROM balances",
      dataset="balances",
      blocks='1000:1010',
      contract='0x123...'  # Address you want balances for
  )
  ```
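Values passed via `contract` follow the standard Ethereum address format: `0x` plus 40 hexadecimal characters. A minimal stdlib sketch of that format check (illustrative only, not part of the tool):

```python
import re

# Standard Ethereum address: 0x followed by 40 hex characters (20 bytes).
ETH_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_eth_address(value: str) -> bool:
    """True if value looks like a 20-byte hex Ethereum address."""
    return bool(ETH_ADDRESS_RE.match(value))

print(is_eth_address("0x" + "ab" * 20))  # True
print(is_eth_address("0x123..."))        # False: truncated placeholder
```

Checksummed (mixed-case) addresses also pass this check, since the pattern accepts both cases.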

Examples:
```
# Using simple table name
query_blockchain_sql(
    sql_query="SELECT * FROM blocks LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)

# Using read_parquet() (the path will be automatically replaced)
query_blockchain_sql(
    sql_query="SELECT * FROM read_parquet('/any/path.parquet') LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)
```

ALTERNATIVE WORKFLOW (more control):
If you need more control, you can separate the steps:
1. Download data: result = query_dataset('blocks', blocks_from_latest=100, output_format='parquet')
2. Inspect schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL query: query_sql("SELECT * FROM blocks", files=result['files'])
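To make the shape of that three-step workflow concrete, here is a sketch driven by stand-in stubs; the return shapes are assumptions inferred from the text above, not the server's actual implementation:

```python
# Stand-in stubs: the real tools live on the MCP server.
def query_dataset(dataset, blocks_from_latest=None, output_format="parquet"):
    # Assumed to return a dict listing the downloaded file paths.
    return {"files": [f"/tmp/{dataset}.parquet"]}

def get_sql_table_schema(path):
    # Assumed to return column metadata for one parquet file.
    return {"columns": ["block_number", "timestamp"]}

def query_sql(sql, files):
    # Assumed to run SQL against the listed parquet files.
    return {"rows": [], "files": files}

# 1. Download data
result = query_dataset("blocks", blocks_from_latest=100, output_format="parquet")
# 2. Inspect schema before writing the real query
schema = get_sql_table_schema(result["files"][0])
# 3. Run SQL query against the downloaded files
answer = query_sql("SELECT * FROM blocks", files=result["files"])
print(answer["files"])  # ['/tmp/blocks.parquet']
```

The split matters when you want to look at the schema before committing to a query, or reuse one download across several queries.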

Args:
    sql_query: SQL query to execute - using table names or read_parquet()
    dataset: The specific dataset to query (e.g., 'transactions', 'logs', 'balances')
             If None, will be extracted from the SQL query
    blocks: Block range specification as a string (e.g., '1000:1010')
    start_block: Start block number (alternative to blocks)
    end_block: End block number (alternative to blocks)
    use_latest: If True, query the latest block
    blocks_from_latest: Number of blocks before the latest to include
    contract: Contract address to filter by - IMPORTANT: Use this parameter for ALL address-based filtering
      regardless of the parameter name in the native cryo command (address, contract, etc.)
    force_refresh: Force download of new data even if it exists
    include_schema: Include schema information in the result
    
Returns:
    SQL query results and metadata
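Since the block-selection parameters overlap (`blocks` vs `start_block`/`end_block` vs `use_latest`/`blocks_from_latest`), one plausible way to collapse them into a concrete `(start, end)` pair, assuming the `'start:end'` string syntax shown above, is sketched below; the precedence order is an assumption for illustration, not documented behavior:

```python
def resolve_block_range(latest_block, blocks=None, start_block=None,
                        end_block=None, use_latest=False,
                        blocks_from_latest=None):
    """Collapse the overlapping block-selection parameters into
    (start, end). Precedence here is illustrative only."""
    if blocks is not None:
        # 'start:end' string form, e.g. '1000:1010'
        start, end = blocks.split(":")
        return int(start), int(end)
    if blocks_from_latest is not None:
        return latest_block - blocks_from_latest, latest_block
    if use_latest:
        return latest_block, latest_block
    if start_block is not None and end_block is not None:
        return start_block, end_block
    raise ValueError("no block range specified")

print(resolve_block_range(2000, blocks="1000:1010"))      # (1000, 1010)
print(resolve_block_range(2000, blocks_from_latest=100))  # (1900, 2000)
```

Whatever the server's real precedence is, supplying exactly one of these selection styles per call avoids ambiguity.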

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| blocks | No | | |
| blocks_from_latest | No | | |
| contract | No | | |
| dataset | No | | |
| end_block | No | | |
| force_refresh | No | | |
| include_schema | No | | |
| sql_query | Yes | | |
| start_block | No | | |
| use_latest | No | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the two SQL query approaches, dataset-specific parameter requirements, and the return format ('SQL query results and metadata'). However, it doesn't mention performance characteristics, rate limits, or error conditions that would be helpful for a complex tool with 10 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, convenience note, SQL approaches, dataset-specific guidance, examples, alternative workflow, and parameter details). While comprehensive, some sections could be tightened: the examples are quite detailed, and the overall text is long for a tool description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 10 parameters, 0% schema coverage, no annotations, and no output schema, the description does an excellent job of providing context. It explains the tool's relationship to siblings, provides usage examples, clarifies parameter semantics, and describes return values. The main gap is lack of information about performance, limits, or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 10 parameters, the description compensates excellently. It provides detailed explanations for key parameters like 'contract' with specific examples, clarifies parameter relationships (blocks vs start_block/end_block), and explains default behaviors. The 'Args' section adds crucial semantic context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Download blockchain data and run SQL query in a single step' and explicitly distinguishes it from siblings by naming 'query_dataset' and 'query_sql' as separate tools that this one combines. It specifies the exact functionality and differentiates from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it states this is a 'CONVENIENCE FUNCTION' that combines two other tools, and provides an 'ALTERNATIVE WORKFLOW' section detailing when to use the separate steps for 'more control'. It clearly delineates the trade-offs between convenience and control.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/z80dev/cryo-mcp'
