get_sql_examples
Get example SQL queries for blockchain datasets using DuckDB, with workflow tips for downloading Ethereum data, inspecting schemas, and running queries.
Instructions
Get example SQL queries for different blockchain datasets with DuckDB
SQL WORKFLOW TIPS:
1. First download data: result = query_dataset('dataset_name', blocks='...', output_format='parquet')
2. Inspect schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=result['files'])
OR use the combined approach:
- query_blockchain_sql(sql_query="SELECT * FROM read_parquet('...')", dataset='blocks', blocks='...')
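Taken together, the tips above amount to the following minimal sketch, written the way the examples in this document write tool calls (as plain Python functions); the dataset, block range, and SQL here are illustrative:

```python
# Step 1: download block data as parquet (block range is illustrative)
result = query_dataset('blocks', blocks='15000000:15000100', output_format='parquet')

# Step 2: inspect the schema of the first downloaded file
schema = get_sql_table_schema(result['files'][0])

# Step 3: run SQL against the downloaded files using a simple table name
rows = query_sql(
    "SELECT block_number, gas_used FROM blocks ORDER BY gas_used DESC LIMIT 10",
    files=result['files'],
)

# Or collapse all three steps into one call
rows = query_blockchain_sql(
    sql_query="SELECT block_number, gas_used FROM blocks ORDER BY gas_used DESC LIMIT 10",
    dataset='blocks',
    blocks='15000000:15000100',
)
```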
Returns:
Dictionary of example queries categorized by dataset type and workflow patterns
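Once the result is in hand, the categories can be browsed directly. A hypothetical sketch, assuming the tool is callable as a plain Python function returning the dictionary described above:

```python
examples = get_sql_examples()

# The keys are the categories: "basic_usage", "transactions", "blocks",
# "balances", "logs", "joins", "workflow_examples", "using_dataset_parameters"
print(list(examples.keys()))

# Each value is a list of example SQL strings (with "--" comment lines mixed in)
for query in examples["joins"]:
    print(query)
```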
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
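Since the tool takes no arguments, an MCP client passes an empty arguments object. A hypothetical sketch using the MCP Python SDK's stdio client (the `cryo-mcp` launch command is an assumption about how the server is started):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch command is an assumption; adjust to however the server is installed.
    params = StdioServerParameters(command="cryo-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # No input schema: pass an empty arguments dict.
            result = await session.call_tool("get_sql_examples", arguments={})
            print(result.content)

asyncio.run(main())
```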
Implementation Reference
- cryo_mcp/server.py:873-956 (handler): The handler function for the 'get_sql_examples' tool, decorated with @mcp.tool() for registration. It returns a hardcoded dictionary containing categorized example SQL queries for querying blockchain data with DuckDB.

```python
def get_sql_examples() -> Dict[str, List[str]]:
    """
    Get example SQL queries for different blockchain datasets with DuckDB

    SQL WORKFLOW TIPS:
    1. First download data: result = query_dataset('dataset_name', blocks='...', output_format='parquet')
    2. Inspect schema: schema = get_sql_table_schema(result['files'][0])
    3. Run SQL: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=result['files'])
    OR use the combined approach:
    - query_blockchain_sql(sql_query="SELECT * FROM read_parquet('...')", dataset='blocks', blocks='...')

    Returns:
        Dictionary of example queries categorized by dataset type and workflow patterns
    """
    return {
        "basic_usage": [
            "-- Option 1: Simple table names (recommended)",
            "SELECT * FROM blocks LIMIT 10",
            "SELECT * FROM transactions LIMIT 10",
            "SELECT * FROM logs LIMIT 10",
            "-- Option 2: Using read_parquet() with explicit file paths",
            "SELECT * FROM read_parquet('/path/to/blocks.parquet') LIMIT 10"
        ],
        "transactions": [
            "-- Option 1: Simple table reference",
            "SELECT * FROM transactions LIMIT 10",
            "SELECT block_number, COUNT(*) as tx_count FROM transactions GROUP BY block_number ORDER BY tx_count DESC LIMIT 10",
            "-- Option 2: Using read_parquet()",
            "SELECT from_address, COUNT(*) as sent_count FROM read_parquet('/path/to/transactions.parquet') GROUP BY from_address ORDER BY sent_count DESC LIMIT 10",
            "SELECT to_address, SUM(value) as total_eth FROM read_parquet('/path/to/transactions.parquet') GROUP BY to_address ORDER BY total_eth DESC LIMIT 10"
        ],
        "blocks": [
            "SELECT * FROM blocks LIMIT 10",
            "SELECT block_number, gas_used, transaction_count FROM blocks ORDER BY gas_used DESC LIMIT 10",
            "SELECT AVG(gas_used) as avg_gas, AVG(transaction_count) as avg_txs FROM blocks"
        ],
        "balances": [
            "-- IMPORTANT: When querying the balances dataset, use the 'contract' parameter to specify the address",
            "-- First download the data:",
            "# result = query_dataset('balances', blocks='15M:15.01M', contract='0x1234...', output_format='parquet')",
            "-- Then query the data:",
            "SELECT block_number, address, balance_f64 FROM balances ORDER BY block_number",
            "SELECT block_number, balance_f64, balance_f64/1e18 as balance_eth FROM balances ORDER BY block_number"
        ],
        "logs": [
            "SELECT * FROM logs LIMIT 10",
            "SELECT address, COUNT(*) as event_count FROM logs GROUP BY address ORDER BY event_count DESC LIMIT 10",
            "SELECT topic0, COUNT(*) as event_count FROM logs GROUP BY topic0 ORDER BY event_count DESC LIMIT 10"
        ],
        "joins": [
            "-- Join with simple table references",
            "SELECT t.block_number, COUNT(*) as tx_count, b.gas_used FROM transactions t JOIN blocks b ON t.block_number = b.block_number GROUP BY t.block_number, b.gas_used ORDER BY tx_count DESC LIMIT 10",
            "-- Join with read_parquet (useful for complex joins)",
            "SELECT l.block_number, l.address, COUNT(*) as log_count FROM read_parquet('/path/to/logs.parquet') l GROUP BY l.block_number, l.address ORDER BY log_count DESC LIMIT 10"
        ],
        "workflow_examples": [
            "-- Step 1: Download data with query_dataset",
            "# result = query_dataset(dataset='blocks', blocks='15000000:15000100', output_format='parquet')",
            "-- Step 2: Get schema info",
            "# schema = get_sql_table_schema(result['files'][0])",
            "-- Step 3: Run SQL query (simple table reference)",
            "# query_sql(query=\"SELECT * FROM blocks LIMIT 10\", files=result.get('files', []))",
            "",
            "-- Or use the combined function",
            "# query_blockchain_sql(sql_query=\"SELECT * FROM blocks LIMIT 10\", dataset='blocks', blocks='15000000:15000100')"
        ],
        "using_dataset_parameters": [
            "-- IMPORTANT: How to check required parameters for datasets",
            "-- Step 1: Look up the dataset to see required parameters",
            "# dataset_info = lookup_dataset('balances')",
            "# This will show: 'required parameters: address'",
            "",
            "-- Step 2: Use the contract parameter for ANY address parameter",
            "# For balances dataset, query_dataset('balances', blocks='1M:1.1M', contract='0x1234...')",
            "# For erc20_transfers, query_dataset('erc20_transfers', blocks='1M:1.1M', contract='0x1234...')",
            "",
            "-- Step 3: Always check the dataset description and schema before querying new datasets",
            "# This helps ensure you're passing the correct parameters"
        ]
    }
```
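For context on the registration side, here is a minimal sketch of the @mcp.tool() pattern with the MCP Python SDK's FastMCP; the server name and the abbreviated body are illustrative, not the actual contents of cryo_mcp/server.py:

```python
from typing import Dict, List

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cryo-mcp")  # server name here is an assumption

@mcp.tool()
def get_sql_examples() -> Dict[str, List[str]]:
    """Get example SQL queries for different blockchain datasets with DuckDB"""
    # Abbreviated: the real handler returns the full dictionary shown above.
    return {"basic_usage": ["SELECT * FROM blocks LIMIT 10"]}

if __name__ == "__main__":
    mcp.run()
```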