
Cryo MCP Server

by z80dev

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| ETH_RPC_URL | Yes | Default Ethereum RPC URL to use when not specified via the command line | (none) |
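
As a minimal launch sketch, assuming the package is installed and provides a `cryo-mcp` console script (the command name and the RPC endpoint below are illustrative, not confirmed by this listing):

```
import os
import subprocess

# Set the one required variable, then start the server as a child process.
env = dict(os.environ, ETH_RPC_URL="https://eth.example.com/rpc")  # placeholder endpoint
subprocess.run(["cryo-mcp"], env=env)  # assumes a 'cryo-mcp' entry point; check the project docs
```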


Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

Tools

Functions exposed to the LLM to take actions

list_datasets

Return a list of all available cryo datasets.
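
A quick discovery sketch, assuming the tools are exposed as Python callables in your client session (as the examples in this listing present them) and that this tool returns an iterable of dataset names:

```
# Enumerate every dataset cryo can extract before deciding what to query.
for name in list_datasets():
    print(name)
```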

query_dataset
Download blockchain data and return the file paths where the data is stored.

IMPORTANT WORKFLOW NOTE: When running SQL queries, use this function first to download data, then pass the returned file paths to query_sql() to execute SQL on those files.

Example workflow for SQL:
1. First download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get the file paths: files = result.get('files', [])
3. Run the SQL query: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances', 'erc20_transfers', etc.), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example:
- For the 'balances' dataset, use the contract parameter for the address you want balances for: query_dataset('balances', blocks='1000:1010', contract='0x123...')
- For 'logs' or 'erc20_transfers', use the contract parameter for the contract address: query_dataset('logs', blocks='1000:1010', contract='0x123...')

To check what parameters a dataset requires, always use lookup_dataset() first: lookup_dataset('balances')  # will show required parameters

Args:
  dataset: The name of the dataset to query (e.g., 'logs', 'transactions', 'balances')
  blocks: Block range specification as a string (e.g., '1000:1010')
  start_block: Start block number as an integer (alternative to blocks)
  end_block: End block number as an integer (alternative to blocks)
  use_latest: If True, query the latest block
  blocks_from_latest: Number of blocks before the latest to include (e.g., 10 means latest-10 to latest)
  contract: Contract address to filter by. IMPORTANT: use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
  output_format: Output format (json, csv, parquet); use 'parquet' for SQL queries
  include_columns: Columns to include alongside the defaults
  exclude_columns: Columns to exclude from the defaults

Returns:
  Dictionary containing the file paths where the downloaded data is stored
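
Putting the workflow note above together, a minimal end-to-end sketch (same callable-tools assumption; the dataset and block range are illustrative):

```
# 1. Download ten blocks of transactions; the server picks the output paths.
result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')

# 2. Collect the reported file paths.
files = result.get('files', [])

# 3. Query them with DuckDB SQL via query_sql().
rows = query_sql(
    f"SELECT COUNT(*) AS tx_count FROM read_parquet('{files[0]}')",
    files=files,
)
```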
lookup_dataset
Look up a specific dataset and return detailed information about it.

IMPORTANT: Always use this function before querying a new dataset to understand its required parameters and schema.

The returned information includes:
1. Required parameters for the dataset (IMPORTANT for datasets like 'balances' that need an address)
2. Schema details showing available columns and data types
3. Example queries for the dataset

When the dataset requires specific parameters like 'address' (for 'balances'), ALWAYS use the 'contract' parameter in query_dataset() to pass these values.

Example: For the 'balances' dataset, lookup_dataset('balances') will show that it requires an 'address' parameter. You should then query it using: query_dataset('balances', blocks='1000:1010', contract='0x1234...')

Args:
  name: The name of the dataset to look up
  sample_start_block: Optional start block for sample data (integer)
  sample_end_block: Optional end block for sample data (integer)
  use_latest_sample: If True, use the latest block for sample data
  sample_blocks_from_latest: Number of blocks before the latest to include in the sample

Returns:
  Detailed information about the dataset, including its schema and available fields
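
A sketch of the lookup-first pattern described above (callable-tools assumption; the address is a placeholder):

```
# Inspect the dataset first; the result should surface its required parameters.
info = lookup_dataset('balances')
print(info)

# Per the note above, the address still goes through 'contract', even though
# cryo's native parameter for this dataset is 'address'.
result = query_dataset('balances', blocks='1000:1010', contract='0x1234...')
```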
get_transaction_by_hash
Get detailed information about a transaction by its hash.

Args:
  tx_hash: The transaction hash to look up

Returns:
  Detailed information about the transaction
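
For completeness, a one-line sketch (callable-tools assumption; the hash is a placeholder, not a real transaction):

```
tx = get_transaction_by_hash('0x' + 'ab' * 32)  # placeholder 32-byte hash
print(tx)
```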
get_latest_ethereum_block
Get information about the latest Ethereum block.

Returns:
  Information about the latest block, including the block number
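
A small sketch of anchoring a relative range on the chain head (callable-tools assumption):

```
# Fetch the head, then pull the ten most recent blocks relative to it.
latest = get_latest_ethereum_block()
print(latest)
recent = query_dataset('blocks', blocks_from_latest=10, output_format='parquet')
```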
query_sql
Run a SQL query against downloaded blockchain data files.

IMPORTANT WORKFLOW: Use this function after calling query_dataset to download data. Pass the file paths returned by query_dataset as input to this function.

Workflow steps:
1. Download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get the file paths: files = result.get('files', [])
3. Execute SQL using either:
   - Direct table references: query_sql("SELECT * FROM transactions", files=files)
   - read_parquet(): query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

To see the schema of a file, use get_sql_table_schema(file_path) before writing your query.

DuckDB supports both approaches:
1. Direct table references (simpler): "SELECT * FROM blocks"
2. The read_parquet function (explicit): "SELECT * FROM read_parquet('/path/to/file.parquet')"

Args:
  query: SQL query to execute; can use simple table names or read_parquet()
  files: List of parquet file paths to query (typically from query_dataset results)
  include_schema: Whether to include schema information in the result

Returns:
  Query results and metadata
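
The two query styles above, side by side in one sketch (callable-tools assumption):

```
result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
files = result.get('files', [])

# Per the description, these two forms should be equivalent.
by_name = query_sql("SELECT * FROM transactions LIMIT 5", files=files)
by_path = query_sql(f"SELECT * FROM read_parquet('{files[0]}') LIMIT 5", files=files)
```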
list_available_sql_tables
List all available parquet files that can be queried with SQL.

USAGE NOTES:
- This function lists parquet files that have already been downloaded
- Each file can be queried using read_parquet('/path/to/file.parquet') in your SQL
- For each file, this returns the file path, dataset type, and other metadata
- Use these file paths in your SQL queries with query_sql()

Returns:
  List of available files and their metadata
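
A discovery sketch (callable-tools assumption; field contents per the notes above):

```
# List what has already been downloaded; each entry should carry a path and a
# dataset type that can feed get_sql_table_schema() or query_sql().
for entry in list_available_sql_tables():
    print(entry)
```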
get_sql_table_schema
Get the schema and sample data for a specific parquet file.

WORKFLOW NOTE: Use this function to explore the structure of parquet files before writing SQL queries against them. It will show you:
1. All available columns and their data types
2. Sample data from the file
3. The total row count

Usage example:
1. Get the list of files: files = list_available_sql_tables()
2. For a specific file: schema = get_sql_table_schema(files[0]['path'])
3. Use the columns in your SQL: query_sql("SELECT column1, column2 FROM read_parquet('/path/to/file.parquet')")

Args:
  file_path: Path to the parquet file (from list_available_sql_tables or query_dataset)

Returns:
  Table schema information, including columns, data types, and sample data
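
The explore-then-query loop as one sketch (callable-tools assumption; the 'path' key and the selected column are illustrative, so take both from the actual output):

```
tables = list_available_sql_tables()
path = tables[0]['path']  # assumes at least one file has been downloaded

schema = get_sql_table_schema(path)
print(schema)  # columns, data types, sample rows, row count

# Pick real column names from the schema output; 'block_number' is illustrative.
query_sql(f"SELECT block_number FROM read_parquet('{path}') LIMIT 5", files=[path])
```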
query_blockchain_sql
Download blockchain data and run a SQL query in a single step.

CONVENIENCE FUNCTION: This combines query_dataset and query_sql into one call. You can write SQL queries using either approach:
1. Simple table references: "SELECT * FROM blocks LIMIT 10"
2. Explicit read_parquet: "SELECT * FROM read_parquet('/path/to/file.parquet') LIMIT 10"

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances', 'erc20_transfers', etc.), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example, for the 'balances' dataset, use the contract parameter for the address you want balances for:

```
query_blockchain_sql(
    sql_query="SELECT * FROM balances",
    dataset="balances",
    blocks='1000:1010',
    contract='0x123...'  # Address you want balances for
)
```

Examples:

```
# Using a simple table name
query_blockchain_sql(
    sql_query="SELECT * FROM blocks LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)

# Using read_parquet() (the path will be automatically replaced)
query_blockchain_sql(
    sql_query="SELECT * FROM read_parquet('/any/path.parquet') LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)
```

ALTERNATIVE WORKFLOW (more control): If you need more control, you can separate the steps:
1. Download data: result = query_dataset('blocks', blocks_from_latest=100, output_format='parquet')
2. Inspect the schema: schema = get_sql_table_schema(result['files'][0])
3. Run the SQL query: query_sql("SELECT * FROM blocks", files=result['files'])

Args:
  sql_query: SQL query to execute, using table names or read_parquet()
  dataset: The specific dataset to query (e.g., 'transactions', 'logs', 'balances'); if None, it will be extracted from the SQL query
  blocks: Block range specification as a string (e.g., '1000:1010')
  start_block: Start block number (alternative to blocks)
  end_block: End block number (alternative to blocks)
  use_latest: If True, query the latest block
  blocks_from_latest: Number of blocks before the latest to include
  contract: Contract address to filter by. IMPORTANT: use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
  force_refresh: Force download of new data even if it exists
  include_schema: Include schema information in the result

Returns:
  SQL query results and metadata
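
The single-step path in one sketch (callable-tools assumption):

```
# Download the last 100 blocks and aggregate them in one call; per the notes
# above, any parquet path in the SQL would be rewritten automatically.
out = query_blockchain_sql(
    sql_query="SELECT COUNT(*) AS n FROM blocks",
    dataset="blocks",
    blocks_from_latest=100,
    include_schema=True,
)
```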
get_sql_examples
Get example SQL queries for different blockchain datasets, using DuckDB SQL.

WORKFLOW TIPS:
1. First download data: result = query_dataset('dataset_name', blocks='...', output_format='parquet')
2. Inspect the schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=result['files'])

Or use the combined approach:
- query_blockchain_sql(sql_query="SELECT * FROM read_parquet('...')", dataset='blocks', blocks='...')

Returns:
  Dictionary of example queries, categorized by dataset type and workflow patterns
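
And a closing sketch for browsing those examples (callable-tools assumption; the dict shape is inferred from the Returns description):

```
examples = get_sql_examples()
for category, queries in examples.items():
    print(category)
    print(queries)
```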

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/z80dev/cryo-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.