
Blockscout MCP Server

Official

get_transaction_logs

Retrieve enriched transaction logs with decoded event parameters and values. Analyze smart contract events, track token transfers, monitor DeFi interactions, debug event emissions, and understand multi-contract flows. Supports pagination for extended data retrieval.

Instructions

Get comprehensive transaction logs. Unlike standard eth_getLogs, this tool returns enriched logs, primarily focusing on decoded event parameters with their types and values (if event decoding is applicable). Essential for analyzing smart contract events, tracking token transfers, monitoring DeFi protocol interactions, debugging event emissions, and understanding complex multi-contract transaction flows. **SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `chain_id` | Yes | The ID of the blockchain | |
| `cursor` | No | The pagination cursor from a previous response to get the next page of results. | |
| `transaction_hash` | Yes | Transaction hash | |

Implementation Reference

  • The primary handler function for the 'get_transaction_logs' tool. It fetches transaction logs from the Blockscout API, processes and curates the log items (including truncation handling), applies pagination, generates descriptive notes and data structure explanations, and returns a standardized ToolResponse with TransactionLogItem instances.
```python
async def get_transaction_logs(
    chain_id: Annotated[str, Field(description="The ID of the blockchain")],
    transaction_hash: Annotated[str, Field(description="Transaction hash")],
    ctx: Context,
    cursor: Annotated[
        str | None,
        Field(description="The pagination cursor from a previous response to get the next page of results."),
    ] = None,
) -> ToolResponse[list[TransactionLogItem]]:
    """
    Get comprehensive transaction logs.

    Unlike standard eth_getLogs, this tool returns enriched logs, primarily focusing on
    decoded event parameters with their types and values (if event decoding is applicable).
    Essential for analyzing smart contract events, tracking token transfers, monitoring
    DeFi protocol interactions, debugging event emissions, and understanding complex
    multi-contract transaction flows.

    **SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided
    next_call to get additional pages.
    """  # noqa: E501
    api_path = f"/api/v2/transactions/{transaction_hash}/logs"
    params = {}
    apply_cursor_to_params(cursor, params)

    await report_and_log_progress(
        ctx,
        progress=0.0,
        total=2.0,
        message=f"Starting to fetch transaction logs for {transaction_hash} on chain {chain_id}...",
    )

    base_url = await get_blockscout_base_url(chain_id)
    await report_and_log_progress(
        ctx,
        progress=1.0,
        total=2.0,
        message="Resolved Blockscout instance URL. Fetching transaction logs...",
    )

    response_data = await make_blockscout_request(base_url=base_url, api_path=api_path, params=params)

    original_items, was_truncated = _process_and_truncate_log_items(response_data.get("items", []))

    log_items_dicts: list[dict] = []
    for item in original_items:
        address_value = (
            item.get("address", {}).get("hash") if isinstance(item.get("address"), dict) else item.get("address")
        )
        curated_item = {
            "address": address_value,
            "block_number": item.get("block_number"),
            "topics": item.get("topics"),
            "data": item.get("data"),
            "decoded": item.get("decoded"),
            "index": item.get("index"),
        }
        if item.get("data_truncated"):
            curated_item["data_truncated"] = True
        log_items_dicts.append(curated_item)

    data_description = [
        "Items Structure:",
        "- `address`: The contract address that emitted the log (string)",
        "- `block_number`: Block where the event was emitted",
        "- `index`: Log position within the block",
        "- `topics`: Raw indexed event parameters (first topic is event signature hash)",
        "- `data`: Raw non-indexed event parameters (hex encoded). **May be truncated.**",
        "- `decoded`: If available, the decoded event with its name and parameters",
        "- `data_truncated`: (Optional) `true` if the `data` or `decoded` field was shortened.",
        "Event Decoding in `decoded` field:",
        (
            "- `method_call`: **Actually the event signature** "
            '(e.g., "Transfer(address indexed from, address indexed to, uint256 value)")'
        ),
        "- `method_id`: **Actually the event signature hash** (first 4 bytes of keccak256 hash)",
        "- `parameters`: Decoded event parameters with names, types, values, and indexing status",
    ]

    notes = None
    if was_truncated:
        notes = [
            (
                "One or more log items in this response had a `data` field that was "
                'too large and has been truncated (indicated by `"data_truncated": true`).'
            ),
            (
                "If the full log data is crucial for your analysis, you can retrieve the complete, "
                "untruncated logs for this transaction programmatically. For example, using curl:"
            ),
            f'`curl "{base_url}/api/v2/transactions/{transaction_hash}/logs"`',
            "You would then need to parse the JSON response and find the specific log by its index.",
        ]

    sliced_items, pagination = create_items_pagination(
        items=log_items_dicts,
        page_size=config.logs_page_size,
        tool_name="get_transaction_logs",
        next_call_base_params={"chain_id": chain_id, "transaction_hash": transaction_hash},
        cursor_extractor=extract_log_cursor_params,
    )
    log_items = [TransactionLogItem(**item) for item in sliced_items]

    await report_and_log_progress(ctx, progress=2.0, total=2.0, message="Successfully fetched transaction logs.")

    return build_tool_response(
        data=log_items,
        data_description=data_description,
        notes=notes,
        pagination=pagination,
    )
```
  • MCP server tool registration for get_transaction_logs, including annotations and structured_output setting.
```python
mcp.tool(
    structured_output=False,
    annotations=create_tool_annotations("Get Transaction Logs"),
)(get_transaction_logs)
```
  • Pydantic schema definitions for the tool's response: LogItemBase (base log fields), TransactionLogItem (specific to transaction logs, inherits LogItemBase), and the generic ToolResponse used by the handler.
```python
# --- Model for get_address_logs and get_transaction_logs Data Payloads ---
class LogItemBase(BaseModel):
    """Common fields for log items from Blockscout."""

    model_config = ConfigDict(extra="allow")  # Just to allow `data_truncated` field to be added to the response

    block_number: int | None = Field(None, description="The block where the event was emitted.")
    topics: list[str | None] | None = Field(None, description="Raw indexed event parameters.")
    data: str | None = Field(
        None,
        description="Raw non-indexed event parameters. May be truncated.",
    )
    decoded: dict[str, Any] | None = Field(None, description="Decoded event parameters, if available.")
    index: int | None = Field(None, description="The log's position within the block.")


# --- Model for get_address_logs Data Payload ---
class AddressLogItem(LogItemBase):
    """Represents a single log item when the address is redundant."""

    transaction_hash: str | None = Field(None, description="The transaction that triggered the event.")


# --- Model for get_transaction_logs Data Payload ---
class TransactionLogItem(LogItemBase):
    """Represents a single log item with its originating contract address."""

    address: str | None = Field(
        None,
        description="The contract address that emitted the log.",
    )


# --- The Main Standardized Response Model ---
class ToolResponse(BaseModel, Generic[T]):
    """A standardized, structured response for all MCP tools, generic over the data payload type."""

    data: T = Field(description="The main data payload of the tool's response.")
    data_description: list[str] | None = Field(
        None,
        description="A list of notes explaining the structure, fields, or conventions of the 'data' payload.",
    )
    notes: list[str] | None = Field(
        None,
        description=(
            "A list of important contextual notes, such as warnings about data truncation or data quality issues."
        ),
    )
    instructions: list[str] | None = Field(
        None,
        description="A list of suggested follow-up actions or instructions for the LLM to plan its next steps.",
    )
    pagination: PaginationInfo | None = Field(
        None,
        description="Pagination information, present only if the 'data' is a single page of a larger result set.",
    )
```
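The handler's curation loop, which flattens the nested `address` object and drops undocumented fields before the items become `TransactionLogItem` instances, can be distilled into a standalone, dependency-free helper. This is a sketch restating that transformation, not server code:

```python
def curate_log_item(item: dict) -> dict:
    """Keep only the documented fields of a raw Blockscout log item,
    flattening the address object and preserving the optional truncation flag."""
    address = item.get("address")
    curated = {
        "address": address.get("hash") if isinstance(address, dict) else address,
        "block_number": item.get("block_number"),
        "topics": item.get("topics"),
        "data": item.get("data"),
        "decoded": item.get("decoded"),
        "index": item.get("index"),
    }
    if item.get("data_truncated"):
        curated["data_truncated"] = True
    return curated
```

Note that fields outside the documented set (for example, extra metadata the API attaches to each log) are discarded, which is why the response's `data_description` can enumerate the item structure exhaustively.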
