
Blockscout MCP Server

Official

get_transaction_logs

Read-only

Retrieve enriched transaction logs with decoded event parameters to analyze smart contract events, track token transfers, and monitor DeFi protocol interactions.

Instructions

Get comprehensive transaction logs.
Unlike standard eth_getLogs, this tool returns enriched logs, primarily focusing on decoded event parameters with their types and values (if event decoding is applicable).
Essential for analyzing smart contract events, tracking token transfers, monitoring DeFi protocol interactions, debugging event emissions, and understanding complex multi-contract transaction flows.
**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| chain_id | Yes | The ID of the blockchain | — |
| transaction_hash | Yes | Transaction hash | — |
| cursor | No | The pagination cursor from a previous response to get the next page of results. | — |
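The schema marks `chain_id` and `transaction_hash` as required but leaves their formats implicit. A minimal client-side sanity check could look like the sketch below; the assumed formats (decimal string for `chain_id`, `0x`-prefixed 32-byte hex string for `transaction_hash`) are conventions, not guarantees from the schema.

```python
import re

# Assumed format: 0x followed by exactly 64 hex characters (32 bytes).
TX_HASH_RE = re.compile(r"^0x[0-9a-fA-F]{64}$")

def validate_args(chain_id: str, transaction_hash: str) -> None:
    """Fail fast on obviously malformed inputs before calling the tool."""
    if not chain_id.isdigit():
        raise ValueError(f"chain_id must be a decimal string, got {chain_id!r}")
    if not TX_HASH_RE.fullmatch(transaction_hash):
        raise ValueError("transaction_hash must be a 0x-prefixed 32-byte hex string")
```

Validating locally avoids spending a round trip on a request the Blockscout API would reject anyway.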

Implementation Reference

  • Main execution logic for the get_transaction_logs tool: fetches logs from Blockscout API, processes and curates them, handles pagination and truncation, and returns a standardized ToolResponse with TransactionLogItem instances.
    @log_tool_invocation
    async def get_transaction_logs(
        chain_id: Annotated[str, Field(description="The ID of the blockchain")],
        transaction_hash: Annotated[str, Field(description="Transaction hash")],
        ctx: Context,
        cursor: Annotated[
            str | None,
            Field(description="The pagination cursor from a previous response to get the next page of results."),
        ] = None,
    ) -> ToolResponse[list[TransactionLogItem]]:
        """
        Get comprehensive transaction logs.
        Unlike standard eth_getLogs, this tool returns enriched logs, primarily focusing on decoded event parameters with their types and values (if event decoding is applicable).
        Essential for analyzing smart contract events, tracking token transfers, monitoring DeFi protocol interactions, debugging event emissions, and understanding complex multi-contract transaction flows.
        **SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.
        """  # noqa: E501
        api_path = f"/api/v2/transactions/{transaction_hash}/logs"
        params = {}
    
        apply_cursor_to_params(cursor, params)
    
        await report_and_log_progress(
            ctx,
            progress=0.0,
            total=2.0,
            message=f"Starting to fetch transaction logs for {transaction_hash} on chain {chain_id}...",
        )
    
        base_url = await get_blockscout_base_url(chain_id)
    
        await report_and_log_progress(
            ctx, progress=1.0, total=2.0, message="Resolved Blockscout instance URL. Fetching transaction logs..."
        )
    
        response_data = await make_blockscout_request(base_url=base_url, api_path=api_path, params=params)
    
        original_items, was_truncated = _process_and_truncate_log_items(response_data.get("items", []))
    
        log_items_dicts: list[dict] = []
        for item in original_items:
            address_value = (
                item.get("address", {}).get("hash") if isinstance(item.get("address"), dict) else item.get("address")
            )
            curated_item = {
                "address": address_value,
                "block_number": item.get("block_number"),
                "topics": item.get("topics"),
                "data": item.get("data"),
                "decoded": item.get("decoded"),
                "index": item.get("index"),
            }
            if item.get("data_truncated"):
                curated_item["data_truncated"] = True
            log_items_dicts.append(curated_item)
    
        data_description = [
            "Items Structure:",
            "- `address`: The contract address that emitted the log (string)",
            "- `block_number`: Block where the event was emitted",
            "- `index`: Log position within the block",
            "- `topics`: Raw indexed event parameters (first topic is event signature hash)",
            "- `data`: Raw non-indexed event parameters (hex encoded). **May be truncated.**",
            "- `decoded`: If available, the decoded event with its name and parameters",
            "- `data_truncated`: (Optional) `true` if the `data` or `decoded` field was shortened.",
            "Event Decoding in `decoded` field:",
            (
                "- `method_call`: **Actually the event signature** "
                '(e.g., "Transfer(address indexed from, address indexed to, uint256 value)")'
            ),
            "- `method_id`: **Actually the event signature hash** (first 4 bytes of keccak256 hash)",
            "- `parameters`: Decoded event parameters with names, types, values, and indexing status",
        ]
    
        notes = None
        if was_truncated:
            notes = [
                (
                    "One or more log items in this response had a `data` field that was "
                    'too large and has been truncated (indicated by `"data_truncated": true`).'
                ),
                (
                    "If the full log data is crucial for your analysis, you can retrieve the complete, "
                    "untruncated logs for this transaction programmatically. For example, using curl:"
                ),
                f'`curl "{base_url}/api/v2/transactions/{transaction_hash}/logs"`',
                "You would then need to parse the JSON response and find the specific log by its index.",
            ]
    
        sliced_items, pagination = create_items_pagination(
            items=log_items_dicts,
            page_size=config.logs_page_size,
            tool_name="get_transaction_logs",
            next_call_base_params={"chain_id": chain_id, "transaction_hash": transaction_hash},
            cursor_extractor=extract_log_cursor_params,
        )
    
        log_items = [TransactionLogItem(**item) for item in sliced_items]
    
        await report_and_log_progress(ctx, progress=2.0, total=2.0, message="Successfully fetched transaction logs.")
    
        return build_tool_response(
            data=log_items,
            data_description=data_description,
            notes=notes,
            pagination=pagination,
        )
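The function above attaches a `pagination` object whenever more pages exist, with a ready-made `next_call`. A client-side sketch of draining all pages is shown below; the `call_tool` callable and the exact shape of `pagination["next_call"]["params"]` are assumptions inferred from the description's `next_call` hint, not a documented client API.

```python
def fetch_all_logs(call_tool, chain_id: str, transaction_hash: str) -> list[dict]:
    """Follow `pagination.next_call` until the server stops returning a cursor.

    `call_tool(name, params) -> dict` is a stand-in for whatever MCP client
    invocation mechanism is in use.
    """
    params = {"chain_id": chain_id, "transaction_hash": transaction_hash}
    items: list[dict] = []
    while True:
        response = call_tool("get_transaction_logs", params)
        items.extend(response["data"])
        pagination = response.get("pagination")
        if not pagination:
            return items
        # next_call carries ready-to-use params, including the opaque cursor.
        params = pagination["next_call"]["params"]
```

Reusing the server-provided `next_call` params verbatim, rather than hand-assembling a cursor, keeps the client correct even if the cursor encoding changes.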
  • MCP server registration of the get_transaction_logs tool function.
    mcp.tool(
        structured_output=False,
        annotations=create_tool_annotations("Get Transaction Logs"),
    )(get_transaction_logs)
  • Pydantic model for individual transaction log items returned by the tool.
    # --- Model for get_transaction_logs Data Payload ---
    class TransactionLogItem(LogItemBase):
        """Represents a single log item with its originating contract address."""
    
        address: str | None = Field(
            None,
            description="The contract address that emitted the log.",
        )
  • Base Pydantic model for common log item fields used in get_transaction_logs response (extended by TransactionLogItem).
    class LogItemBase(BaseModel):
        """Common fields for log items from Blockscout."""
    
        model_config = ConfigDict(extra="allow")  # Just to allow `data_truncated` field to be added to the response
    
        block_number: int | None = Field(None, description="The block where the event was emitted.")
        topics: list[str | None] | None = Field(None, description="Raw indexed event parameters.")
        data: str | None = Field(
            None,
            description="Raw non-indexed event parameters. May be truncated.",
        )
        decoded: dict[str, Any] | None = Field(None, description="Decoded event parameters, if available.")
        index: int | None = Field(None, description="The log's position within the block.")
  • Import statement for the get_transaction_logs handler in the MCP server module.
    from blockscout_mcp_server.tools.transaction.get_transaction_logs import get_transaction_logs
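As an illustration of consuming the curated items, the sketch below filters returned logs for ERC-20 `Transfer` events by their first topic (the keccak256 hash of `Transfer(address,address,uint256)`, a well-known constant). The item shape mirrors the `TransactionLogItem` fields above; everything else is illustrative.

```python
# Well-known topic0 for the ERC-20 Transfer(address,address,uint256) event.
TRANSFER_TOPIC0 = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def transfer_logs(items: list[dict]) -> list[dict]:
    """Keep only log items whose first topic is the ERC-20 Transfer signature."""
    out = []
    for item in items:
        topics = item.get("topics") or []
        if topics and topics[0] == TRANSFER_TOPIC0:
            out.append(item)
    return out
```

When `decoded` is present, the same filtering can be done on the decoded event name, but matching on topic0 also works for logs the server could not decode.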
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds valuable behavioral context: it explains the enriched nature of the logs (decoded event parameters with types/values), mentions pagination support with specific implementation details, and clarifies the focus on event analysis rather than raw data. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the core purpose, differentiates from alternatives, lists use cases, and ends with pagination details. Every sentence adds value, though the use case list could be slightly more concise. Good front-loading of essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (transaction log analysis with decoding), the description provides good context about what makes this tool special (enriched logs, decoded parameters). With annotations covering safety/scope and 100% schema coverage, the main gap is the absence of an output schema, though the description gives some indication of the return format (the pagination field and next_call). It could benefit from more detail about response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters (chain_id, transaction_hash, cursor). The description doesn't add any parameter-specific semantics beyond what's in the schema, but it does mention pagination context which relates to the cursor parameter. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get comprehensive transaction logs' with specific differentiation from 'standard eth_getLogs' by emphasizing enriched logs with decoded event parameters. It distinguishes from sibling tools like get_transaction_info by focusing on logs/events rather than general transaction data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Essential for analyzing smart contract events, tracking token transfers, monitoring DeFi protocol interactions, debugging event emissions, and understanding complex multi-contract transaction flows.' This gives clear context for when to use this tool versus alternatives like get_transaction_info or get_transactions_by_address.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
