
lldb_read_memory

Read-only · Idempotent

Read memory contents at a specified address in a C/C++ program, displayed in hex, binary, decimal, string, or instruction format for debugging analysis.

Instructions

Read and display memory contents at a specified address.

Memory can be displayed in various formats:
- 'x': Hexadecimal (default)
- 'b': Binary
- 'd': Decimal
- 's': String (null-terminated)
- 'i': Instructions (disassembly)

Args:
    params: ReadMemoryInput with address, count, and format

Returns:
    str: Memory contents in requested format
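
A hypothetical example payload for `params` (field names follow the input schema shown further down; the executable path, address, and breakpoint values are illustrative):

```python
# Example lldb_read_memory call payload (illustrative values).
params = {
    "executable": "./a.out",           # path to the debug target
    "address": "0x7fff5fbff000",       # hex address to read from
    "count": 32,                       # bytes to read (1..4096, default 64)
    "format": "s",                     # 's' = null-terminated string
    "breakpoint": "main",              # optional: run to main before reading
}
```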

Input Schema

| Name | Required | Description | Default |
| ------ | -------- | ----------- | ------- |
| params | Yes | | |

Output Schema

| Name | Required | Description | Default |
| ------ | -------- | ----------- | ------- |
| result | Yes | | |

Implementation Reference

  • The main handler function that implements the lldb_read_memory tool logic. It constructs LLDB commands to read memory at the specified address, optionally sets a breakpoint and runs to it, executes the memory read command, and formats the output as Markdown.
    ```python
    async def lldb_read_memory(params: ReadMemoryInput) -> str:
        """Read and display memory contents at a specified address.

        Memory can be displayed in various formats:
        - 'x': Hexadecimal (default)
        - 'b': Binary
        - 'd': Decimal
        - 's': String (null-terminated)
        - 'i': Instructions (disassembly)

        Args:
            params: ReadMemoryInput with address, count, and format

        Returns:
            str: Memory contents in requested format
        """
        commands = [f"target create {params.executable}"]

        if params.breakpoint:
            commands.extend(
                [
                    f"breakpoint set --name {params.breakpoint}",
                    "run",
                ]
            )

        mem_cmd = f"memory read --format {params.format} --count {params.count} {params.address}"
        commands.append(mem_cmd)

        if params.breakpoint:
            commands.append("quit")

        result = _run_lldb_script(commands)

        return f"## Memory at `{params.address}`\n\n```\n{result['output'].strip()}\n```"
    ```
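The command-building logic above can be sketched as a standalone function to show the exact LLDB script it produces. This is a hypothetical re-derivation for illustration, not the server's actual helper; `build_commands` and its parameters are made up here, mirroring the handler's branches:

```python
# Sketch of the LLDB command sequence the handler assembles
# (hypothetical helper; mirrors the logic in lldb_read_memory).
def build_commands(executable, address, count=64, fmt="x", breakpoint=None):
    commands = [f"target create {executable}"]
    if breakpoint:
        # Stop at the breakpoint so the process has a live address space.
        commands.extend([f"breakpoint set --name {breakpoint}", "run"])
    commands.append(f"memory read --format {fmt} --count {count} {address}")
    if breakpoint:
        commands.append("quit")
    return commands

print(build_commands("./a.out", "0x7fff5fbff000", 16, "x", "main"))
```

Note that without a breakpoint the script only creates the target and reads memory, so the read operates on the binary's static image rather than a running process.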
  • The @mcp.tool decorator registers the lldb_read_memory tool with the MCP server, specifying its name and annotations describing its behavior (read-only, idempotent, etc.).
    ```python
    @mcp.tool(
        name="lldb_read_memory",
        annotations={
            "title": "Read Memory",
            "readOnlyHint": True,
            "destructiveHint": False,
            "idempotentHint": True,
            "openWorldHint": False,
        },
    )
    ```
  • Pydantic model defining the input schema for the lldb_read_memory tool, including fields for executable path, memory address, byte count, output format, and optional breakpoint.
    ```python
    class ReadMemoryInput(BaseModel):
        """Input for reading memory."""

        model_config = ConfigDict(str_strip_whitespace=True)

        executable: str = Field(..., description="Path to the executable", min_length=1)
        address: str = Field(
            ..., description="Memory address to read from (hex, e.g., '0x7fff5fbff000')", min_length=1
        )
        count: int = Field(default=64, description="Number of bytes to read", ge=1, le=4096)
        format: str = Field(
            default="x",
            description="Output format: 'x' (hex), 'b' (binary), 'd' (decimal), 's' (string), 'i' (instructions)",
        )
        breakpoint: str | None = Field(
            default=None, description="Breakpoint location to stop at before reading memory"
        )
    ```
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety aspects. The description adds valuable behavioral context by explaining the different display formats available, which isn't covered by annotations. However, it doesn't mention potential limitations like address validity requirements or execution state dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with clear front-loading of the main purpose. The format list is useful but could be more concise. The Args/Returns section adds structure but duplicates some information. Overall efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, annotations cover safety aspects well, and the description adds format context. However, with 0% schema coverage and multiple parameters, the description doesn't fully compensate for missing parameter documentation. The existence of an output schema reduces the need to explain return values, but parameter semantics remain incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the schema provides no parameter descriptions. The description compensates by listing format options and mentioning address, count, and format parameters, but doesn't fully document all parameters (executable, breakpoint) or provide detailed semantics. The Args/Returns section adds some structure but remains incomplete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Read and display') and resource ('memory contents at a specified address'). It distinguishes from siblings like lldb_disassemble (which focuses on instructions) and lldb_examine_variables (which focuses on variables) by specifying memory reading functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through format options and parameter documentation, but doesn't explicitly state when to use this tool versus alternatives like lldb_disassemble or lldb_examine_variables. No explicit when-not-to-use guidance or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
