
lldb_run_command

Execute arbitrary LLDB debugging commands to inspect variables, set breakpoints, analyze crashes, or run any other LLDB operation for C/C++ programs.

Instructions

Execute an arbitrary LLDB command and return the output.

This is a flexible tool for running any LLDB command. Use this when
other specialized tools don't cover your specific need.

Common commands:
- 'help' - Show help for commands
- 'version' - Show LLDB version
- 'settings list' - Show all settings
- 'type summary list' - List type summaries
- 'platform list' - List available platforms
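Under the hood these commands are handed to the standard lldb CLI in batch mode. A minimal sketch (assuming the server wraps the official lldb executable, as the implementation below shows) of how one of the commands above maps to that invocation:

```python
# Sketch: how a command from the list above becomes a one-shot lldb invocation.
import shlex

command = "settings list"
argv = ["lldb", "--batch", "-o", command]  # non-interactive: run one command, then exit

# shlex.join quotes the multi-word command so it stays a single -o argument
print(shlex.join(argv))  # → lldb --batch -o 'settings list'
```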

Args:
    params: RunCommandInput containing the command and optional target

Returns:
    str: Command output or error message

Input Schema

Name     Required   Description   Default
params   Yes        —             —

Output Schema

Name     Required   Description   Default
result   Yes        —             —

Implementation Reference

  • The main handler function for the lldb_run_command tool. It takes RunCommandInput parameters, executes the LLDB command via the _run_lldb_command helper, and formats the output as a markdown code block on success or as an error message on failure.
    async def lldb_run_command(params: RunCommandInput) -> str:
        """Execute an arbitrary LLDB command and return the output.
    
        This is a flexible tool for running any LLDB command. Use this when
        other specialized tools don't cover your specific need.
    
        Common commands:
        - 'help' - Show help for commands
        - 'version' - Show LLDB version
        - 'settings list' - Show all settings
        - 'type summary list' - List type summaries
        - 'platform list' - List available platforms
    
        Args:
            params: RunCommandInput containing the command and optional target
    
        Returns:
            str: Command output or error message
        """
        result = _run_lldb_command(params.command, target=params.target, working_dir=params.working_dir)
    
        if result["success"]:
            return f"```\n{result['output']}\n```"
        else:
            return f"**Error:** {result['error']}\n\n**Output:**\n```\n{result['output']}\n```"
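A standalone restatement (a sketch, not the installed tool) of the handler's formatting logic; `fence` stands in for the literal triple backticks so the example stays self-contained:

```python
# Sketch of how the handler wraps a failed result for the agent.
result = {"success": False, "output": "", "error": "no target loaded"}
fence = "`" * 3  # the markdown code-fence delimiter

if result["success"]:
    message = f"{fence}\n{result['output']}\n{fence}"
else:
    message = (
        f"**Error:** {result['error']}\n\n"
        f"**Output:**\n{fence}\n{result['output']}\n{fence}"
    )

print(message.splitlines()[0])  # → **Error:** no target loaded
```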
  • Pydantic input schema (RunCommandInput) defining the parameters for the lldb_run_command tool: command (required string), target (optional string), working_dir (optional string).
    class RunCommandInput(BaseModel):
        """Input for running arbitrary LLDB commands."""
    
        model_config = ConfigDict(str_strip_whitespace=True)
    
        command: str = Field(
            ...,
            description="The LLDB command to execute (e.g., 'help', 'version', 'breakpoint list')",
            min_length=1,
            max_length=2000,
        )
        target: str | None = Field(
            default=None, description="Path to the executable to debug (optional)"
        )
        working_dir: str | None = Field(default=None, description="Working directory for the command")
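A stdlib-only sketch of the constraints the model enforces on `command` (`str_strip_whitespace=True` plus the 1–2000 character bound); the real model uses Pydantic, so this only illustrates the resulting behavior:

```python
# Sketch: the validation RunCommandInput applies to the `command` field.
def validate_command(raw: str) -> str:
    command = raw.strip()  # mirrors str_strip_whitespace=True
    if not 1 <= len(command) <= 2000:
        raise ValueError("command must be 1-2000 characters after stripping")
    return command

print(validate_command("  breakpoint list  "))  # → breakpoint list
```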
  • MCP tool registration using @mcp.tool decorator, specifying the tool name 'lldb_run_command' and annotations for client hints.
    @mcp.tool(
        name="lldb_run_command",
        annotations={
            "title": "Run LLDB Command",
            "readOnlyHint": False,
            "destructiveHint": False,
            "idempotentHint": False,
            "openWorldHint": False,
        },
    )
  • Helper function _run_lldb_command that executes LLDB commands via subprocess.run in batch mode, handling the target, args, and timeout, and returning a structured result with success, output, and error fields.
    def _run_lldb_command(
        command: str,
        target: str | None = None,
        args: list[str] | None = None,
        working_dir: str | None = None,
        timeout: int = 30,
    ) -> dict[str, Any]:
        """
        Execute an LLDB command and return the output.
    
        This runs LLDB in batch mode for simple commands.
        """
        cmd = [LLDB_EXECUTABLE]
    
        if target:
            cmd.extend(["--file", target])
    
        # Add batch commands
        cmd.extend(["--batch", "-o", command])
    
        if args:
            cmd.append("--")
            cmd.extend(args)
    
        try:
            result = subprocess.run(
                cmd, capture_output=True, text=True, timeout=timeout, cwd=working_dir or os.getcwd()
            )
            return {
                "success": result.returncode == 0,
                "output": result.stdout,
                "error": result.stderr if result.returncode != 0 else None,
                "return_code": result.returncode,
            }
        except subprocess.TimeoutExpired:
            return {
                "success": False,
                "output": "",
                "error": f"Command timed out after {timeout} seconds",
                "return_code": -1,
            }
        except FileNotFoundError:
            return {
                "success": False,
                "output": "",
                "error": f"LLDB executable not found at '{LLDB_EXECUTABLE}'. Please ensure LLDB is installed and in PATH.",
                "return_code": -1,
            }
        except Exception as e:
            return {"success": False, "output": "", "error": str(e), "return_code": -1}
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. Annotations indicate it's not read-only, idempotent, or destructive, but the description clarifies it's a 'flexible tool for running any LLDB command' and provides common command examples (e.g., 'help', 'version'), which helps the agent understand typical use cases and potential outputs. However, it doesn't mention rate limits, authentication needs, or side effects, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, followed by usage guidelines, common commands, and parameter/return details. Every sentence adds value without redundancy, and it efficiently conveys necessary information in a compact format, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (executing arbitrary commands) and the presence of an output schema (which covers return values), the description is mostly complete. It covers purpose, usage guidelines, examples, and a parameter overview. It could still address potential errors or side effects of commands, but the output schema and annotations provide enough structural support to make it sufficiently complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args' section that explains the 'params' parameter contains 'RunCommandInput' with a command and optional target, and a 'Returns' section noting the output is a string. However, schema description coverage is 0%, meaning the input schema lacks descriptions for its properties. The description partially compensates by listing common commands and mentioning the target, but it doesn't fully detail all parameter semantics (e.g., working_dir usage or command constraints beyond examples).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute an arbitrary LLDB command and return the output.' It specifies the verb ('execute'), resource ('LLDB command'), and distinguishes from siblings by noting it's for 'any LLDB command' when 'other specialized tools don't cover your specific need.' This explicit differentiation from specialized sibling tools like lldb_backtrace or lldb_set_breakpoint makes it highly specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this when other specialized tools don't cover your specific need.' This directly addresses alternatives by referencing the sibling tools (e.g., lldb_backtrace, lldb_set_breakpoint) without naming them individually, effectively guiding the agent to prefer specialized tools first and use this as a fallback.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

