lldb_run_command

Execute any LLDB debugging command to inspect variables, set breakpoints, analyze crashes, or run specialized debugging operations in C/C++ programs.

Instructions

Execute an arbitrary LLDB command and return the output.

This is a flexible tool for running any LLDB command. Use this when other specialized tools don't cover your specific need.

Common commands:

- `help`: show help for commands
- `version`: show the LLDB version
- `settings list`: show all settings
- `type summary list`: list type summaries
- `platform list`: list available platforms

Args:

- `params`: RunCommandInput containing the command and optional target

Returns:

- `str`: command output or error message

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| params | Yes | RunCommandInput containing the command and optional target | - |
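For illustration, a request body matching this schema might look like the following (the target path and working directory are hypothetical examples, and the arguments are assumed to be passed under the `params` field named above):

```json
{
  "params": {
    "command": "breakpoint list",
    "target": "./a.out",
    "working_dir": null
  }
}
```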

Implementation Reference

  • The main async handler function for the 'lldb_run_command' tool. It takes RunCommandInput parameters, executes the LLDB command via _run_lldb_command helper, and formats the markdown output.
````python
async def lldb_run_command(params: RunCommandInput) -> str:
    """Execute an arbitrary LLDB command and return the output.

    This is a flexible tool for running any LLDB command. Use this when
    other specialized tools don't cover your specific need.

    Common commands:
    - 'help' - Show help for commands
    - 'version' - Show LLDB version
    - 'settings list' - Show all settings
    - 'type summary list' - List type summaries
    - 'platform list' - List available platforms

    Args:
        params: RunCommandInput containing the command and optional target

    Returns:
        str: Command output or error message
    """
    result = _run_lldb_command(params.command, target=params.target, working_dir=params.working_dir)
    if result["success"]:
        return f"```\n{result['output']}\n```"
    else:
        return f"**Error:** {result['error']}\n\n**Output:**\n```\n{result['output']}\n```"
````
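The handler's success/error branching can be exercised in isolation. The sketch below is a standalone copy of just the formatting logic; `format_result` and the sample result dicts are illustrative, not part of the server:

```python
FENCE = "`" * 3  # a triple backtick, built programmatically to keep this sample readable

def format_result(result: dict) -> str:
    # Mirror the handler's branching: fenced output on success,
    # an error message plus the raw output otherwise.
    if result["success"]:
        return f"{FENCE}\n{result['output']}\n{FENCE}"
    return f"**Error:** {result['error']}\n\n**Output:**\n{FENCE}\n{result['output']}\n{FENCE}"

ok = format_result({"success": True, "output": "lldb version 17.0.0", "error": None})
bad = format_result({"success": False, "output": "", "error": "Command timed out after 30 seconds"})
```

Returning the error alongside whatever partial output LLDB produced lets the client show both to the user rather than discarding the stdout of a failed run.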
  • Pydantic BaseModel defining the input schema for the tool, including required 'command' field and optional 'target' and 'working_dir'.
```python
class RunCommandInput(BaseModel):
    """Input for running arbitrary LLDB commands."""

    model_config = ConfigDict(str_strip_whitespace=True)

    command: str = Field(
        ...,
        description="The LLDB command to execute (e.g., 'help', 'version', 'breakpoint list')",
        min_length=1,
        max_length=2000,
    )
    target: str | None = Field(
        default=None, description="Path to the executable to debug (optional)"
    )
    working_dir: str | None = Field(default=None, description="Working directory for the command")
```
  • The @mcp.tool decorator that registers the lldb_run_command function as an MCP tool with the specified name and annotations/hints.
```python
@mcp.tool(
    name="lldb_run_command",
    annotations={
        "title": "Run LLDB Command",
        "readOnlyHint": False,
        "destructiveHint": False,
        "idempotentHint": False,
        "openWorldHint": False,
    },
)
```
  • Supporting helper function that executes a single LLDB command using subprocess.run in batch mode, capturing output and handling errors/timeouts. Called by the handler.
```python
def _run_lldb_command(
    command: str,
    target: str | None = None,
    args: list[str] | None = None,
    working_dir: str | None = None,
    timeout: int = 30,
) -> dict[str, Any]:
    """Execute an LLDB command and return the output.

    This runs LLDB in batch mode for simple commands.
    """
    cmd = [LLDB_EXECUTABLE]
    if target:
        cmd.extend(["--file", target])
    # Add batch commands
    cmd.extend(["--batch", "-o", command])
    if args:
        cmd.append("--")
        cmd.extend(args)
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout, cwd=working_dir or os.getcwd()
        )
        return {
            "success": result.returncode == 0,
            "output": result.stdout,
            "error": result.stderr if result.returncode != 0 else None,
            "return_code": result.returncode,
        }
    except subprocess.TimeoutExpired:
        return {
            "success": False,
            "output": "",
            "error": f"Command timed out after {timeout} seconds",
            "return_code": -1,
        }
    except FileNotFoundError:
        return {
            "success": False,
            "output": "",
            "error": f"LLDB executable not found at '{LLDB_EXECUTABLE}'. Please ensure LLDB is installed and in PATH.",
            "return_code": -1,
        }
    except Exception as e:
        return {"success": False, "output": "", "error": str(e), "return_code": -1}
```
