
lldb_backtrace

Analyze program crashes by retrieving stack traces showing function calls, source locations, and frame details for debugging C/C++ applications.

Instructions

Get a stack backtrace showing the call chain.

The backtrace shows:

- Frame numbers (0 is the current frame)
- Function names and addresses
- Source file and line numbers (if available)
- Module/library names

Args:
    params: BacktraceInput with the executable and stopping point

Returns:
    str: Stack backtrace with frame information

Input Schema

| Name   | Required | Description                                       | Default |
|--------|----------|---------------------------------------------------|---------|
| params | Yes      | BacktraceInput with executable and stopping point |         |

Implementation Reference

  • The handler function for the 'lldb_backtrace' tool, including the @mcp.tool decorator. It builds the LLDB command sequence (create the target, set a breakpoint if requested, run to the breakpoint or load the core dump), requests the backtrace with options for all threads and a frame limit, executes the commands via _run_lldb_script, parses the output into structured frames when JSON output is requested, and formats the result.

```python
@mcp.tool(
    name="lldb_backtrace",
    annotations={
        "title": "Get Backtrace",
        "readOnlyHint": True,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": False,
    },
)
async def lldb_backtrace(params: BacktraceInput) -> str:
    """Get a stack backtrace showing the call chain.

    The backtrace shows:
    - Frame numbers (0 is current frame)
    - Function names and addresses
    - Source file and line numbers (if available)
    - Module/library names

    Args:
        params: BacktraceInput with executable and stopping point

    Returns:
        str: Stack backtrace with frame information
    """
    commands = []
    if params.core_file:
        commands.append(f"target create {params.executable} --core {params.core_file}")
    else:
        commands.append(f"target create {params.executable}")
        if params.breakpoint:
            commands.append(f"breakpoint set --name {params.breakpoint}")
        commands.append("run" + (" " + " ".join(params.args) if params.args else ""))

    bt_cmd = "thread backtrace"
    if params.all_threads:
        bt_cmd = "thread backtrace all"
    bt_cmd += f" -c {params.limit}"
    commands.append(bt_cmd)

    if not params.core_file:
        commands.append("quit")

    result = _run_lldb_script(commands)

    if params.response_format == ResponseFormat.JSON:
        frames = _parse_backtrace(result["output"])
        return json.dumps(
            {"success": result["success"], "frames": frames, "raw_output": result["output"]},
            indent=2,
        )

    lines = ["## Stack Backtrace", "", "```", result["output"].strip(), "```"]
    return "\n".join(lines)
```
  • Pydantic input schema for the lldb_backtrace tool defining parameters: executable path, breakpoint, core dump, thread options, frame limit, program arguments, and output format.

```python
class BacktraceInput(BaseModel):
    """Input for getting a backtrace."""

    model_config = ConfigDict(str_strip_whitespace=True)

    executable: str = Field(..., description="Path to the executable", min_length=1)
    breakpoint: str | None = Field(
        default=None, description="Breakpoint location to stop at (or use with core file)"
    )
    core_file: str | None = Field(
        default=None, description="Path to core dump file for post-mortem analysis"
    )
    all_threads: bool = Field(default=False, description="Show backtraces for all threads")
    limit: int = Field(default=50, description="Maximum number of frames to show", ge=1, le=1000)
    args: list[str] | None = Field(
        default=None, description="Command-line arguments to pass to the program"
    )
    response_format: ResponseFormat = Field(
        default=ResponseFormat.MARKDOWN, description="Output format"
    )
```
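A tool call carrying this schema might look like the following argument payload. Every value here is illustrative (the paths and names are made up), but the constraints mirror the schema: `executable` must be non-empty and `limit` must fall in 1-1000.

```python
# Illustrative argument payload for lldb_backtrace; all values hypothetical.
params = {
    "executable": "./build/myapp",  # required, min_length=1
    "breakpoint": "main",           # optional stop location
    "core_file": None,              # or a path, for post-mortem analysis
    "all_threads": False,
    "limit": 50,                    # ge=1, le=1000
    "args": ["--verbose"],
    "response_format": "markdown",
}

# Minimal check of the schema's length and range constraints:
assert params["executable"] and 1 <= params["limit"] <= 1000
```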
  • MCP tool registration decorator specifying the name 'lldb_backtrace' and annotations describing tool behavior.

```python
@mcp.tool(
    name="lldb_backtrace",
    annotations={
        "title": "Get Backtrace",
        "readOnlyHint": True,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": False,
    },
)
```
  • Helper function that parses raw LLDB backtrace output into a structured list of frame dictionaries; used when response_format is JSON.

```python
def _parse_backtrace(output: str) -> list[dict[str, Any]]:
    """Parse LLDB backtrace output into structured data."""
    frames = []
    # The trailing $ anchors the match at end of line; without it the lazy
    # module group stops after a single character and the optional
    # function/offset/source groups all match empty.
    frame_pattern = re.compile(
        r"frame #(\d+): (0x[0-9a-fA-F]+) (.+?)"
        r"(?:`(.+?))?(?:\s+\+\s+(\d+))?(?:\s+at\s+(.+):(\d+))?$"
    )
    for line in output.split("\n"):
        match = frame_pattern.search(line)
        if match:
            frames.append(
                {
                    "frame_number": int(match.group(1)),
                    "address": match.group(2),
                    "module": match.group(3).strip() if match.group(3) else None,
                    "function": match.group(4).strip() if match.group(4) else None,
                    "offset": int(match.group(5)) if match.group(5) else None,
                    "file": match.group(6) if match.group(6) else None,
                    "line": int(match.group(7)) if match.group(7) else None,
                }
            )
    return frames
```
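As a quick illustration, here is how a typical frame line decomposes under an end-of-line-anchored frame pattern. The sample line is assumed (shaped like LLDB's output, not taken from a real session), and the anchor is what lets the optional groups capture instead of terminating the lazy module group early.

```python
import re

# End-anchored frame pattern, for illustration: the $ forces the lazy
# module group to extend so the function/offset/source groups capture.
frame_pattern = re.compile(
    r"frame #(\d+): (0x[0-9a-fA-F]+) (.+?)"
    r"(?:`(.+?))?(?:\s+\+\s+(\d+))?(?:\s+at\s+(.+):(\d+))?$"
)

# Hypothetical frame line in the shape LLDB prints:
line = "frame #0: 0x0000000100003f50 a.out`main + 16 at main.c:5"
m = frame_pattern.search(line)
frame = {
    "frame_number": int(m.group(1)),
    "address": m.group(2),
    "module": m.group(3),      # module before the backtick
    "function": m.group(4),    # symbol after the backtick
    "offset": int(m.group(5)) if m.group(5) else None,
    "file": m.group(6),
    "line": int(m.group(7)) if m.group(7) else None,
}
```

Frames without source info (e.g. system libraries) simply leave the `file` and `line` groups as `None`, since those groups are optional.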
  • Core helper function that executes a sequence of LLDB commands in batch mode via subprocess, capturing output and handling errors and timeouts. Used by the handler to run the backtrace commands.

```python
def _run_lldb_script(
    commands: list[str],
    target: str | None = None,
    working_dir: str | None = None,
    timeout: int = 60,
) -> dict[str, Any]:
    """Execute multiple LLDB commands in sequence."""
    cmd = [LLDB_EXECUTABLE]
    if target:
        cmd.extend(["--file", target])
    cmd.append("--batch")
    for command in commands:
        cmd.extend(["-o", command])
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout, cwd=working_dir or os.getcwd()
        )
        return {
            "success": result.returncode == 0,
            "output": result.stdout,
            "error": result.stderr if result.returncode != 0 else None,
            "return_code": result.returncode,
        }
    except subprocess.TimeoutExpired:
        return {
            "success": False,
            "output": "",
            "error": f"Commands timed out after {timeout} seconds",
            "return_code": -1,
        }
    except Exception as e:
        return {"success": False, "output": "", "error": str(e), "return_code": -1}
```

MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/benpm/claude_lldb_mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.