lldb_analyze_crash

Analyze program crashes and core dumps to identify root causes by examining backtraces, register states, and local variables.

Instructions

Analyze a crashed program or core dump to determine the cause.

This tool loads a core dump or crashed executable and provides:

- Backtrace showing the crash location
- Register state at crash time
- Local variables in the crash frame
- Loaded modules information

Args:
    params: AnalyzeCrashInput with executable path and optional core file

Returns:
    str: Crash analysis including backtrace, registers, and variables

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| params | Yes | AnalyzeCrashInput with executable path and optional core file | — |
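For illustration, a call to this tool might carry arguments shaped like the schema above. The field names are taken from AnalyzeCrashInput; the example paths and the surrounding `{"name": ..., "arguments": ...}` envelope are assumptions, not part of this page:

```python
import json

# Hypothetical arguments for lldb_analyze_crash, mirroring AnalyzeCrashInput
arguments = {
    "executable": "./build/myapp",
    "core_file": "/var/cores/myapp.core",  # optional; omit to analyze the binary alone
    "response_format": "markdown",         # or "json" for structured output
    "working_dir": None,                   # optional working directory
}
payload = json.dumps({"name": "lldb_analyze_crash", "arguments": arguments})
```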

Implementation Reference

  • The core handler function implementing the tool's logic: creates LLDB target from executable/core dump, runs commands for backtrace (bt all), registers, frame variables, and image list, then formats output as Markdown or JSON.
````python
async def lldb_analyze_crash(params: AnalyzeCrashInput) -> str:
    """Analyze a crashed program or core dump to determine the cause.

    This tool loads a core dump or crashed executable and provides:
    - Backtrace showing the crash location
    - Register state at crash time
    - Local variables in the crash frame
    - Loaded modules information

    Args:
        params: AnalyzeCrashInput with executable path and optional core file

    Returns:
        str: Crash analysis including backtrace, registers, and variables
    """
    commands = []
    if params.core_file:
        commands.append(f"target create {params.executable} --core {params.core_file}")
    else:
        commands.append(f"target create {params.executable}")
    commands.extend(["bt all", "register read", "frame variable", "image list"])

    result = _run_lldb_script(commands, working_dir=params.working_dir)

    if params.response_format == ResponseFormat.JSON:
        return json.dumps(
            {
                "success": result["success"],
                "executable": params.executable,
                "core_file": params.core_file,
                "output": result["output"],
                "error": result.get("error"),
            },
            indent=2,
        )

    # Markdown format
    lines = [f"# Crash Analysis: {Path(params.executable).name}", ""]
    if params.core_file:
        lines.append(f"**Core file:** {params.core_file}")
        lines.append("")
    if result["success"]:
        lines.append("## Analysis Output")
        lines.append("```")
        lines.append(result["output"].strip())
        lines.append("```")
    else:
        lines.append("## Error")
        lines.append(f"```\n{result.get('error', 'Unknown error')}\n```")
    return "\n".join(lines)
````
  • Pydantic BaseModel defining the input parameters for the tool: required executable path, optional core file, response format (markdown/json), and working directory.
```python
class AnalyzeCrashInput(BaseModel):
    """Input for analyzing a crashed program."""

    model_config = ConfigDict(str_strip_whitespace=True)

    executable: str = Field(..., description="Path to the executable that crashed", min_length=1)
    core_file: str | None = Field(default=None, description="Path to the core dump file (optional)")
    response_format: ResponseFormat = Field(
        default=ResponseFormat.MARKDOWN,
        description="Output format: 'markdown' for human-readable or 'json' for structured data",
    )
    working_dir: str | None = Field(default=None, description="Working directory for the analysis")
```
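The model above leans on Pydantic for validation. As a rough stdlib-only illustration of the same constraints (whitespace stripping via `str_strip_whitespace`, a non-empty `executable` via `min_length=1`, and the two allowed response formats), with the helper name `validate_crash_input` being hypothetical:

```python
def validate_crash_input(raw: dict) -> dict:
    """Apply roughly the checks AnalyzeCrashInput enforces (illustrative sketch)."""
    executable = str(raw.get("executable", "")).strip()  # str_strip_whitespace=True
    if not executable:
        raise ValueError("executable is required and must be non-empty")  # min_length=1
    fmt = raw.get("response_format", "markdown")  # default ResponseFormat.MARKDOWN
    if fmt not in ("markdown", "json"):
        raise ValueError(f"unsupported response_format: {fmt}")
    return {
        "executable": executable,
        "core_file": raw.get("core_file"),       # optional
        "response_format": fmt,
        "working_dir": raw.get("working_dir"),   # optional
    }
```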
  • MCP decorator registering the tool with name 'lldb_analyze_crash' and annotations indicating it's read-only, idempotent, non-destructive, and not open-world.
```python
@mcp.tool(
    name="lldb_analyze_crash",
    annotations={
        "title": "Analyze Crash Dump",
        "readOnlyHint": True,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": False,
    },
)
```

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/benpm/claude_lldb_mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.