
lldb_evaluate

Evaluate C/C++ expressions during debugging to inspect variables, call functions, and analyze program state within the LLDB debugger context.

Instructions

Evaluate a C/C++ expression in the debugger context.

Expressions can include:

- Variable access: 'my_var', 'ptr->member'
- Array indexing: 'array[5]'
- Function calls: 'strlen(str)'
- Casts: '(int*)ptr'
- Arithmetic: 'x + y * 2'
- sizeof: 'sizeof(MyStruct)'

Args:
    params: EvaluateExpressionInput with expression and context

Returns:
    str: Expression result with type information
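As an illustrative sketch, here are some inputs a client might send to this tool. The field names follow the EvaluateExpressionInput schema shown below; the plain-dict payloads and the `validate` helper are hypothetical stand-ins for a real MCP tool call, not part of the server.

```python
# Hypothetical example payloads for lldb_evaluate. The required fields
# (executable, expression, breakpoint) come from EvaluateExpressionInput;
# args is optional.
examples = [
    {"executable": "./a.out", "breakpoint": "main", "expression": "sizeof(int)"},
    {"executable": "./a.out", "breakpoint": "process_data", "expression": "ptr->member"},
    {"executable": "./a.out", "breakpoint": "main", "expression": "array[5]",
     "args": ["--verbose"]},
]

def validate(payload: dict) -> bool:
    """Check that the required fields are present and non-empty strings."""
    required = ("executable", "expression", "breakpoint")
    return all(isinstance(payload.get(k), str) and payload[k] for k in required)

for p in examples:
    assert validate(p)
```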

Input Schema

| Name   | Required | Description | Default |
| ------ | -------- | ----------- | ------- |
| params | Yes      |             |         |

Implementation Reference

  • The handler function, decorated with @mcp.tool(name="lldb_evaluate"), evaluates C/C++ expressions using LLDB by creating a target, setting a breakpoint, running the program, executing the expression command, and formatting the output.

````python
@mcp.tool(
    name="lldb_evaluate",
    annotations={
        "title": "Evaluate Expression",
        "readOnlyHint": True,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": False,
    },
)
async def lldb_evaluate(params: EvaluateExpressionInput) -> str:
    """Evaluate a C/C++ expression in the debugger context.

    Expressions can include:
    - Variable access: 'my_var', 'ptr->member'
    - Array indexing: 'array[5]'
    - Function calls: 'strlen(str)'
    - Casts: '(int*)ptr'
    - Arithmetic: 'x + y * 2'
    - sizeof: 'sizeof(MyStruct)'

    Args:
        params: EvaluateExpressionInput with expression and context

    Returns:
        str: Expression result with type information
    """
    commands = [
        f"target create {params.executable}",
        f"breakpoint set --name {params.breakpoint}",
        "run" + (" " + " ".join(params.args) if params.args else ""),
        f"expression {params.expression}",
        "quit",
    ]
    result = _run_lldb_script(commands)
    return f"## Expression: `{params.expression}`\n\n```\n{result['output'].strip()}\n```"
````
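To make the handler's behavior concrete, the command sequence it hands to _run_lldb_script can be sketched as a standalone function (a minimal reconstruction of the list built above, not an exported API of the server):

```python
def build_commands(executable, breakpoint, expression, args=None):
    # Mirrors the LLDB command sequence assembled inside lldb_evaluate.
    return [
        f"target create {executable}",
        f"breakpoint set --name {breakpoint}",
        "run" + (" " + " ".join(args) if args else ""),
        f"expression {expression}",
        "quit",
    ]

cmds = build_commands("./a.out", "main", "sizeof(int)")
# cmds[0] == "target create ./a.out"; cmds[-1] == "quit"
```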
  • Pydantic BaseModel defining the input schema for the lldb_evaluate tool, including the executable path, the expression to evaluate, the breakpoint, and optional args.

```python
class EvaluateExpressionInput(BaseModel):
    """Input for evaluating expressions."""

    model_config = ConfigDict(str_strip_whitespace=True)

    executable: str = Field(..., description="Path to the executable", min_length=1)
    expression: str = Field(
        ...,
        description="C/C++ expression to evaluate (e.g., 'sizeof(int)', 'ptr->member', 'array[5]')",
        min_length=1,
    )
    breakpoint: str = Field(
        ..., description="Breakpoint location for evaluation context", min_length=1
    )
    args: list[str] | None = Field(
        default=None, description="Command-line arguments to pass to the program"
    )
```
  • Helper function _run_lldb_script that executes a list of LLDB commands via batch mode, capturing output and handling errors. Used by lldb_evaluate to run the LLDB session.

```python
def _run_lldb_script(
    commands: list[str],
    target: str | None = None,
    working_dir: str | None = None,
    timeout: int = 60,
) -> dict[str, Any]:
    """Execute multiple LLDB commands in sequence."""
    cmd = [LLDB_EXECUTABLE]
    if target:
        cmd.extend(["--file", target])
    cmd.append("--batch")
    for command in commands:
        cmd.extend(["-o", command])
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            timeout=timeout,
            cwd=working_dir or os.getcwd(),
        )
        return {
            "success": result.returncode == 0,
            "output": result.stdout,
            "error": result.stderr if result.returncode != 0 else None,
            "return_code": result.returncode,
        }
    except subprocess.TimeoutExpired:
        return {
            "success": False,
            "output": "",
            "error": f"Commands timed out after {timeout} seconds",
            "return_code": -1,
        }
    except Exception as e:
        return {"success": False, "output": "", "error": str(e), "return_code": -1}


# =============================================================================
# Initialize MCP Server
# =============================================================================

mcp = FastMCP(SERVER_NAME)


# =============================================================================
# Input Models
# =============================================================================

class ResponseFormat(str, Enum):
    """Output format for tool responses."""
```
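The argv construction inside _run_lldb_script can be verified without a debugger attached. This sketch reproduces only that argument-building step (the function name `build_argv` is hypothetical; the real helper goes on to invoke `subprocess.run`):

```python
def build_argv(commands, target=None, lldb="lldb"):
    # Reproduces the argv construction inside _run_lldb_script:
    # optional --file <target>, then --batch, then one -o flag per command.
    cmd = [lldb]
    if target:
        cmd.extend(["--file", target])
    cmd.append("--batch")
    for c in commands:
        cmd.extend(["-o", c])
    return cmd

argv = build_argv(["run", "quit"], target="./a.out")
# → ['lldb', '--file', './a.out', '--batch', '-o', 'run', '-o', 'quit']
```

Passing each command via its own `-o` flag keeps LLDB in batch mode: it executes the commands in order and exits, which is what lets the tool return a single captured transcript instead of holding an interactive session open.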

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/benpm/claude_lldb_mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.