
debug_problem

Systematically debug software issues by generating hypotheses, test strategies, and resolution steps for described problems.

Instructions

Provides a systematic debugging approach for a described problem. Generates hypotheses, test strategies, and resolution steps.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| problem | Yes | Description of the problem/bug | |
| symptoms | No | Observed symptoms | |
| context | No | Environment, recent changes, etc. | |
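
A hypothetical example of the arguments an agent might pass; the field names follow the schema above, and the values are illustrative only:

    const exampleArgs = {
        problem: "Checkout API intermittently returns 500",               // required
        symptoms: ["Only occurs under load", "Began after last deploy"],  // optional
        context: "Node 20, Express 4, deployed to Kubernetes"             // optional
    };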

Implementation Reference

  • The handler function that executes the 'debug_problem' tool. It takes input args (problem, symptoms, context) and generates a comprehensive markdown debugging guide with phases, hypotheses, and tools. (A combined usage sketch follows this reference list.)
    export function debugProblemHandler(args: any) {
        // Default to an empty symptom list and empty context when not supplied.
        const { problem, symptoms = [], context = "" } = args;
    
        const debug = `# Debug Analysis: ${problem}
    
    ## Problem Statement
    ${problem}
    
    ## Observed Symptoms
    ${symptoms.length > 0 ? symptoms.map((s: string) => `- ${s}`).join("\n") : "No specific symptoms listed"}
    
    ## Context
    ${context || "No additional context provided"}
    
    ---
    
    ## Debugging Strategy
    
    ### Phase 1: Reproduce
    - [ ] Can you consistently reproduce the issue?
    - [ ] What are the exact steps to reproduce?
    - [ ] Does it happen in all environments?
    
    ### Phase 2: Isolate
    - [ ] What is the smallest code that shows the bug?
    - [ ] When did this start happening?
    - [ ] What changed recently?
    
    ### Phase 3: Hypotheses
    Based on the symptoms, possible causes:
    1. **Data Issue**: Invalid input or state
    2. **Logic Error**: Incorrect condition or algorithm
    3. **Timing Issue**: Race condition or async problem
    4. **Environment**: Configuration or dependency issue
    5. **Integration**: External service or API problem
    
    ### Phase 4: Test Each Hypothesis
    For each hypothesis above:
    - Add logging/breakpoints
    - Check relevant data
    - Verify assumptions
    
    ### Phase 5: Fix & Verify
    - [ ] Implement minimal fix
    - [ ] Verify fix resolves issue
    - [ ] Check for regressions
    - [ ] Add test to prevent recurrence
    
    ## Tools to Use
    - Debugger (breakpoints, step-through)
    - Logging (add strategic log points)
    - Network inspector (API issues)
    - Profiler (performance issues)
    `;
    
        // MCP tool result: the guide returned as a single text content block.
        return { content: [{ type: "text", text: debug }] };
    }
  • Zod schema definition for the 'debug_problem' tool inputs: required 'problem' string, optional 'symptoms' array and 'context' string.
    import { z } from "zod";

    export const debugProblemSchema = {
        name: "debug_problem",
        description: "Provides a systematic debugging approach for a described problem. Generates hypotheses, test strategies, and resolution steps.",
        inputSchema: z.object({
            problem: z.string().describe("Description of the problem/bug"),
            symptoms: z.array(z.string()).optional().describe("Observed symptoms"),
            context: z.string().optional().describe("Environment, recent changes, etc.")
        })
    };
  • src/server.ts:92-92 (registration)
    Registration of 'debug_problem' tool in the HTTP server's toolRegistry Map.
    ["debug_problem", { schema: debugProblemSchema, handler: debugProblemHandler }],
  • src/index.ts:82-82 (registration)
    Registration of 'debug_problem' tool in the stdio server's toolRegistry Map.
    ["debug_problem", { schema: debugProblemSchema, handler: debugProblemHandler }],
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it describes what the tool generates (hypotheses, test strategies, resolution steps), it doesn't disclose important behavioral traits: whether this is a read-only analysis tool or one that makes changes, what format the output takes, whether it requires specific permissions, and whether any rate limits apply. For a tool with no annotation coverage, this is a significant gap in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence establishes the core purpose, and the second specifies the outputs. There's no wasted language, repetition, or unnecessary elaboration. The structure is front-loaded with the main purpose stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of debugging (which often involves multiple steps and considerations), the absence of annotations, and the lack of an output schema, the description is incomplete. It doesn't explain what the tool returns, how comprehensive the debugging approach is, whether it's interactive or one-shot, or how it handles different types of problems. For a tool that could have significant behavioral complexity, the description provides only basic functional information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so all parameters are documented in the schema itself. The description doesn't add any meaningful parameter semantics beyond what the schema provides; it mentions 'a described problem', which corresponds to the 'problem' parameter, but doesn't elaborate on how parameters interact or provide usage examples. With complete schema coverage, the baseline of 3 is appropriate, as the description doesn't compensate with additional parameter insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('provides a systematic debugging approach', 'generates hypotheses, test strategies, and resolution steps') and identifies the resource ('for a described problem'). It distinguishes itself from siblings like 'analyze_architecture' or 'brainstorm_solutions' by focusing specifically on debugging methodology rather than analysis or ideation. However, it doesn't explicitly contrast with close siblings like 'check_dependencies' or 'validate_code', which might also be used in debugging contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for a described problem') but doesn't explicitly state when to use this tool versus alternatives. Given that the sibling tools include 'check_dependencies', 'validate_code', and 'explain_code', all of which could be part of a debugging workflow, the description provides no guidance on whether this tool should be used first, last, or instead of those alternatives. The usage is implied rather than explicitly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

