debug

Analyze errors and generate debugging strategies with solutions for code issues in development workflows.

Instructions

[Debugging Assistant] Analyze errors and generate debugging strategies and solutions

Input Schema

Name      Required   Description                                      Default
error     No         Error information (error message, stack trace)   (none)
context   No         Relevant code or a description of the scenario   (none)

Implementation Reference

  • The main handler function for the 'debug' tool. It takes error and context arguments and returns a structured prompt for debugging analysis.
    export async function debug(args?: { error?: string; context?: string }) {
      try {
        const error = args?.error || "";
        const context = args?.context || "";
    
        // Build a structured debugging prompt; missing fields fall back to
        // placeholder requests instead of raising an error.
        const message = `Please analyze the following error and provide a debugging strategy:
    
    ❌ **Error information**:
    ${error || "Please provide the error information (error message, stack trace, etc.)"}
    
    📋 **Context**:
    ${context || "Please provide the relevant code or a description of the scenario"}
    
    ---
    
    🔍 **Debugging analysis steps**:
    
    **Step 1: Classify the error**
    - Determine the error type (syntax error, runtime error, logic error)
    - Assess the severity (crash, broken functionality, performance issue)
    
    **Step 2: Locate the problem**
    1. Analyze the stack trace to pinpoint where the error occurs
    2. Identify possible causes (list at least 3)
    3. Inspect the surrounding code context
    
    **Step 3: Debugging strategy**
    List debugging steps in priority order:
    1. Quick verification: the most likely cause
    2. Add logging: key variables and execution paths
    3. Breakpoint debugging: the problematic code section
    4. Unit tests: isolate the problem
    5. Regression tests: confirm the fix
    
    **Step 4: Solutions**
    - Temporary workaround (Quick Fix)
    - Root cause fix (Root Cause Fix)
    - Prevention measures (Prevention)
    
    **Step 5: Verification checklist**
    - [ ] Error fixed
    - [ ] Tests pass
    - [ ] No side effects
    - [ ] Defensive code added
    - [ ] Documentation updated
    
    ---
    
    💡 **Common error patterns**:
    - NullPointerException → check null handling
    - ReferenceError → check variable declarations and scope
    - TypeError → check type conversions and data structures
    - TimeoutError → check async operations and network requests
    - MemoryError → check for memory leaks and resource cleanup
    
    Now follow the steps above to analyze the error and provide a concrete debugging plan.`;
    
        return {
          content: [
            {
              type: "text",
              text: message,
            },
          ],
        };
      } catch (error) {
        // Surface generation failures as an MCP error result rather than throwing.
        const errorMessage =
          error instanceof Error ? error.message : String(error);
        return {
          content: [
            {
              type: "text",
              text: `❌ Failed to generate debugging strategy: ${errorMessage}`,
            },
          ],
          isError: true,
        };
      }
    }
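The handler's fallback behavior (empty or missing arguments degrade to placeholder requests rather than failing) can be sketched in isolation. This is a simplified, hypothetical illustration; the names `DebugArgs` and `buildDebugPrompt` are not from the actual source:

```typescript
// Hypothetical, simplified sketch of the debug handler's fallback logic:
// missing arguments are replaced with "please provide" placeholders before
// the prompt template is filled in.
interface DebugArgs {
  error?: string;
  context?: string;
}

function buildDebugPrompt(args?: DebugArgs): string {
  // Mirrors the handler's `args?.error || ""` pattern plus the template fallbacks.
  const error =
    args?.error || "Please provide the error information (message, stack trace, etc.)";
  const context =
    args?.context || "Please provide the relevant code or a description of the scenario";
  return [
    "Please analyze the following error and provide a debugging strategy:",
    "",
    `Error: ${error}`,
    `Context: ${context}`,
  ].join("\n");
}
```

Because both fields are optional, the tool never rejects a call; it instead asks the model to request the missing information.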
  • The input schema definition for the 'debug' tool, registered in the MCP ListToolsRequestHandler.
    {
      name: "debug",
      description: "[Debugging Assistant] Analyze errors and generate debugging strategies and solutions",
      inputSchema: {
        type: "object",
        properties: {
          error: {
            type: "string",
            description: "Error information (error message, stack trace)",
          },
          context: {
            type: "string",
            description: "Relevant code or a description of the scenario",
          },
        },
        required: [],
      },
    },
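For illustration, a tool call that satisfies this schema might carry arguments like the following (the values are hypothetical):

```json
{
  "name": "debug",
  "arguments": {
    "error": "TypeError: Cannot read properties of undefined (reading 'map')\n    at render (app.js:42:15)",
    "context": "A component renders a list fetched from an API; the error appears on first load, before the fetch resolves."
  }
}
```

Since `required` is empty, either field (or both) may be omitted.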
  • src/index.ts:471-472 (registration)
    The switch case in CallToolRequestHandler that dispatches calls to the debug handler function.
    case "debug":
      return await debug(args);
  • src/tools/index.ts:5-5 (registration)
    Barrel export re-exporting the debug handler from its implementation file.
    export { debug } from "./debug.js";
  • src/index.ts:11-15 (registration)
    Import statement importing the debug handler (along with others) from the tools index.
    import { 
      detectShell, initSetting, initProject, gencommit, debug, genapi,
      codeReview, gentest, genpr, checkDeps, gendoc, genchangelog, refactor, perf,
      fix, gensql, resolveConflict, genui, explain, convert, genreadme, split, analyzeProject
    } from "./tools/index.js";
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool analyzes errors and generates strategies/solutions, but doesn't disclose critical behavioral traits: whether it's read-only or mutative, what permissions or authentication might be needed, rate limits, output format (e.g., structured vs. text), or how it handles different error types. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
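One way to close this gap, assuming the server targets an MCP SDK revision that supports tool annotations, would be to declare behavioral hints alongside the description. The values below are what a read-only, text-generating tool like this one would plausibly warrant; they are illustrative, not from the source:

```json
{
  "name": "debug",
  "annotations": {
    "title": "Debugging Assistant",
    "readOnlyHint": true,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```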

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: '【调试助手】分析错误并生成调试策略和解决方案' (Debugging assistant: analyze errors and generate debugging strategies and solutions). Every word earns its place by defining the tool's core function without redundancy. The structure is clear, with no wasted sentences or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (error analysis and strategy generation), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., structured strategies, step-by-step solutions), how errors are processed, or any behavioral constraints. For a tool that likely produces complex outputs, more context is needed to understand its full functionality and limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with both parameters ('error' and 'context') well-documented in the schema. The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't clarify format expectations or examples for 'error' or 'context'). With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
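Even with full schema coverage, descriptions can add format expectations the schema leaves implicit. A hypothetical enriched fragment (not from the source) might look like:

```json
{
  "error": {
    "type": "string",
    "description": "Error information: the error message plus, ideally, the full stack trace. Multi-line strings are accepted."
  },
  "context": {
    "type": "string",
    "description": "Relevant code (the failing function or a file excerpt) or a description of the scenario in which the error occurs."
  }
}
```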

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '分析错误并生成调试策略和解决方案' (analyze errors and generate debugging strategies and solutions). It names a specific verb, 'analyze', and a resource, 'errors', and distinguishes the tool from siblings like 'fix' and 'explain' by focusing on debugging-strategy generation rather than direct fixes or explanations. However, it doesn't explicitly differentiate itself from every sibling (e.g., 'analyzeProject' might overlap on analysis).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't say when to choose 'debug' over siblings like 'fix' (for direct fixes), 'explain' (for error explanations), or 'analyzeProject' (for broader analysis), and it gives no context about prerequisites, limitations, or the scenarios where this tool is most appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
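An illustrative rewrite (not from the source) that adds such guidance might read:

```json
{
  "description": "[Debugging Assistant] Analyze an error and generate a prioritized debugging strategy with candidate solutions. Use when you need a diagnosis plan; prefer 'fix' for a direct code fix and 'explain' for a plain explanation of an error. Read-only: returns a text prompt and does not modify files."
}
```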

