
Ambiance MCP Server

by sbarron

local_debug_context

Analyze error logs and stack traces to extract relevant code context from your project, using semantic embeddings and symbol matching to provide structured debugging insights.

Instructions

🐛 Gather comprehensive debug context from error logs and codebase analysis with focused embedding enhancement

When to use:

  • When you have error logs, stack traces, or console output to analyze

  • When debugging complex issues with multiple file involvement

  • When you need to understand error context across the codebase

  • Before using AI debugging tools to get structured context

What this does:

  • Parses error logs to extract file paths, line numbers, symbols, and error types

  • Extracts focused error contexts (~200 characters) for precise embedding queries

  • Uses tree-sitter to build symbol indexes for TypeScript/JavaScript/Python files

  • Searches codebase for symbol matches with surrounding context

  • ENHANCED: Uses semantic embeddings with focused error contexts for better relevance

  • Processes each error/warning separately for improved semantic matching

  • Ranks matches by relevance (severity, recency, frequency, semantic similarity)

  • Returns comprehensive debug report ready for AI analysis
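The parsing and focused-context steps above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual internals: the regex, function names, and the 200-character window logic are invented for the example.

```typescript
// Hypothetical sketch of log parsing: extract symbol, file path, line, and
// column from a Node-style stack frame, then take a ~200-character focused
// window around the error message for precise embedding queries.
interface FrameInfo {
  symbol?: string;
  filePath: string;
  line: number;
  column: number;
}

// Matches frames like "    at doWork (src/app.ts:42:13)" or "    at src/app.ts:10:5"
const FRAME_RE = /^\s*at\s+(?:([\w$.<>]+)\s+\()?(.+?):(\d+):(\d+)\)?\s*$/;

function parseFrame(text: string): FrameInfo | null {
  const m = FRAME_RE.exec(text);
  if (!m) return null;
  return {
    symbol: m[1],
    filePath: m[2],
    line: Number(m[3]),
    column: Number(m[4]),
  };
}

// Focused ~200-character window centered near the first "error" mention,
// used as a low-noise query string for embedding similarity search.
function focusedContext(log: string, maxLen = 200): string {
  const idx = log.search(/error/i);
  const start = Math.max(0, idx - Math.floor(maxLen / 2));
  return log.slice(start, start + maxLen).trim();
}
```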

Input: Error logs or stack traces as text

Output: Structured debug context report with ranked matches and semantic insights

Performance: Fast local analysis, ~1-3 seconds depending on codebase size

Embedding Features: Focused context queries reduce noise and improve relevance

Input Schema

  • logText (required): Error logs, stack traces, or console output containing error information

  • projectPath (required): Project root directory path. Can be absolute or relative to workspace.

  • maxMatches (optional, default: 20): Maximum number of matches to return

  • format (optional, default: structured): Output format preference (structured, compact, or detailed)

  • useEmbeddings (optional, default: true): Enable embedding-based similarity search for enhanced context (requires local embeddings to be enabled)

  • embeddingSimilarityThreshold (optional, default: 0.2): Similarity threshold for embedding-based matches (lower = more results, higher = more precise)

  • maxSimilarChunks (optional, default: 5): Maximum number of similar code chunks to include from embedding search

  • generateEmbeddingsIfMissing (optional, default: false): Generate embeddings for project files if they don't exist (may take time for large projects)
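For illustration, a minimal tool call might pass arguments like the following. The log text and values are hypothetical:

```json
{
  "logText": "TypeError: Cannot read properties of undefined (reading 'map')\n    at renderList (src/components/List.tsx:42:18)",
  "projectPath": ".",
  "maxMatches": 10,
  "useEmbeddings": true,
  "embeddingSimilarityThreshold": 0.2
}
```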

Implementation Reference

  • Main handler function that executes the 'local_debug_context' tool. Parses error logs, extracts symbols and contexts, performs symbol and semantic embedding searches, calculates relevance scores, ranks matches, and returns a structured DebugContextReport.
    export async function handleLocalDebugContext(args: any): Promise<DebugContextReport> {
      const {
        logText,
        projectPath = process.cwd(),
        maxMatches = 20,
        format = 'structured',
        useEmbeddings = true,
        embeddingSimilarityThreshold = 0.2,
        maxSimilarChunks = 5,
        generateEmbeddingsIfMissing = false,
      } = args;
    
      if (!logText || typeof logText !== 'string') {
        throw new Error(
          '❌ logText is required and must be a string. Please provide error logs or stack traces.'
        );
      }
    
      const resolvedProjectPath = validateAndResolvePath(projectPath);
    
      logger.info('🐛 Starting local debug context gathering', {
        logLength: logText.length,
        projectPath: resolvedProjectPath,
        maxMatches,
        format,
        useEmbeddings,
        embeddingSimilarityThreshold,
        maxSimilarChunks,
        generateEmbeddingsIfMissing,
        embeddingsEnabled: LocalEmbeddingStorage.isEnabled(),
      });
    
      try {
        // Phase 1: Parse errors and gather context
        const errors = parseErrorLogs(logText);
        const symbols: string[] = [];
        const fileHints: string[] = [];
        let allFiles: string[] | null = null;
    
        for (const err of errors) {
          if (err.filePath) {
            const absPath = path.resolve(resolvedProjectPath, err.filePath);
            fileHints.push(absPath);
    
            if (!err.symbol) {
              const fileSymbols = await buildSymbolIndex(absPath);
              const match = fileSymbols.find(s => err.line >= s.startLine && err.line <= s.endLine);
              if (match) {
                err.symbol = match.name;
                symbols.push(match.name);
              }
            } else {
              symbols.push(err.symbol);
            }
          } else if (err.symbol) {
            symbols.push(err.symbol);
            if (!allFiles) {
              allFiles = await globby(['**/*.{ts,tsx,js,jsx,py}'], {
                cwd: resolvedProjectPath,
                absolute: true,
                ignore: ['node_modules/**', 'dist/**', '.git/**'],
              });
            }
          }
        }
    
        if (allFiles) {
          fileHints.push(...allFiles);
        }
    
        // Search for symbol matches
        const matches = await searchSymbols(symbols, resolvedProjectPath, fileHints, maxMatches);
    
        // Phase 1.5: Add embedding-enhanced search if enabled
        let allMatches = [...matches];
        let embeddingsUsed = false;
        let similarChunksFound = 0;
    
        if (useEmbeddings && LocalEmbeddingStorage.isEnabled() && symbols.length > 0) {
          try {
            // Get project identifier for embeddings
            const projectInfo = await projectIdentifier.identifyProject(resolvedProjectPath);
            const projectId = projectInfo.id;
    
            // Ensure embeddings exist
            const embeddingsReady = await ensureEmbeddingsForProject(
              projectId,
              resolvedProjectPath,
              generateEmbeddingsIfMissing
            );
    
            if (embeddingsReady) {
              // Search for semantically similar code using focused error contexts
              const semanticMatches = await searchSemanticSimilarities(
                projectId,
                errors,
                symbols,
                maxSimilarChunks,
                embeddingSimilarityThreshold
              );
    
              if (semanticMatches.length > 0) {
                allMatches = [...matches, ...semanticMatches];
                embeddingsUsed = true;
                similarChunksFound = semanticMatches.length;
    
                logger.info('🎯 Enhanced debug context with focused semantic matches', {
                  originalMatches: matches.length,
                  semanticMatches: semanticMatches.length,
                  totalMatches: allMatches.length,
                  errorsProcessed: errors.length,
                  avgSimilarity:
                    semanticMatches.reduce((sum, m) => sum + (m.embeddingSimilarity || 0), 0) /
                    semanticMatches.length,
                });
              }
            }
          } catch (error) {
            logger.warn('⚠️ Embedding enhancement failed, continuing with standard matches', {
              error: error instanceof Error ? error.message : String(error),
            });
          }
        }
    
        // Phase 2: Calculate scores and rank matches
        const freqMap = new Map<string, number>();
        for (const m of allMatches) {
          freqMap.set(m.symbol, (freqMap.get(m.symbol) || 0) + 1);
        }
    
        // Calculate scores for all matches
        for (const match of allMatches) {
          const frequency = freqMap.get(match.symbol) || 1;
          const { score, reason } = await calculateScore(match, errors, resolvedProjectPath, frequency);
          match.score = score;
          match.reason = reason;
        }
    
        // Sort by score and assign ranks
        allMatches.sort((a, b) => b.score - a.score);
        allMatches.forEach((m, i) => {
          m.rank = i + 1;
        });
    
        const uniqueFiles = [...new Set(allMatches.map(m => m.filePath))].length;
        const topMatches = allMatches.slice(0, 5).map(m => ({
          symbol: m.symbol,
          filePath: m.filePath,
          score: m.score,
          reason: m.reason,
        }));
    
        // Generate debugging suggestions
        const suggestions = generateDebugSuggestions(
          allMatches,
          errors,
          embeddingsUsed,
          similarChunksFound
        );
    
        const report: DebugContextReport = {
          errors,
          matches: allMatches,
          summary: {
            errorCount: errors.length,
            matchCount: allMatches.length,
            uniqueFiles,
            topMatches,
            embeddingsUsed,
            similarChunksFound,
            suggestions,
          },
        };
    
        logger.info('✅ Local debug context gathering completed', {
          errorCount: errors.length,
          matchCount: allMatches.length,
          uniqueFiles,
          topScore: allMatches[0]?.score || 0,
          embeddingsUsed,
          similarChunksFound,
        });
    
        return report;
      } catch (error) {
        logger.error('❌ Local debug context gathering failed', {
          error: (error as Error).message,
        });
        throw new Error(`Local debug context gathering failed: ${(error as Error).message}`);
      }
    }
  • Tool schema definition including name, detailed description, and comprehensive inputSchema with validation for parameters like logText, projectPath, maxMatches, useEmbeddings, etc.
    export const localDebugContextTool = {
      name: 'local_debug_context',
      description: `🐛 Gather comprehensive debug context from error logs and codebase analysis with focused embedding enhancement
    
    **When to use**:
    - When you have error logs, stack traces, or console output to analyze
    - When debugging complex issues with multiple file involvement
    - When you need to understand error context across the codebase
    - Before using AI debugging tools to get structured context
    
    **What this does**:
    - Parses error logs to extract file paths, line numbers, symbols, and error types
    - Extracts focused error contexts (~200 characters) for precise embedding queries
    - Uses tree-sitter to build symbol indexes for TypeScript/JavaScript/Python files
    - Searches codebase for symbol matches with surrounding context
    - **ENHANCED**: Uses semantic embeddings with focused error contexts for better relevance
    - Processes each error/warning separately for improved semantic matching
    - Ranks matches by relevance (severity, recency, frequency, semantic similarity)
    - Returns comprehensive debug report ready for AI analysis
    
    **Input**: Error logs or stack traces as text
    **Output**: Structured debug context report with ranked matches and semantic insights
    
    **Performance**: Fast local analysis, ~1-3 seconds depending on codebase size
    **Embedding Features**: Focused context queries reduce noise and improve relevance`,
      inputSchema: {
        type: 'object',
        properties: {
          logText: {
            type: 'string',
            description: 'Error logs, stack traces, or console output containing error information',
          },
          projectPath: {
            type: 'string',
            description:
              'Project root directory path. Required. Can be absolute or relative to workspace.',
          },
          maxMatches: {
            type: 'number',
            description: 'Maximum number of matches to return (default: 20)',
            default: 20,
            minimum: 1,
            maximum: 100,
          },
          format: {
            type: 'string',
            enum: ['structured', 'compact', 'detailed'],
            default: 'structured',
            description: 'Output format preference',
          },
          useEmbeddings: {
            type: 'boolean',
            default: true,
            description:
              'Enable embedding-based similarity search for enhanced context (requires local embeddings to be enabled)',
          },
          embeddingSimilarityThreshold: {
            type: 'number',
            default: 0.2,
            minimum: 0.0,
            maximum: 1.0,
            description:
              'Similarity threshold for embedding-based matches (lower = more results, higher = more precise)',
          },
          maxSimilarChunks: {
            type: 'number',
            default: 5,
            minimum: 1,
            maximum: 20,
            description: 'Maximum number of similar code chunks to include from embedding search',
          },
          generateEmbeddingsIfMissing: {
            type: 'boolean',
            default: false,
            description:
              "Generate embeddings for project files if they don't exist (may take time for large projects)",
          },
        },
        required: ['logText', 'projectPath'],
      },
    };
  • src/index.ts:126-142 (registration)
    Primary MCP server registration: Adds localDebugContextTool to the tools list and maps 'local_debug_context' to handleLocalDebugContext in the handlers object.
    this.tools = [
      ...(allowLocalContext ? [localSemanticCompactTool] : []),
      localProjectHintsTool,
      localFileSummaryTool,
      frontendInsightsTool,
      localDebugContextTool,
      astGrepTool,
    ];
    
    this.handlers = {
      ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
      local_project_hints: handleProjectHints,
      local_file_summary: handleFileSummary,
      frontend_insights: handleFrontendInsights,
      local_debug_context: handleLocalDebugContext,
      ast_grep_search: handleAstGrep,
    };
  • Module-level registration: Exports debugTools array including localDebugContextTool and debugHandlers mapping 'local_debug_context' to its handler.
    // Tool arrays for easy registration
    export const debugTools = [localDebugContextTool, aiDebugTool];
    
    // Handler object for easy registration
    export const debugHandlers = {
      local_debug_context: handleLocalDebugContext,
      ai_debug: handleAIDebug,
    };
  • TypeScript interface defining the structured output format DebugContextReport returned by the handler, including parsed errors, ranked matches, and summary statistics.
    export interface DebugContextReport {
      errors: ParsedError[];
      matches: SearchMatch[];
      summary: {
        errorCount: number;
        matchCount: number;
        uniqueFiles: number;
        topMatches: Array<{
          symbol: string;
          filePath: string;
          score: number;
          reason: string;
        }>;
        embeddingsUsed?: boolean;
        similarChunksFound?: number;
        suggestions?: string[];
      };
    }
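A caller might consume the report's summary along these lines. This is a hedged sketch: the `summarize` helper and the sample data are invented for the example, though the shapes mirror the `DebugContextReport` interface above.

```typescript
// Illustrative consumer of the report's summary block: render the ranked
// top matches as a short, human-readable digest.
interface TopMatch {
  symbol: string;
  filePath: string;
  score: number;
  reason: string;
}

interface ReportSummary {
  errorCount: number;
  matchCount: number;
  uniqueFiles: number;
  topMatches: TopMatch[];
}

function summarize(summary: ReportSummary): string {
  const header = `${summary.errorCount} error(s), ${summary.matchCount} match(es) across ${summary.uniqueFiles} file(s)`;
  const lines = summary.topMatches.map(
    (m, i) => `${i + 1}. ${m.symbol} (${m.filePath}) score=${m.score.toFixed(2)} [${m.reason}]`
  );
  return [header, ...lines].join("\n");
}
```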
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it parses logs, extracts contexts, uses tree-sitter and embeddings, processes errors separately, ranks matches, and returns a report. It also mentions performance ('Fast local analysis, ~1-3 seconds') and embedding features. However, it lacks details on error handling or specific limitations, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'When to use', 'What this does'), making it easy to scan. It is appropriately sized for a complex tool, but some sentences could be more concise (e.g., the detailed bullet points in 'What this does' are slightly verbose). Overall, it's front-loaded and efficient, with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is mostly complete. It covers purpose, usage, behaviors, and performance. However, without an output schema, it only briefly mentions the output ('Returns comprehensive debug report'), lacking details on report structure or content, which is a minor gap for such a detailed tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only briefly mentioning 'Input: Error logs or stack traces as text' and 'Output: Structured debug context report,' which are redundant with schema details. Thus, it meets the baseline of 3 without adding significant value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Gather comprehensive debug context from error logs and codebase analysis with focused embedding enhancement.' It specifies the verb ('gather'), resource ('debug context'), and method ('from error logs and codebase analysis'), distinguishing it from sibling tools like 'local_context' or 'local_file_summary' which lack the debugging focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'When to use' section with four bullet points detailing specific scenarios (e.g., 'When you have error logs, stack traces, or console output to analyze'), and it mentions using this tool 'Before using AI debugging tools to get structured context,' providing clear guidance on when to use it versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
