Glama

Ambiance MCP Server

by sbarron

local_context

Analyzes local codebases to provide focused context for understanding, debugging, or tracing code. Uses AST parsing and query-aware retrieval to deliver relevant code snippets and actionable insights without external dependencies.

Instructions

🚀 Enhanced local context with deterministic query-aware retrieval, AST-grep, and actionable intelligence. Provides: (1) deterministic AnswerDraft, (2) ranked JumpTargets, (3) tight MiniBundle (≤3k tokens), (4) NextActions—all using AST + static heuristics. Optional embedding enhancement when available. Completely offline with zero external dependencies for core functionality.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Query to focus analysis (required for enhanced mode). Example: "How does database connection and local database storage work?" | |
| taskType | No | Type of analysis task - affects query processing and output format | understand |
| maxSimilarChunks | No | Maximum number of semantically similar code chunks to retrieve. Higher values (30-50) provide broader coverage for exploration; lower values (10-15) focus on highly relevant matches. Default 20 balances breadth and relevance. | 20 |
| maxTokens | No | Token budget for mini-bundle assembly | 3000 |
| generateEmbeddingsIfMissing | No | Generate embeddings if missing (requires OpenAI API key) - leave false for pure AST mode | false |
| useProjectHintsCache | No | Reuse project_hints indices for faster processing | true |
| astQueries | No | Optional custom AST queries to supplement automatic detection | |
| attackPlan | No | Analysis strategy: auto-detect from query, or specify: init-read-write (DB/storage), api-route (endpoints), auth (authentication), error-driven (debugging) | auto |
| projectPath | Yes | Project directory path. Required. Can be absolute or relative to workspace. | |
| folderPath | No | Analyze specific folder (falls back to legacy mode if enhanced analysis unavailable) | |
| format | No | Output format: enhanced (new format with jump targets), system-map (architecture overview), structured (legacy), compact, xml | enhanced |
| excludePatterns | No | Additional patterns to exclude from analysis (e.g., ["*.md", "docs/**", "*.test.js"]) | |
| useEmbeddings | No | Use embeddings for similarity search if available (legacy parameter) | false |
| embeddingSimilarityThreshold | No | Minimum similarity score (0.0-1.0) for including chunks. Lower values (0.15-0.2) cast a wider net for related code; higher values (0.25-0.35) return only close matches. Use lower thresholds when exploring unfamiliar code. | 0.2 |
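To make the schema concrete, here is a hypothetical set of arguments for a `local_context` call; the `projectPath` value is a placeholder, and only `query` and `projectPath` are required:

```typescript
// Hypothetical arguments for a local_context call. Only query and
// projectPath are required; omitted fields fall back to schema defaults.
const exampleArgs = {
  query: "How does database connection and local database storage work?",
  projectPath: "/path/to/project", // placeholder, not a real path
  taskType: "debug", // one of: understand | debug | trace | spec | test
  maxTokens: 3000, // token budget for the mini-bundle
  excludePatterns: ["*.test.js", "docs/**"], // optional narrowing
};
```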

Implementation Reference

  • Defines the tool schema for 'local_context' including input parameters like query, taskType, maxTokens, attackPlan, etc., with detailed descriptions and validation.
    export const localSemanticCompactTool = {
      name: 'local_context',
      description:
        '🚀 Enhanced local context with deterministic query-aware retrieval, AST-grep, and actionable intelligence. Provides: (1) deterministic AnswerDraft, (2) ranked JumpTargets, (3) tight MiniBundle (≤3k tokens), (4) NextActions—all using AST + static heuristics. Optional embedding enhancement when available. Completely offline with zero external dependencies for core functionality.',
      inputSchema: {
        type: 'object',
        properties: {
          query: {
            type: 'string',
            description:
              'Query to focus analysis (required for enhanced mode). Example: "How does database connection and local database storage work?"',
          },
          taskType: {
            type: 'string',
            enum: ['understand', 'debug', 'trace', 'spec', 'test'],
            default: 'understand',
            description: 'Type of analysis task - affects query processing and output format',
          },
          maxSimilarChunks: {
            type: 'number',
            default: 20,
            minimum: 5,
            maximum: 50,
            description:
              'Maximum number of semantically similar code chunks to retrieve. Higher values (30-50) provide broader coverage for exploration; lower values (10-15) focus on highly relevant matches. Default 20 balances breadth and relevance.',
          },
          maxTokens: {
            type: 'number',
            default: 3000,
            minimum: 1000,
            maximum: 8000,
            description: 'Token budget for mini-bundle assembly',
          },
          generateEmbeddingsIfMissing: {
            type: 'boolean',
            default: false,
            description:
              'Generate embeddings if missing (requires OpenAI API key) - leave false for pure AST mode',
          },
          useProjectHintsCache: {
            type: 'boolean',
            default: true,
            description: 'Reuse project_hints indices for faster processing',
          },
          astQueries: {
            type: 'array',
            items: { type: 'object' },
            description: 'Optional custom AST queries to supplement automatic detection',
          },
          attackPlan: {
            type: 'string',
            enum: ['auto', 'init-read-write', 'api-route', 'error-driven', 'auth'],
            default: 'auto',
            description:
              'Analysis strategy: auto-detect from query, or specify: init-read-write (DB/storage), api-route (endpoints), auth (authentication), error-driven (debugging)',
          },
          projectPath: {
            type: 'string',
            description: 'Project directory path. Required. Can be absolute or relative to workspace.',
          },
          folderPath: {
            type: 'string',
            description:
              'Analyze specific folder (falls back to legacy mode if enhanced analysis unavailable)',
          },
          format: {
            type: 'string',
            enum: ['xml', 'structured', 'compact', 'enhanced', 'system-map'],
            default: 'enhanced',
            description:
              'Output format: enhanced (new format with jump targets), system-map (architecture overview), structured (legacy), compact, xml',
          },
          excludePatterns: {
            type: 'array',
            items: { type: 'string' },
            description:
              'Additional patterns to exclude from analysis (e.g., ["*.md", "docs/**", "*.test.js"])',
          },
          useEmbeddings: {
            type: 'boolean',
            default: false,
            description: 'Use embeddings for similarity search if available (legacy parameter)',
          },
          embeddingSimilarityThreshold: {
            type: 'number',
            default: 0.2,
            minimum: 0.0,
            maximum: 1.0,
            description:
              'Minimum similarity score (0.0-1.0) for including chunks. Lower values (0.15-0.2) cast a wider net for related code; higher values (0.25-0.35) return only close matches. Use lower thresholds when exploring unfamiliar code.',
          },
        },
        required: ['query', 'projectPath'],
      },
    };
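The `required` array and the per-property `default` values above are the contract the handler enforces. A minimal sketch of how such a schema could be applied to incoming arguments (illustrative only — the actual handler destructures defaults inline, and the names below are simplified, not the server's own API):

```typescript
// Illustrative schema applier: reject missing required keys, then fill
// in property defaults. Simplified stand-in, not the server's validator.
type MiniSchema = {
  properties: Record<string, { default?: unknown }>;
  required: string[];
};

function applyDefaults(
  schema: MiniSchema,
  args: Record<string, unknown>
): Record<string, unknown> {
  // Required keys must be present (mirrors the projectPath/query check).
  for (const key of schema.required) {
    if (args[key] === undefined) {
      throw new Error(`${key} is required`);
    }
  }
  // Fill in any declared defaults for keys the caller omitted.
  const out: Record<string, unknown> = { ...args };
  for (const [key, prop] of Object.entries(schema.properties)) {
    if (out[key] === undefined && prop.default !== undefined) {
      out[key] = prop.default;
    }
  }
  return out;
}
```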
  • Main execution handler for local_context tool. Handles enhanced AST-based retrieval via localContext(), embedding-enhanced compaction, legacy modes, and formats output with token limits.
    export async function handleSemanticCompact(args: any): Promise<any> {
      // Validate that projectPath is provided
      if (!args?.projectPath) {
        throw new Error(
          'โŒ projectPath is required. Please provide an absolute path to the project directory.'
        );
      }
    
      // Compose a single-flight key early to dedupe duplicate concurrent calls
      const singleFlightKey = (() => {
        try {
          const q = (args?.query || '').toString().slice(0, 200);
          const p = validateAndResolvePath(args.projectPath);
          const f = args?.format || 'enhanced';
          return `${path.resolve(p)}::${f}::${q}`;
        } catch {
          return `default-key`;
        }
      })();
    
      if (inFlightRequests.has(singleFlightKey)) {
        logger.info('🔁 Single-flight: returning existing in-flight result for local_context', {
          key: singleFlightKey,
        });
        return inFlightRequests.get(singleFlightKey)!;
      }
    
      const execute = async () => {
        try {
          const {
            // New enhanced parameters
            query,
            taskType = 'understand',
            maxSimilarChunks = 20,
            maxTokens = 3000,
            // If undefined, we will auto-generate embeddings when missing
            generateEmbeddingsIfMissing,
            useProjectHintsCache = true,
            astQueries = [],
            attackPlan = 'auto',
            format = 'enhanced',
            excludePatterns = [],
            // Legacy parameters for backward compatibility
            projectPath,
            folderPath,
            useEmbeddings = false,
            embeddingSimilarityThreshold = 0.2,
          } = args;
    
          const excludeRegexes = compileExcludePatterns(excludePatterns);
    
          // Validate and resolve project path (now required)
          const resolvedProjectPath = validateAndResolvePath(projectPath);
    
          // Enhanced mode: Use new local_context implementation if query is provided
          if (query && (format === 'enhanced' || format === 'system-map') && !folderPath) {
            // Auto-enable embeddings when local storage is enabled and we have a query
            const localStorageEnabled = process.env.USE_LOCAL_EMBEDDINGS === 'true';
            const enhancedAvailable = EnhancedSemanticCompactor.isEnhancedModeAvailable();
    
            if (localStorageEnabled && enhancedAvailable) {
              try {
                logger.info('🧠 Auto-enabled embeddings (local storage active)', {
                  projectPath: resolvedProjectPath,
                  threshold: embeddingSimilarityThreshold,
                  maxChunks: maxSimilarChunks,
                });
    
                // Query-scoped file selection via quick AST pre-pass
                let filePatterns: string[] | undefined;
                try {
                  const { localContext } = await import('./enhancedLocalContext');
                  const prepass = await localContext({
                    projectPath: resolvedProjectPath,
                    query,
                    taskType: taskType as any,
                    maxSimilarChunks: Math.max(10, maxSimilarChunks),
                    maxTokens: Math.min(1500, maxTokens),
                    useProjectHintsCache,
                    attackPlan: attackPlan as any,
                    excludePatterns,
                  });
    
                  if (prepass?.jumpTargets?.length) {
                    const uniqueFiles = Array.from(
                      new Set(prepass.jumpTargets.map((t: any) => t.file).filter(Boolean))
                    );
                    filePatterns = uniqueFiles.map((absOrRel: string) => {
                      const rel = path.isAbsolute(absOrRel)
                        ? path.relative(resolvedProjectPath, absOrRel)
                        : absOrRel;
                      // Use exact file paths as patterns
                      return rel.replace(/\\/g, '/');
                    });
                    logger.info('🗂️ Query-scoped embedding file set prepared', {
                      files: filePatterns.slice(0, 10),
                      total: filePatterns.length,
                    });
                  }
                } catch (preErr) {
                  logger.warn('⚠️ Query-scoped pre-pass failed; proceeding without scoped patterns', {
                    error: preErr instanceof Error ? preErr.message : String(preErr),
                  });
                }
    
                const enhancedResult = await enhancedSemanticCompactor.generateEnhancedContext({
                  projectPath: resolvedProjectPath,
                  maxTokens,
                  query,
                  taskType,
                  format,
                  useEmbeddings: true,
                  embeddingSimilarityThreshold,
                  maxSimilarChunks,
                  // Default to generating embeddings on first run unless explicitly disabled
                  generateEmbeddingsIfMissing: generateEmbeddingsIfMissing !== false,
                  embeddingOptions: {
                    batchSize: 48,
                    rateLimit: 0,
                    maxChunkSize: 1800,
                    filePatterns,
                  },
                  excludePatterns,
                });
    
                // Enforce final token cap on returned content
                const cappedContent = truncateToTokens(enhancedResult.content, maxTokens);
                const cappedTokens = estimateTokensShared(cappedContent);
                return {
                  success: true,
                  compactedContent: cappedContent,
                  metadata: {
                    originalTokens: Math.round(
                      (enhancedResult.metadata.tokenCount || 0) /
                        (enhancedResult.metadata.compressionRatio || 1)
                    ),
                    compactedTokens: cappedTokens,
                    compressionRatio: enhancedResult.metadata.compressionRatio || 1,
                    filesProcessed: enhancedResult.metadata.includedFiles || 0,
                    symbolsFound: 0,
                    symbolsAfterCompaction: 0,
                    processingTimeMs: 0,
                    format,
                    embeddingsUsed: enhancedResult.metadata.embeddingsUsed,
                    similarChunksFound: enhancedResult.metadata.similarChunksFound,
                  },
                  usage: `Enhanced context with ${enhancedResult.metadata.embeddingsUsed ? 'embeddings' : 'base compaction'}: ${cappedTokens} tokens (cap=${maxTokens})`,
                };
              } catch (error) {
                logger.warn('⚠️ Auto-embedding path failed, falling back to enhanced AST mode', {
                  error: error instanceof Error ? error.message : String(error),
                });
              }
            }
    
            logger.info('🚀 Using enhanced local context mode', {
              query,
              taskType,
              attackPlan,
              maxTokens,
              maxSimilarChunks,
            });
    
            try {
              const { localContext } = await import('./enhancedLocalContext');
    
              const enhancedResult = await localContext({
                projectPath: resolvedProjectPath,
                query,
                taskType: taskType as any,
                maxSimilarChunks,
                maxTokens,
                generateEmbeddingsIfMissing,
                useProjectHintsCache,
                astQueries,
                attackPlan: attackPlan as any,
                excludePatterns,
              });
    
              if (enhancedResult.success) {
                return {
                  success: true,
                  compactedContent: formatEnhancedContextOutput(enhancedResult, maxTokens),
                  metadata: enhancedResult.metadata,
                  usage: `Enhanced context analysis: ${enhancedResult.metadata.bundleTokens} tokens in ${enhancedResult.jumpTargets.length} locations`,
                  enhanced: true,
                  jumpTargets: enhancedResult.jumpTargets,
                  answerDraft: enhancedResult.answerDraft,
                  nextActions: enhancedResult.next,
                  evidence: enhancedResult.evidence,
                };
              } else {
                // Fall back to legacy mode if enhanced mode fails
                logger.warn('Enhanced mode failed, falling back to legacy mode');
              }
            } catch (error) {
              logger.warn('⚠️ Enhanced local context failed, using legacy mode', {
                error: error instanceof Error ? error.message : String(error),
              });
            }
    
            // System Map mode: Use shared retriever to build architecture overview
            if (format === 'system-map') {
              try {
                logger.info('🗺️ Using System Map format', { query });
    
                // Use shared retriever to get relevant chunks
                const relevantChunks = await sharedRetriever.retrieve(query, 'overview');
    
                // Compose System Map
                const systemMap = await systemMapComposer.composeSystemMap(query, relevantChunks);
    
                // Format as markdown
                const systemMapMarkdown = formatSystemMapAsMarkdown(systemMap);
    
                // Estimate tokens and return
                const tokenCount = estimateTokensShared(systemMapMarkdown);
    
                return {
                  success: true,
                  compactedContent: truncateToTokens(systemMapMarkdown, maxTokens),
                  metadata: {
                    originalTokens: tokenCount,
                    compactedTokens: Math.min(tokenCount, maxTokens),
                    compressionRatio: 1.0,
                    filesProcessed: systemMap.metadata.totalChunksUsed,
                    symbolsFound: 0,
                    symbolsAfterCompaction: 0,
                    processingTimeMs: systemMap.metadata.processingTimeMs,
                    format: 'system-map',
                    coveragePct: systemMap.metadata.coveragePct,
                    anchorsHit: systemMap.metadata.anchorsHit,
                    queryFacets: systemMap.metadata.queryFacets,
                  },
                  usage: `System Map analysis: ${Math.min(tokenCount, maxTokens)} tokens with ${systemMap.metadata.coveragePct * 100}% coverage`,
                };
              } catch (error) {
                logger.warn('⚠️ System Map generation failed, falling back to enhanced mode', {
                  error: error instanceof Error ? error.message : String(error),
                });
              }
            }
          }
    
          // resolvedProjectPath is already computed above
    
          logger.info('🔧 Starting local semantic compaction', {
            originalPath: projectPath,
            resolvedPath: resolvedProjectPath,
            folderPath,
            maxTokens,
            taskType,
            useEmbeddings,
            enhancedModeAvailable: EnhancedSemanticCompactor.isEnhancedModeAvailable(),
          });
    
          try {
            // Check if we should use enhanced compactor with embeddings
            const canUseEmbeddings =
              useEmbeddings &&
              query &&
              !folderPath && // Don't use embeddings for folder-specific analysis yet
              EnhancedSemanticCompactor.isEnhancedModeAvailable();
    
            if (canUseEmbeddings) {
            logger.info('🚀 Using enhanced semantic compactor with embeddings', {
                query,
                threshold: embeddingSimilarityThreshold,
                maxChunks: maxSimilarChunks,
              });
    
            logger.debug('🔧 Calling enhanced semantic compactor with embeddings');
    
              const enhancedResult = await enhancedSemanticCompactor.generateEnhancedContext({
                projectPath: resolvedProjectPath,
                maxTokens,
                query,
                taskType,
                format,
                useEmbeddings: true,
                embeddingSimilarityThreshold,
                maxSimilarChunks,
                generateEmbeddingsIfMissing,
                excludePatterns,
              });
    
            logger.debug('📊 Enhanced compactor result', {
                contentLength: enhancedResult.content?.length || 0,
                metadata: enhancedResult.metadata,
                metadataKeys: Object.keys(enhancedResult.metadata || {}),
              });
    
              // Return properly structured response for enhanced path
              const cappedContent = truncateToTokens(enhancedResult.content, maxTokens);
              const cappedTokens = estimateTokensShared(cappedContent);
              return {
                success: true,
                compactedContent: cappedContent,
                metadata: {
                  originalTokens: Math.round(
                    (enhancedResult.metadata.tokenCount || 0) /
                      (enhancedResult.metadata.compressionRatio || 1)
                  ),
                  compactedTokens: cappedTokens,
                  compressionRatio: enhancedResult.metadata.compressionRatio || 1,
                  filesProcessed: enhancedResult.metadata.includedFiles || 0,
                  symbolsFound: 0, // Enhanced compactor doesn't track symbols the same way
                  symbolsAfterCompaction: 0, // Enhanced compactor doesn't track symbols the same way
                  processingTimeMs: 0, // Could add timing to enhanced compactor
                  format,
                  embeddingsUsed: enhancedResult.metadata.embeddingsUsed,
                  similarChunksFound: enhancedResult.metadata.similarChunksFound,
                },
                usage: `Enhanced context with ${enhancedResult.metadata.embeddingsUsed ? 'embeddings' : 'base compaction'}: ${cappedTokens} tokens (cap=${maxTokens})`,
              };
            }
    
            // Fall back to standard semantic compaction
            logger.info('๐Ÿ“ Using standard semantic compaction', {
              reason: !canUseEmbeddings
                ? 'Enhanced mode not available or not requested'
                : 'Folder-specific analysis',
            });
            // Handle folder-specific analysis if folderPath is provided, or general analysis with exclude patterns
            let analysisPath = resolvedProjectPath;
            const cleanupTempDir: (() => Promise<void>) | null = null;
    
            if ((folderPath && folderPath !== '.') || excludeRegexes.length > 0) {
              const fs = require('fs').promises;
              const path = require('path');
              const os = require('os');
    
              const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'semantic-compact-'));
              logger.info('๐Ÿ“ Creating folder-specific analysis in temp directory', { tempDir });
    
              try {
                const { FileDiscovery } = await import('../../core/compactor/fileDiscovery.js');
                const fileDiscovery = new FileDiscovery(resolvedProjectPath, {
                  maxFileSize: 200000,
                });
    
                let allFiles = await fileDiscovery.discoverFiles();
                const originalFileCount = allFiles.length;
    
                if (excludeRegexes.length > 0) {
                  allFiles = allFiles.filter(file => !isExcludedPath(file.relPath, excludeRegexes));
    
                logger.info('📊 Applied exclude patterns to files', {
                    originalCount: originalFileCount,
                    filteredCount: allFiles.length,
                    excludedCount: originalFileCount - allFiles.length,
                    excludePatterns,
                  });
                }
    
                let filteredFiles = allFiles;
                if (folderPath && folderPath !== '.') {
                  let normalizedFolderPath = folderPath.replace(/[\/\\]/g, path.sep);
    
                  if (normalizedFolderPath.startsWith('.' + path.sep)) {
                    normalizedFolderPath = normalizedFolderPath.substring(2);
                  }
    
                  if (path.isAbsolute(normalizedFolderPath)) {
                    const relative = path.relative(resolvedProjectPath, normalizedFolderPath);
                    normalizedFolderPath = relative.startsWith('..')
                      ? path.basename(normalizedFolderPath)
                      : relative;
                  }
    
                  filteredFiles = allFiles.filter(file => {
                    const normalizedFilePath = file.relPath.replace(/[\/\\]/g, path.sep);
                    return (
                      normalizedFilePath.startsWith(normalizedFolderPath + path.sep) ||
                      normalizedFilePath === normalizedFolderPath
                    );
                  });
    
                  logger.info('๐Ÿ“ Folder-specific semantic compaction', {
                    originalFolderPath: folderPath,
                    normalizedFolderPath,
                    resolvedProjectPath,
                    totalFiles: allFiles.length,
                    filteredFiles: filteredFiles.length,
                    filesFound: filteredFiles.map(f => f.relPath).slice(0, 5),
                    sampleFiles: allFiles.slice(0, 5).map(f => f.relPath),
                  });
    
                  if (filteredFiles.length === 0) {
                    throw new Error(`No files found in folder: ${folderPath}`);
                  }
                }
    
                for (const file of filteredFiles) {
                  const sourcePath = file.absPath;
                  const targetPath = path.join(tempDir, file.relPath);
                  await fs.mkdir(path.dirname(targetPath), { recursive: true });
                  await fs.copyFile(sourcePath, targetPath);
                }
    
                analysisPath = tempDir;
    
                logger.info('๐Ÿ“ Temporary directory created for folder analysis', {
                  tempDir,
                  filesCopied: filteredFiles.length,
                });
    
            // NOTE: 'exit' handlers must be synchronous, so the async fs.rm
            // would never complete here; rmSync gives a reliable last-chance cleanup.
            process.on('exit', () => {
              try {
                require('fs').rmSync(tempDir, { recursive: true, force: true });
              } catch {
                /* best-effort cleanup */
              }
            });
            process.on('beforeExit', () => {
              fs.rm(tempDir, { recursive: true, force: true }).catch(() => {});
            });
              } catch (error) {
                await fs.rm(tempDir, { recursive: true, force: true });
                throw error;
              }
            }
    
            // Create semantic compactor instance (self-contained, no external deps)
            const compactor = new SemanticCompactor(analysisPath, {
              maxTotalTokens: maxTokens,
              supportedLanguages: ['typescript', 'javascript', 'python', 'go', 'rust'],
              includeSourceCode: false, // Keep lightweight - just signatures and docs
              prioritizeExports: true,
              includeDocstrings: true,
            });
    
            // Create relevance context if query provided
            const relevanceContext = query
              ? {
                  query,
                  taskType,
                  maxTokens,
                }
              : undefined;
    
            // Process and compact - all local, no external API calls
            const result = await compactor.compact(relevanceContext);
    
            // Clean up resources
            compactor.dispose();
    
            // Clean up temp directory if it was created
            if (analysisPath !== resolvedProjectPath) {
              await require('fs').promises.rm(analysisPath, { recursive: true, force: true });
              logger.info('🧹 Cleaned up temporary directory', { tempDir: analysisPath });
            }
    
            const originalTokens = Math.round(result.totalTokens / result.compressionRatio);
    
            logger.info('✅ Semantic compaction completed', {
              originalTokens,
              compactedTokens: result.totalTokens,
              compressionRatio: result.compressionRatio,
            });
    
            // Format the output based on preference
            let formattedContent = formatContextOutput(result, format, {
              originalTokens,
              query,
              taskType,
              projectPath: resolvedProjectPath,
            });
    
            // Enforce hard token cap on final content
            formattedContent = truncateToTokens(formattedContent, maxTokens);
    
            logger.debug('๐Ÿ—๏ธ Building final response metadata', {
              resultExists: !!result,
              resultType: typeof result,
              resultProcessingStats: result?.processingStats,
              resultTotalTokens: result?.totalTokens,
              resultCompressionRatio: result?.compressionRatio,
              originalTokens,
              filesProcessed_raw: result.processingStats?.filesProcessed,
              totalSymbols_raw: result.processingStats?.totalSymbols,
              symbolsAfterDeduplication_raw: result.processingStats?.symbolsAfterDeduplication,
              processingTimeMs_raw: result.processingStats?.processingTimeMs,
            });
    
            const responseMetadata = {
              originalTokens,
              compactedTokens: result.totalTokens || 0,
              compressionRatio: result.compressionRatio || 1,
              filesProcessed: result.processingStats?.filesProcessed || 0,
              symbolsFound: result.processingStats?.totalSymbols || 0,
              symbolsAfterCompaction: result.processingStats?.symbolsAfterDeduplication || 0,
              processingTimeMs: result.processingStats?.processingTimeMs || 0,
              format,
            };
    
            logger.debug('✅ Final response metadata created', { metadata: responseMetadata });
    
            const finalTokens = estimateTokensShared(formattedContent);
            return {
              success: true,
              compactedContent: formattedContent,
              metadata: responseMetadata,
              usage: `Reduced context from ${originalTokens} to ${finalTokens} tokens (${Math.round((result.compressionRatio || 1) * 100)}% compression, cap=${maxTokens})`,
            };
          } catch (error) {
            logger.error('โŒ Semantic compaction failed', {
              error: error instanceof Error ? error.message : String(error),
            });
            return {
              success: false,
              error: error instanceof Error ? error.message : String(error),
              fallback: `Basic project context for ${projectPath} - semantic compaction failed. Try local_project_hints for navigation assistance.`,
            };
          }
        } catch (error) {
          logger.error('โŒ Enhanced handler execution failed', {
            error: error instanceof Error ? error.message : String(error),
          });
          throw error;
        }
      };
    
      const executionPromise = execute();
      inFlightRequests.set(singleFlightKey, executionPromise);
      try {
        const result = await executionPromise;
        return result;
      } finally {
        inFlightRequests.delete(singleFlightKey);
      }
    }
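The single-flight bookkeeping in `handleSemanticCompact` — compose a key, return any in-flight promise for that key, and delete the map entry once the work settles — is a general deduplication pattern. A self-contained sketch of the same idea (the handler inlines this logic rather than using a helper like this):

```typescript
// Generic single-flight deduplication: concurrent callers with the same
// key share one promise; the entry is removed once the work settles.
const inFlight = new Map<string, Promise<unknown>>();

function singleFlight<T>(key: string, work: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const p = work().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

While the first call is still pending, a second call with the same key returns the very same promise, so the underlying work runs only once.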
  • src/index.ts:135-142 (registration)
    Registers the 'local_context' handler in the MCP server's handler object, conditionally based on local embeddings or Ambiance API availability.
    this.handlers = {
      ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
      local_project_hints: handleProjectHints,
      local_file_summary: handleFileSummary,
      frontend_insights: handleFrontendInsights,
      local_debug_context: handleLocalDebugContext,
      ast_grep_search: handleAstGrep,
    };
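Both handler paths above cap their output with `truncateToTokens` and measure it with `estimateTokensShared`; those helpers live elsewhere in the codebase. Assuming the common rough heuristic of ~4 characters per token (an assumption, not the project's actual implementation), stand-ins might look like:

```typescript
// Rough stand-ins for estimateTokensShared/truncateToTokens, assuming
// the ~4-characters-per-token heuristic. Behavior here is illustrative.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function truncateToTokens(text: string, maxTokens: number): string {
  const maxChars = maxTokens * 4;
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```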
  • Core helper function performing deterministic, query-aware code retrieval using AST-grep, candidate ranking, and snippet assembly. Called by handleSemanticCompact in enhanced mode.
    export async function localContext(req: LocalContextRequest): Promise<LocalContextResponse> {
      const startTime = Date.now();
    
      logger.info('🔍 Enhanced local context request', {
        projectPath: req.projectPath,
        query: req.query,
        taskType: req.taskType,
        attackPlan: req.attackPlan,
        maxTokens: req.maxTokens,
      });
    
      // Validate that projectPath is provided
      if (!req.projectPath) {
        throw new Error(
          'โŒ projectPath is required. Please provide an absolute path to the project directory.'
        );
      }
    
      // Set defaults
      const request = {
        taskType: 'understand',
        maxSimilarChunks: 20,
        maxTokens: 3000,
        generateEmbeddingsIfMissing: false,
        useProjectHintsCache: true,
        attackPlan: 'auto',
        ...req,
      } as Required<LocalContextRequest>;
    
      try {
        // 1. Load project indices (reuse project_hints cache)
        const indices = await loadProjectIndices(request.projectPath, request.useProjectHintsCache);
    
        // 2. Choose attack plan
        const plan = chooseAttackPlan(request.attackPlan, request.query);
        const topic = detectTopic(request.query);
    
        // 3. Build DSL queries for this plan
        const dslQueries = buildDslQueriesForPlan(plan, request.query, request.astQueries);
    
        // 3.5 Topic-aware file prioritization and stoplist filtering
        const prioritizedFiles = prioritizeFilesForTopic(indices.files, topic);
    
        const customExcludePatterns = request.excludePatterns || [];
        const allExcludePatterns = [...UNIVERSAL_NEGATIVES, ...customExcludePatterns];
        const excludeMatchers = compileExcludePatterns(allExcludePatterns);
    
        const filteredFiles = prioritizedFiles.filter(
          file => !isExcludedPath(file.relPath, excludeMatchers)
        );
    
        // 4. Run AST queries to find matches
        const astMatches = await runAstQueries(filteredFiles, dslQueries);
    
        // 4.5 Family/topic-aware detectors beyond generic AST (no embeddings)
        const extraCandidates: CandidateSymbol[] = [];
        if (topic === 'api') {
          const apiExtras = await gatherApiRouteCandidates(filteredFiles);
          extraCandidates.push(...apiExtras);
        } else if (topic === 'components') {
          const compExtras = await gatherComponentCandidates(filteredFiles);
          extraCandidates.push(...compExtras);
        } else if (topic === 'db') {
          const dbExtras = await gatherDbSchemaCandidates(filteredFiles);
          extraCandidates.push(...dbExtras);
        }
    
        // 5. Generate and rank candidates
        const allMatches = [...astMatches, ...extraCandidates];
        const candidates = await rankCandidates(allMatches, indices, request.query, plan);
    
        // 6. Select top jump targets (respect maxSimilarChunks)
        let jumpTargets = selectJumpTargets(candidates, {
          max: Math.max(1, Math.min(request.maxSimilarChunks, 20)),
        });
    
        if (jumpTargets.length === 0 && candidates.length > 0) {
          const candidate = candidates[0];
          jumpTargets = [
            {
              file: candidate.file,
              symbol: candidate.symbol,
              start: candidate.start,
              end: candidate.end,
              role: candidate.role || inferRoleFromSymbol(candidate.symbol),
              confidence: candidate.score,
              why: candidate.reasons,
            },
          ];
        }
    
        // 7. Build mini-bundle with token budget
        const miniBundle = await buildMiniBundle(jumpTargets, indices.files, request.maxTokens);
    
        // 8. Generate deterministic answer draft
        const answerDraft = await generateDeterministicAnswer(
          plan,
          request.taskType,
          jumpTargets,
          indices
        );
    
        // 9. Compute next actions
        const nextActions = computeNextActions(jumpTargets, request.taskType);
    
        // 10. Build evidence list
        const evidence = buildEvidence(jumpTargets, astMatches);
    
        // 11. Build LLM-ready bundle with anchors/neighbors/env hints
        const llmBundle = buildLLMBundle({
          query: request.query,
          topic,
          ranked: candidates,
          files: indices.files,
          importGraph: await getOrBuildImportGraph(indices),
          envKeys: (indices.env || []).map((e: any) => e.key),
          fingerprint: await fingerprintRepo(indices.files),
        });
    
        const processingTimeMs = Date.now() - startTime;
    
        return {
          success: true,
          answerDraft,
          jumpTargets,
          miniBundle,
          next: nextActions,
          evidence,
          metadata: {
            filesScanned: indices.files.length,
            symbolsConsidered: candidates.length,
            originalTokens: 0,
            compactedTokens: 0,
            bundleTokens: miniBundle.reduce((sum, item) => sum + estimateTokensShared(item.snippet), 0),
            processingTimeMs,
          },
          llmBundle,
        };
      } catch (error) {
        logger.error('โŒ Enhanced local context failed', {
          error: error instanceof Error ? error.message : String(error),
          query: req.query,
        });
    
        return {
          success: false,
          answerDraft: `Unable to analyze query "${req.query}". ${error instanceof Error ? error.message : String(error)}`,
          jumpTargets: [],
          miniBundle: [],
          next: { mode: 'project_research', openFiles: [], checks: [] },
          evidence: [],
          metadata: {
            filesScanned: 0,
            symbolsConsidered: 0,
            originalTokens: 0,
            compactedTokens: 0,
            bundleTokens: 0,
            processingTimeMs: Date.now() - startTime,
          },
        };
      }
    }
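The defaults block near the top of `localContext` relies on object-spread ordering: `req` is spread last, so any caller-supplied value overrides the default. A minimal self-contained sketch of that pattern (the request shape here is hypothetical, mirroring a subset of `LocalContextRequest`):

```typescript
// Hypothetical request shape with a subset of LocalContextRequest's fields.
interface SketchRequest {
  projectPath: string;
  maxTokens?: number;
  attackPlan?: string;
}

function withDefaults(req: SketchRequest): Required<SketchRequest> {
  // Spread order matters: req comes last, so caller-provided values win.
  return {
    maxTokens: 3000,
    attackPlan: 'auto',
    ...req,
  } as Required<SketchRequest>;
}

const merged = withDefaults({ projectPath: '/repo', maxTokens: 1500 });
// merged.maxTokens is the caller's 1500; merged.attackPlan falls back to 'auto'.
```

Note one caveat of this idiom: a caller that passes an explicitly `undefined` field (e.g. `maxTokens: undefined`) would clobber the default, since the spread copies the property verbatim.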
  • Secondary registration in localTools module index, re-exporting handlers for import convenience.
    export const localHandlers = {
      ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
      local_project_hints: handleProjectHints,
      local_file_summary: handleFileSummary,
      frontend_insights: handleFrontendInsights,
      local_debug_context: handleLocalDebugContext,
      manage_embeddings: handleManageEmbeddings,
      ast_grep_search: handleAstGrep,
    };
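Both registration sites use the same conditional-spread idiom: spreading `{}` when the flag is off adds no key at all, rather than a `local_context` key with an `undefined` value, so the handler is genuinely absent from the map. A small self-contained sketch of the pattern (handler names and bodies are illustrative):

```typescript
type Handler = (args: unknown) => unknown;

function buildHandlers(allowLocalContext: boolean): Record<string, Handler> {
  // Stand-in handlers for illustration only.
  const handleSemanticCompact: Handler = () => 'compact';
  const handleProjectHints: Handler = () => 'hints';

  return {
    // Spreading {} when the flag is false leaves the key out entirely.
    ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
    local_project_hints: handleProjectHints,
  };
}

const withContext = buildHandlers(true);
const withoutContext = buildHandlers(false);
// 'local_context' in withContext is true; in withoutContext it is false.
```

This matters for MCP servers because tool listings are typically derived from the handler map's keys; an `undefined`-valued key could still be enumerated and advertised as an unusable tool.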
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure and does well: it states the tool is 'completely offline with zero external dependencies for core functionality', uses 'AST + static heuristics', names its specific outputs (AnswerDraft, ranked JumpTargets, etc.), and mentions optional embedding enhancement. However, it does not cover error handling, performance characteristics, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with a high-level summary and listing four key outputs. It uses emojis and technical terms efficiently, though some phrases like 'actionable intelligence' are vague. Every sentence contributes, but it could be slightly more streamlined by integrating the offline note earlier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (14 parameters, no annotations, no output schema), the description is moderately complete. It covers core functionality, offline nature, and output types, but lacks details on return values, error cases, or how outputs are structured. For a sophisticated analysis tool, more behavioral context would be helpful despite the rich schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema: it mentions 'deterministic query-aware retrieval' (hinting at the 'query' parameter) and 'tight MiniBundle (≤3k tokens)' (relating to 'maxTokens'), but doesn't significantly enhance understanding of the 14 parameters. The value added is marginal given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'enhanced local context with deterministic query-aware retrieval, AST-grep, and actionable intelligence' and lists four specific outputs (AnswerDraft, JumpTargets, MiniBundle, NextActions). It distinguishes from siblings by emphasizing AST-based analysis and offline functionality, though it doesn't explicitly contrast with tools like 'local_debug_context' or 'local_file_summary'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for code-analysis tasks ('deterministic query-aware retrieval') and mentions optional embedding enhancement, but lacks explicit guidance on when to use this tool versus alternatives like 'ast_grep_search' or 'local_debug_context'. The note that it is 'completely offline with zero external dependencies' provides some context but no clear when/when-not rules.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
