local_context
Analyzes local codebases to provide focused context for understanding, debugging, or tracing code. Uses AST parsing and query-aware retrieval to deliver relevant code snippets and actionable insights without external dependencies.
Instructions
Enhanced local context with deterministic query-aware retrieval, AST-grep, and actionable intelligence. Provides: (1) a deterministic AnswerDraft, (2) ranked JumpTargets, (3) a tight MiniBundle (≤3k tokens), and (4) NextActions, all using AST parsing and static heuristics. Optional embedding enhancement when available. Completely offline, with zero external dependencies for core functionality.
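To make the four artifacts concrete, a successful enhanced-mode result has roughly the shape sketched below. All values are illustrative placeholders (the file paths, symbol names, and scores are hypothetical); the field names follow the response built by the handler further down this page.

```typescript
// Illustrative shape of an enhanced-mode result (values are hypothetical).
const exampleResult = {
  answerDraft: 'Database connections are initialized in src/db/connect.ts ...',
  jumpTargets: [
    {
      file: 'src/db/connect.ts', // hypothetical path
      symbol: 'createConnection',
      start: 12,
      end: 48,
      role: 'init',
      confidence: 0.92,
      why: ['matched init-read-write plan', 'exported symbol'],
    },
  ],
  miniBundle: [{ file: 'src/db/connect.ts', snippet: '/* focused code, within the token budget */' }],
  next: { mode: 'understand', openFiles: ['src/db/connect.ts'], checks: [] },
};
```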
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Query to focus analysis (required for enhanced mode). Example: "How does database connection and local database storage work?" | |
| taskType | No | Type of analysis task - affects query processing and output format | understand |
| maxSimilarChunks | No | Maximum number of semantically similar code chunks to retrieve. Higher values (30-50) provide broader coverage for exploration; lower values (10-15) focus on highly relevant matches. | 20 |
| maxTokens | No | Token budget for mini-bundle assembly | 3000 |
| generateEmbeddingsIfMissing | No | Generate embeddings if missing (requires an OpenAI API key); leave false for pure AST mode | false |
| useProjectHintsCache | No | Reuse project_hints indices for faster processing | true |
| astQueries | No | Optional custom AST queries to supplement automatic detection | |
| attackPlan | No | Analysis strategy: auto-detect from query, or specify: init-read-write (DB/storage), api-route (endpoints), auth (authentication), error-driven (debugging) | auto |
| projectPath | Yes | Project directory path. Required. Can be absolute or relative to workspace. | |
| folderPath | No | Analyze specific folder (falls back to legacy mode if enhanced analysis unavailable) | |
| format | No | Output format: enhanced (new format with jump targets), system-map (architecture overview), structured (legacy), compact, xml | enhanced |
| excludePatterns | No | Additional patterns to exclude from analysis (e.g., ["*.md", "docs/**", "*.test.js"]) | |
| useEmbeddings | No | Use embeddings for similarity search if available (legacy parameter) | false |
| embeddingSimilarityThreshold | No | Minimum similarity score (0.0-1.0) for including chunks. Lower values (0.15-0.2) cast a wider net for related code; higher values (0.25-0.35) return only close matches. Use lower thresholds when exploring unfamiliar code. | 0.2 |
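As a sketch, a typical enhanced-mode call might pass arguments like these; the project path and exclude patterns are placeholders, and every field other than `query` and `projectPath` falls back to the defaults above:

```typescript
// A typical enhanced-mode invocation (argument values are illustrative).
const args = {
  query: 'How does database connection and local database storage work?',
  projectPath: '/home/user/my-project', // hypothetical absolute path
  taskType: 'understand',
  attackPlan: 'auto',
  maxTokens: 3000,
  maxSimilarChunks: 20,
  excludePatterns: ['*.test.js', 'docs/**'],
};
```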
Implementation Reference
- Main handler function that executes the local_context tool. Handles input validation, single-flight deduping, enhanced AST-based retrieval via localContext(), embedding-enhanced compaction when available, fallback to the pure AST/local compactor, output formatting, and token capping.

```typescript
export async function handleSemanticCompact(args: any): Promise<any> {
  // Validate that projectPath is provided
  if (!args?.projectPath) {
    throw new Error(
      'projectPath is required. Please provide an absolute path to the project directory.'
    );
  }

  // Compose a single-flight key early to dedupe duplicate concurrent calls
  const singleFlightKey = (() => {
    try {
      const q = (args?.query || '').toString().slice(0, 200);
      const p = validateAndResolvePath(args.projectPath);
      const f = args?.format || 'enhanced';
      return `${path.resolve(p)}::${f}::${q}`;
    } catch {
      return `default-key`;
    }
  })();

  if (inFlightRequests.has(singleFlightKey)) {
    logger.info('Single-flight: returning existing in-flight result for local_context', {
      key: singleFlightKey,
    });
    return inFlightRequests.get(singleFlightKey)!;
  }

  const execute = async () => {
    try {
      const {
        // New enhanced parameters
        query,
        taskType = 'understand',
        maxSimilarChunks = 20,
        maxTokens = 3000,
        // If undefined, we will auto-generate embeddings when missing
        generateEmbeddingsIfMissing,
        useProjectHintsCache = true,
        astQueries = [],
        attackPlan = 'auto',
        format = 'enhanced',
        excludePatterns = [],
        // Legacy parameters for backward compatibility
        projectPath,
        folderPath,
        useEmbeddings = false,
        embeddingSimilarityThreshold = 0.2,
      } = args;

      const excludeRegexes = compileExcludePatterns(excludePatterns);

      // Validate and resolve project path (now required)
      const resolvedProjectPath = validateAndResolvePath(projectPath);

      // Enhanced mode: use new local_context implementation if query is provided
      if (query && (format === 'enhanced' || format === 'system-map') && !folderPath) {
        // Auto-enable embeddings when local storage is enabled and we have a query
        const localStorageEnabled = process.env.USE_LOCAL_EMBEDDINGS === 'true';
        const enhancedAvailable = EnhancedSemanticCompactor.isEnhancedModeAvailable();

        if (localStorageEnabled && enhancedAvailable) {
          try {
            logger.info('Auto-enabled embeddings (local storage active)', {
              projectPath: resolvedProjectPath,
              threshold: embeddingSimilarityThreshold,
              maxChunks: maxSimilarChunks,
            });

            // Query-scoped file selection via quick AST pre-pass
            let filePatterns: string[] | undefined;
            try {
              const { localContext } = await import('./enhancedLocalContext');
              const prepass = await localContext({
                projectPath: resolvedProjectPath,
                query,
                taskType: taskType as any,
                maxSimilarChunks: Math.max(10, maxSimilarChunks),
                maxTokens: Math.min(1500, maxTokens),
                useProjectHintsCache,
                attackPlan: attackPlan as any,
                excludePatterns,
              });
              if (prepass?.jumpTargets?.length) {
                const uniqueFiles = Array.from(
                  new Set(prepass.jumpTargets.map((t: any) => t.file).filter(Boolean))
                );
                filePatterns = uniqueFiles.map((absOrRel: string) => {
                  const rel = path.isAbsolute(absOrRel)
                    ? path.relative(resolvedProjectPath, absOrRel)
                    : absOrRel;
                  // Use exact file paths as patterns
                  return rel.replace(/\\/g, '/');
                });
                logger.info('Query-scoped embedding file set prepared', {
                  files: filePatterns.slice(0, 10),
                  total: filePatterns.length,
                });
              }
            } catch (preErr) {
              logger.warn('Query-scoped pre-pass failed; proceeding without scoped patterns', {
                error: preErr instanceof Error ? preErr.message : String(preErr),
              });
            }

            const enhancedResult = await enhancedSemanticCompactor.generateEnhancedContext({
              projectPath: resolvedProjectPath,
              maxTokens,
              query,
              taskType,
              format,
              useEmbeddings: true,
              embeddingSimilarityThreshold,
              maxSimilarChunks,
              // Default to generating embeddings on first run unless explicitly disabled
              generateEmbeddingsIfMissing: generateEmbeddingsIfMissing !== false,
              embeddingOptions: {
                batchSize: 48,
                rateLimit: 0,
                maxChunkSize: 1800,
                filePatterns,
              },
              excludePatterns,
            });

            // Enforce final token cap on returned content
            const cappedContent = truncateToTokens(enhancedResult.content, maxTokens);
            const cappedTokens = estimateTokensShared(cappedContent);
            return {
              success: true,
              compactedContent: cappedContent,
              metadata: {
                originalTokens: Math.round(
                  (enhancedResult.metadata.tokenCount || 0) /
                    (enhancedResult.metadata.compressionRatio || 1)
                ),
                compactedTokens: cappedTokens,
                compressionRatio: enhancedResult.metadata.compressionRatio || 1,
                filesProcessed: enhancedResult.metadata.includedFiles || 0,
                symbolsFound: 0,
                symbolsAfterCompaction: 0,
                processingTimeMs: 0,
                format,
                embeddingsUsed: enhancedResult.metadata.embeddingsUsed,
                similarChunksFound: enhancedResult.metadata.similarChunksFound,
              },
              usage: `Enhanced context with ${enhancedResult.metadata.embeddingsUsed ? 'embeddings' : 'base compaction'}: ${cappedTokens} tokens (cap=${maxTokens})`,
            };
          } catch (error) {
            logger.warn('Auto-embedding path failed, falling back to enhanced AST mode', {
              error: error instanceof Error ? error.message : String(error),
            });
          }
        }

        logger.info('Using enhanced local context mode', {
          query,
          taskType,
          attackPlan,
          maxTokens,
          maxSimilarChunks,
        });

        try {
          const { localContext } = await import('./enhancedLocalContext');
          const enhancedResult = await localContext({
            projectPath: resolvedProjectPath,
            query,
            taskType: taskType as any,
            maxSimilarChunks,
            maxTokens,
            generateEmbeddingsIfMissing,
            useProjectHintsCache,
            astQueries,
            attackPlan: attackPlan as any,
            excludePatterns,
          });

          if (enhancedResult.success) {
            return {
              success: true,
              compactedContent: formatEnhancedContextOutput(enhancedResult, maxTokens),
              metadata: enhancedResult.metadata,
              usage: `Enhanced context analysis: ${enhancedResult.metadata.bundleTokens} tokens in ${enhancedResult.jumpTargets.length} locations`,
              enhanced: true,
              jumpTargets: enhancedResult.jumpTargets,
              answerDraft: enhancedResult.answerDraft,
              nextActions: enhancedResult.next,
              evidence: enhancedResult.evidence,
            };
          } else {
            // Fall back to legacy mode if enhanced mode fails
            logger.warn('Enhanced mode failed, falling back to legacy mode');
          }
        } catch (error) {
          logger.warn('Enhanced local context failed, using legacy mode', {
            error: error instanceof Error ? error.message : String(error),
          });
        }

        // System Map mode: use shared retriever to build an architecture overview
        if (format === 'system-map') {
          try {
            logger.info('Using System Map format', { query });

            // Use shared retriever to get relevant chunks
            const relevantChunks = await sharedRetriever.retrieve(query, 'overview');

            // Compose System Map
            const systemMap = await systemMapComposer.composeSystemMap(query, relevantChunks);

            // Format as markdown
            const systemMapMarkdown = formatSystemMapAsMarkdown(systemMap);

            // Estimate tokens and return
            const tokenCount = estimateTokensShared(systemMapMarkdown);
            return {
              success: true,
              compactedContent: truncateToTokens(systemMapMarkdown, maxTokens),
              metadata: {
                originalTokens: tokenCount,
                compactedTokens: Math.min(tokenCount, maxTokens),
                compressionRatio: 1.0,
                filesProcessed: systemMap.metadata.totalChunksUsed,
                symbolsFound: 0,
                symbolsAfterCompaction: 0,
                processingTimeMs: systemMap.metadata.processingTimeMs,
                format: 'system-map',
                coveragePct: systemMap.metadata.coveragePct,
                anchorsHit: systemMap.metadata.anchorsHit,
                queryFacets: systemMap.metadata.queryFacets,
              },
              usage: `System Map analysis: ${Math.min(tokenCount, maxTokens)} tokens with ${systemMap.metadata.coveragePct * 100}% coverage`,
            };
          } catch (error) {
            logger.warn('System Map generation failed, falling back to enhanced mode', {
              error: error instanceof Error ? error.message : String(error),
            });
          }
        }
      }

      // resolvedProjectPath is already computed above
      logger.info('Starting local semantic compaction', {
        originalPath: projectPath,
        resolvedPath: resolvedProjectPath,
        folderPath,
        maxTokens,
        taskType,
        useEmbeddings,
        enhancedModeAvailable: EnhancedSemanticCompactor.isEnhancedModeAvailable(),
      });

      try {
        // Check if we should use the enhanced compactor with embeddings
        const canUseEmbeddings =
          useEmbeddings &&
          query &&
          !folderPath && // Don't use embeddings for folder-specific analysis yet
          EnhancedSemanticCompactor.isEnhancedModeAvailable();

        if (canUseEmbeddings) {
          logger.info('Using enhanced semantic compactor with embeddings', {
            query,
            threshold: embeddingSimilarityThreshold,
            maxChunks: maxSimilarChunks,
          });

          logger.debug('Calling enhanced semantic compactor with embeddings');
          const enhancedResult = await enhancedSemanticCompactor.generateEnhancedContext({
            projectPath: resolvedProjectPath,
            maxTokens,
            query,
            taskType,
            format,
            useEmbeddings: true,
            embeddingSimilarityThreshold,
            maxSimilarChunks,
            generateEmbeddingsIfMissing,
            excludePatterns,
          });

          logger.debug('Enhanced compactor result', {
            contentLength: enhancedResult.content?.length || 0,
            metadata: enhancedResult.metadata,
            metadataKeys: Object.keys(enhancedResult.metadata || {}),
          });

          // Return properly structured response for the enhanced path
          const cappedContent = truncateToTokens(enhancedResult.content, maxTokens);
          const cappedTokens = estimateTokensShared(cappedContent);
          return {
            success: true,
            compactedContent: cappedContent,
            metadata: {
              originalTokens: Math.round(
                (enhancedResult.metadata.tokenCount || 0) /
                  (enhancedResult.metadata.compressionRatio || 1)
              ),
              compactedTokens: cappedTokens,
              compressionRatio: enhancedResult.metadata.compressionRatio || 1,
              filesProcessed: enhancedResult.metadata.includedFiles || 0,
              symbolsFound: 0, // Enhanced compactor doesn't track symbols the same way
              symbolsAfterCompaction: 0, // Enhanced compactor doesn't track symbols the same way
              processingTimeMs: 0, // Could add timing to enhanced compactor
              format,
              embeddingsUsed: enhancedResult.metadata.embeddingsUsed,
              similarChunksFound: enhancedResult.metadata.similarChunksFound,
            },
            usage: `Enhanced context with ${enhancedResult.metadata.embeddingsUsed ? 'embeddings' : 'base compaction'}: ${cappedTokens} tokens (cap=${maxTokens})`,
          };
        }

        // Fall back to standard semantic compaction
        logger.info('Using standard semantic compaction', {
          reason: !canUseEmbeddings
            ? 'Enhanced mode not available or not requested'
            : 'Folder-specific analysis',
        });

        // Handle folder-specific analysis if folderPath is provided, or general analysis with exclude patterns
        let analysisPath = resolvedProjectPath;
        const cleanupTempDir: (() => Promise<void>) | null = null;

        if ((folderPath && folderPath !== '.') || excludeRegexes.length > 0) {
          const fs = require('fs').promises;
          const path = require('path');
          const os = require('os');
          const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'semantic-compact-'));
          logger.info('Creating folder-specific analysis in temp directory', { tempDir });

          try {
            const { FileDiscovery } = await import('../../core/compactor/fileDiscovery.js');
            const fileDiscovery = new FileDiscovery(resolvedProjectPath, {
              maxFileSize: 200000,
            });
            let allFiles = await fileDiscovery.discoverFiles();
            const originalFileCount = allFiles.length;

            if (excludeRegexes.length > 0) {
              allFiles = allFiles.filter(file => !isExcludedPath(file.relPath, excludeRegexes));
              logger.info('Applied exclude patterns to files', {
                originalCount: originalFileCount,
                filteredCount: allFiles.length,
                excludedCount: originalFileCount - allFiles.length,
                excludePatterns,
              });
            }

            let filteredFiles = allFiles;
            if (folderPath && folderPath !== '.') {
              let normalizedFolderPath = folderPath.replace(/[\/\\]/g, path.sep);
              if (normalizedFolderPath.startsWith('.' + path.sep)) {
                normalizedFolderPath = normalizedFolderPath.substring(2);
              }
              if (path.isAbsolute(normalizedFolderPath)) {
                const relative = path.relative(resolvedProjectPath, normalizedFolderPath);
                normalizedFolderPath = relative.startsWith('..')
                  ? path.basename(normalizedFolderPath)
                  : relative;
              }

              filteredFiles = allFiles.filter(file => {
                const normalizedFilePath = file.relPath.replace(/[\/\\]/g, path.sep);
                return (
                  normalizedFilePath.startsWith(normalizedFolderPath + path.sep) ||
                  normalizedFilePath === normalizedFolderPath
                );
              });

              logger.info('Folder-specific semantic compaction', {
                originalFolderPath: folderPath,
                normalizedFolderPath,
                resolvedProjectPath,
                totalFiles: allFiles.length,
                filteredFiles: filteredFiles.length,
                filesFound: filteredFiles.map(f => f.relPath).slice(0, 5),
                sampleFiles: allFiles.slice(0, 5).map(f => f.relPath),
              });

              if (filteredFiles.length === 0) {
                throw new Error(`No files found in folder: ${folderPath}`);
              }
            }

            for (const file of filteredFiles) {
              const sourcePath = file.absPath;
              const targetPath = path.join(tempDir, file.relPath);
              await fs.mkdir(path.dirname(targetPath), { recursive: true });
              await fs.copyFile(sourcePath, targetPath);
            }

            analysisPath = tempDir;
            logger.info('Temporary directory created for folder analysis', {
              tempDir,
              filesCopied: filteredFiles.length,
            });

            process.on('exit', () => {
              fs.rm(tempDir, { recursive: true, force: true }).catch(() => {});
            });
            process.on('beforeExit', () => {
              fs.rm(tempDir, { recursive: true, force: true }).catch(() => {});
            });
          } catch (error) {
            await fs.rm(tempDir, { recursive: true, force: true });
            throw error;
          }
        }

        // Create semantic compactor instance (self-contained, no external deps)
        const compactor = new SemanticCompactor(analysisPath, {
          maxTotalTokens: maxTokens,
          supportedLanguages: ['typescript', 'javascript', 'python', 'go', 'rust'],
          includeSourceCode: false, // Keep lightweight - just signatures and docs
          prioritizeExports: true,
          includeDocstrings: true,
        });

        // Create relevance context if query provided
        const relevanceContext = query
          ? {
              query,
              taskType,
              maxTokens,
            }
          : undefined;

        // Process and compact - all local, no external API calls
        const result = await compactor.compact(relevanceContext);

        // Clean up resources
        compactor.dispose();

        // Clean up temp directory if it was created
        if (analysisPath !== resolvedProjectPath) {
          await require('fs').promises.rm(analysisPath, { recursive: true, force: true });
          logger.info('Cleaned up temporary directory', { tempDir: analysisPath });
        }

        const originalTokens = Math.round(result.totalTokens / result.compressionRatio);
        logger.info('Semantic compaction completed', {
          originalTokens,
          compactedTokens: result.totalTokens,
          compressionRatio: result.compressionRatio,
        });

        // Format the output based on preference
        let formattedContent = formatContextOutput(result, format, {
          originalTokens,
          query,
          taskType,
          projectPath: resolvedProjectPath,
        });

        // Enforce hard token cap on final content
        formattedContent = truncateToTokens(formattedContent, maxTokens);

        logger.debug('Building final response metadata', {
          resultExists: !!result,
          resultType: typeof result,
          resultProcessingStats: result?.processingStats,
          resultTotalTokens: result?.totalTokens,
          resultCompressionRatio: result?.compressionRatio,
          originalTokens,
          filesProcessed_raw: result.processingStats?.filesProcessed,
          totalSymbols_raw: result.processingStats?.totalSymbols,
          symbolsAfterDeduplication_raw: result.processingStats?.symbolsAfterDeduplication,
          processingTimeMs_raw: result.processingStats?.processingTimeMs,
        });

        const responseMetadata = {
          originalTokens,
          compactedTokens: result.totalTokens || 0,
          compressionRatio: result.compressionRatio || 1,
          filesProcessed: result.processingStats?.filesProcessed || 0,
          symbolsFound: result.processingStats?.totalSymbols || 0,
          symbolsAfterCompaction: result.processingStats?.symbolsAfterDeduplication || 0,
          processingTimeMs: result.processingStats?.processingTimeMs || 0,
          format,
        };

        logger.debug('Final response metadata created', { metadata: responseMetadata });

        const finalTokens = estimateTokensShared(formattedContent);
        return {
          success: true,
          compactedContent: formattedContent,
          metadata: responseMetadata,
          usage: `Reduced context from ${originalTokens} to ${finalTokens} tokens (${Math.round((result.compressionRatio || 1) * 100)}% compression, cap=${maxTokens})`,
        };
      } catch (error) {
        logger.error('Semantic compaction failed', {
          error: error instanceof Error ? error.message : String(error),
        });

        return {
          success: false,
          error: error instanceof Error ? error.message : String(error),
          fallback: `Basic project context for ${projectPath} - semantic compaction failed. Try local_project_hints for navigation assistance.`,
        };
      }
    } catch (error) {
      logger.error('Enhanced handler execution failed', {
        error: error instanceof Error ? error.message : String(error),
      });
      throw error;
    }
  };

  const executionPromise = execute();
  inFlightRequests.set(singleFlightKey, executionPromise);
  try {
    const result = await executionPromise;
    return result;
  } finally {
    inFlightRequests.delete(singleFlightKey);
  }
}
```
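The deduping at the top and bottom of the handler is an instance of the single-flight pattern. As a minimal, generic sketch of the same idea (the `singleFlight` helper name and generic signature are mine, not from the codebase):

```typescript
// Minimal single-flight sketch: concurrent callers with the same key share one promise.
const inFlight = new Map<string, Promise<unknown>>();

async function singleFlight<T>(key: string, run: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>; // reuse the in-flight result
  const promise = run();
  inFlight.set(key, promise);
  try {
    return await promise;
  } finally {
    inFlight.delete(key); // always clear the slot, on success or failure
  }
}
```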
- Tool definition including the name 'local_context' and the complete inputSchema used for validation.

```typescript
export const localSemanticCompactTool = {
  name: 'local_context',
  description:
    'Enhanced local context with deterministic query-aware retrieval, AST-grep, and actionable intelligence. Provides: (1) deterministic AnswerDraft, (2) ranked JumpTargets, (3) tight MiniBundle (≤3k tokens), (4) NextActions, all using AST + static heuristics. Optional embedding enhancement when available. Completely offline with zero external dependencies for core functionality.',
  inputSchema: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description:
          'Query to focus analysis (required for enhanced mode). Example: "How does database connection and local database storage work?"',
      },
      taskType: {
        type: 'string',
        enum: ['understand', 'debug', 'trace', 'spec', 'test'],
        default: 'understand',
        description: 'Type of analysis task - affects query processing and output format',
      },
      maxSimilarChunks: {
        type: 'number',
        default: 20,
        minimum: 5,
        maximum: 50,
        description:
          'Maximum number of semantically similar code chunks to retrieve. Higher values (30-50) provide broader coverage for exploration; lower values (10-15) focus on highly relevant matches. Default 20 balances breadth and relevance.',
      },
      maxTokens: {
        type: 'number',
        default: 3000,
        minimum: 1000,
        maximum: 8000,
        description: 'Token budget for mini-bundle assembly',
      },
      generateEmbeddingsIfMissing: {
        type: 'boolean',
        default: false,
        description:
          'Generate embeddings if missing (requires OpenAI API key) - leave false for pure AST mode',
      },
      useProjectHintsCache: {
        type: 'boolean',
        default: true,
        description: 'Reuse project_hints indices for faster processing',
      },
      astQueries: {
        type: 'array',
        items: { type: 'object' },
        description: 'Optional custom AST queries to supplement automatic detection',
      },
      attackPlan: {
        type: 'string',
        enum: ['auto', 'init-read-write', 'api-route', 'error-driven', 'auth'],
        default: 'auto',
        description:
          'Analysis strategy: auto-detect from query, or specify: init-read-write (DB/storage), api-route (endpoints), auth (authentication), error-driven (debugging)',
      },
      projectPath: {
        type: 'string',
        description: 'Project directory path. Required. Can be absolute or relative to workspace.',
      },
      folderPath: {
        type: 'string',
        description:
          'Analyze specific folder (falls back to legacy mode if enhanced analysis unavailable)',
      },
      format: {
        type: 'string',
        enum: ['xml', 'structured', 'compact', 'enhanced', 'system-map'],
        default: 'enhanced',
        description:
          'Output format: enhanced (new format with jump targets), system-map (architecture overview), structured (legacy), compact, xml',
      },
      excludePatterns: {
        type: 'array',
        items: { type: 'string' },
        description:
          'Additional patterns to exclude from analysis (e.g., ["*.md", "docs/**", "*.test.js"])',
      },
      useEmbeddings: {
        type: 'boolean',
        default: false,
        description: 'Use embeddings for similarity search if available (legacy parameter)',
      },
      embeddingSimilarityThreshold: {
        type: 'number',
        default: 0.2,
        minimum: 0.0,
        maximum: 1.0,
        description:
          'Minimum similarity score (0.0-1.0) for including chunks. Lower values (0.15-0.2) cast a wider net for related code; higher values (0.25-0.35) return only close matches. Use lower thresholds when exploring unfamiliar code.',
      },
    },
    required: ['query', 'projectPath'],
  },
};
```
- src/index.ts:135-142 (registration): Registers the local_context handler and tool conditionally, based on allowLocalContext (local embeddings or Ambiance API).

```typescript
this.handlers = {
  ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
  local_project_hints: handleProjectHints,
  local_file_summary: handleFileSummary,
  frontend_insights: handleFrontendInsights,
  local_debug_context: handleLocalDebugContext,
  ast_grep_search: handleAstGrep,
};
```
- Core helper function called by handleSemanticCompact for enhanced deterministic retrieval: AST-grep queries, candidate ranking, snippet assembly, and template-based answers.

```typescript
export async function localContext(req: LocalContextRequest): Promise<LocalContextResponse> {
  const startTime = Date.now();

  logger.info('Enhanced local context request', {
    projectPath: req.projectPath,
    query: req.query,
    taskType: req.taskType,
    attackPlan: req.attackPlan,
    maxTokens: req.maxTokens,
  });

  // Validate that projectPath is provided
  if (!req.projectPath) {
    throw new Error(
      'projectPath is required. Please provide an absolute path to the project directory.'
    );
  }

  // Set defaults
  const request = {
    taskType: 'understand',
    maxSimilarChunks: 20,
    maxTokens: 3000,
    generateEmbeddingsIfMissing: false,
    useProjectHintsCache: true,
    attackPlan: 'auto',
    ...req,
  } as Required<LocalContextRequest>;

  try {
    // 1. Load project indices (reuse project_hints cache)
    const indices = await loadProjectIndices(request.projectPath, request.useProjectHintsCache);

    // 2. Choose attack plan
    const plan = chooseAttackPlan(request.attackPlan, request.query);
    const topic = detectTopic(request.query);

    // 3. Build DSL queries for this plan
    const dslQueries = buildDslQueriesForPlan(plan, request.query, request.astQueries);

    // 3.5 Topic-aware file prioritization and stoplist filtering
    const prioritizedFiles = prioritizeFilesForTopic(indices.files, topic);
    const customExcludePatterns = request.excludePatterns || [];
    const allExcludePatterns = [...UNIVERSAL_NEGATIVES, ...customExcludePatterns];
    const excludeMatchers = compileExcludePatterns(allExcludePatterns);
    const filteredFiles = prioritizedFiles.filter(
      file => !isExcludedPath(file.relPath, excludeMatchers)
    );

    // 4. Run AST queries to find matches
    const astMatches = await runAstQueries(filteredFiles, dslQueries);

    // 4.5 Family/topic-aware detectors beyond generic AST (no embeddings)
    const extraCandidates: CandidateSymbol[] = [];
    if (topic === 'api') {
      const apiExtras = await gatherApiRouteCandidates(filteredFiles);
      extraCandidates.push(...apiExtras);
    } else if (topic === 'components') {
      const compExtras = await gatherComponentCandidates(filteredFiles);
      extraCandidates.push(...compExtras);
    } else if (topic === 'db') {
      const dbExtras = await gatherDbSchemaCandidates(filteredFiles);
      extraCandidates.push(...dbExtras);
    }

    // 5. Generate and rank candidates
    const allMatches = [...astMatches, ...extraCandidates];
    const candidates = await rankCandidates(allMatches, indices, request.query, plan);

    // 6. Select top jump targets (respect maxSimilarChunks)
    let jumpTargets = selectJumpTargets(candidates, {
      max: Math.max(1, Math.min(request.maxSimilarChunks, 20)),
    });
    if (jumpTargets.length === 0 && candidates.length > 0) {
      const candidate = candidates[0];
      jumpTargets = [
        {
          file: candidate.file,
          symbol: candidate.symbol,
          start: candidate.start,
          end: candidate.end,
          role: candidate.role || inferRoleFromSymbol(candidate.symbol),
          confidence: candidate.score,
          why: candidate.reasons,
        },
      ];
    }

    // 7. Build mini-bundle with token budget
    const miniBundle = await buildMiniBundle(jumpTargets, indices.files, request.maxTokens);

    // 8. Generate deterministic answer draft
    const answerDraft = await generateDeterministicAnswer(
      plan,
      request.taskType,
      jumpTargets,
      indices
    );

    // 9. Compute next actions
    const nextActions = computeNextActions(jumpTargets, request.taskType);

    // 10. Build evidence list
    const evidence = buildEvidence(jumpTargets, astMatches);

    // 11. Build LLM-ready bundle with anchors/neighbors/env hints
    const llmBundle = buildLLMBundle({
      query: request.query,
      topic,
      ranked: candidates,
      files: indices.files,
      importGraph: await getOrBuildImportGraph(indices),
      envKeys: (indices.env || []).map((e: any) => e.key),
      fingerprint: await fingerprintRepo(indices.files),
    });

    const processingTimeMs = Date.now() - startTime;

    return {
      success: true,
      answerDraft,
      jumpTargets,
      miniBundle,
      next: nextActions,
      evidence,
      metadata: {
        filesScanned: indices.files.length,
        symbolsConsidered: candidates.length,
        originalTokens: 0,
        compactedTokens: 0,
        bundleTokens: miniBundle.reduce((sum, item) => sum + estimateTokensShared(item.snippet), 0),
        processingTimeMs,
      },
      llmBundle,
    };
  } catch (error) {
    logger.error('Enhanced local context failed', {
      error: error instanceof Error ? error.message : String(error),
      query: req.query,
    });

    return {
      success: false,
      answerDraft: `Unable to analyze query "${req.query}". ${error instanceof Error ? error.message : String(error)}`,
      jumpTargets: [],
      miniBundle: [],
      next: { mode: 'project_research', openFiles: [], checks: [] },
      evidence: [],
      metadata: {
        filesScanned: 0,
        symbolsConsidered: 0,
        originalTokens: 0,
        compactedTokens: 0,
        bundleTokens: 0,
        processingTimeMs: Date.now() - startTime,
      },
    };
  }
}
```
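For callers that bypass the MCP handler, localContext can also be invoked directly. A minimal usage sketch, assuming a hypothetical project path and query; the request fields mirror the LocalContextRequest defaults shown above:

```typescript
// Direct call to the core retrieval helper (values are illustrative).
const result = await localContext({
  projectPath: '/home/user/my-project', // hypothetical path
  query: 'Where are API routes registered?',
  taskType: 'trace',
  attackPlan: 'api-route',
  maxSimilarChunks: 15,
  maxTokens: 2500,
  generateEmbeddingsIfMissing: false,
  useProjectHintsCache: true,
});

if (result.success) {
  console.log(result.answerDraft);
  for (const target of result.jumpTargets) {
    console.log(`${target.file}:${target.start}-${target.end} (${target.symbol})`);
  }
}
```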
- src/tools/localTools/index.ts:129-137 (registration): Re-exports and registers the local_context tool and handler for import into src/index.ts.

```typescript
export const localHandlers = {
  ...(allowLocalContext ? { local_context: handleSemanticCompact } : {}),
  local_project_hints: handleProjectHints,
  local_file_summary: handleFileSummary,
  frontend_insights: handleFrontendInsights,
  local_debug_context: handleLocalDebugContext,
  manage_embeddings: handleManageEmbeddings,
  ast_grep_search: handleAstGrep,
};
```
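A dispatch through this handler map might look like the following sketch (it assumes allowLocalContext resolved to true, so the key is present; the arguments are placeholders):

```typescript
// Looking up and invoking the registered handler (illustrative only).
const handler = localHandlers['local_context'];
if (handler) {
  const response = await handler({
    query: 'How is authentication handled?',
    projectPath: '/home/user/my-project', // hypothetical path
  });
  console.log(response.usage);
}
```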