We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/artk0de/TeaRAGs-MCP'
{"id":"tea-rags-mcp-1d1","title":"Add worker_threads ChunkerPool for parallel AST parsing","status":"closed","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T19:48:32.51819+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T20:32:04.49369+03:00","closed_at":"2026-02-04T20:32:04.49369+03:00","close_reason":"ChunkerPool with worker_threads implemented. 4 worker threads by default, simple first-free dispatch, integrated into indexCodebase() and reindexChanges(). 7 tests, all 1059 tests pass, clean build.","labels":["performance"],"dependencies":[{"issue_id":"tea-rags-mcp-1d1","depends_on_id":"tea-rags-mcp-lmk","type":"blocks","created_at":"2026-02-04T19:48:37.527354+03:00","created_by":"Artur Korochanskii"}]}
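The close_reason above mentions "simple first-free dispatch" across 4 worker threads. A minimal sketch of that dispatch policy, with plain async tasks standing in for real `worker_threads` (the class and method names here are illustrative, not the project's actual `ChunkerPool` API):

```typescript
// Sketch: first-free dispatch over a fixed set of worker slots.
// When every slot is busy, tasks wait in a FIFO queue.
class FirstFreePool {
  private busy: boolean[];
  private waiters: Array<() => void> = [];

  constructor(size = 4) {
    this.busy = new Array(size).fill(false);
  }

  private acquire(): Promise<number> {
    const free = this.busy.indexOf(false); // first free worker wins
    if (free !== -1) {
      this.busy[free] = true;
      return Promise.resolve(free);
    }
    // All busy: wait until some worker is released, then retry.
    return new Promise<void>(res => this.waiters.push(() => res())).then(() =>
      this.acquire(),
    );
  }

  private release(id: number): void {
    this.busy[id] = false;
    this.waiters.shift()?.();
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    const id = await this.acquire();
    try {
      return await task(); // e.g. parse one file's AST in that worker
    } finally {
      this.release(id);
    }
  }
}
```

In the real pool the task would post a file to a `worker_threads` Worker and await its message; the queueing logic is the part the close note describes.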
{"id":"tea-rags-mcp-2cr","title":"EMBEDDING_BATCH_SIZE should be the pipeline accumulator size, not split batches inside ollama","description":"Currently EMBEDDING_BATCH_SIZE is used inside ollama.ts to split the incoming array into HTTP batches. Batching should move up to the pipeline accumulator: EMBEDDING_BATCH_SIZE controls the BatchAccumulator buffer size (instead of CODE_BATCH_SIZE). Inside ollama embedBatch, simply send everything received as a single request.","status":"closed","priority":1,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:21:27.892457+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T18:40:39.044042+03:00","closed_at":"2026-02-04T18:40:39.044042+03:00","close_reason":"Closed","labels":["performance"]}
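The issue above moves batching into a pipeline-level accumulator. A rough sketch of that shape, assuming the accumulator flushes whole batches downstream so the embedder never re-splits (the real BatchAccumulator interface may differ):

```typescript
// Sketch: EMBEDDING_BATCH_SIZE governs this buffer; the flush callback
// receives a full batch and would send it as one embedBatch request.
class BatchAccumulator<T> {
  private buffer: T[] = [];

  constructor(
    private batchSize: number,
    private flush: (batch: T[]) => void,
  ) {}

  add(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.batchSize) this.drain();
  }

  // Called when the buffer fills, and once more at end of stream.
  drain(): void {
    if (this.buffer.length === 0) return;
    this.flush(this.buffer);
    this.buffer = [];
  }
}

// With a batch size of 3, chunks reach the flush callback in whole batches.
const batches: number[][] = [];
const acc = new BatchAccumulator<number>(3, b => batches.push(b));
[1, 2, 3, 4, 5].forEach(n => acc.add(n));
acc.drain(); // end of stream: flush the partial tail
```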
{"id":"tea-rags-mcp-4zh","title":"Refactor indexer into modules","status":"open","priority":3,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-11T19:05:11.057498+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-11T19:05:11.057498+03:00"}
{"id":"tea-rags-mcp-5a5","title":"DX: Auto-remind to update docs on tool changes","description":"Problem: Claude doesn't update README and documentation automatically.\n\nSolution: PostToolUse hook that reminds about docs when src/tools/ or schemas.ts changes.\n\nFiles to create:\n- .claude/hooks/check-docs-needed.sh\n- Update .claude/settings.json with PostToolUse hook","status":"closed","priority":1,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:57:13.278336+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:01:18.685663+03:00","closed_at":"2026-02-03T00:01:18.685663+03:00","close_reason":"Created .claude/hooks/check-docs-needed.sh and .claude/settings.json with PostToolUse hook","labels":["dx"]}
{"id":"tea-rags-mcp-74o","title":"Add importedBy for blast radius","description":"## Goal\nTrack reverse dependencies (who imports this file) for true blast radius.\n\n## Implementation\n```typescript\n// After all files processed\nconst importedBy = new Map\u003cstring, string[]\u003e();\n\nfor (const [file, imports] of allImports) {\n for (const imp of imports) {\n const resolved = resolveImportPath(imp, file);\n if (resolved) {\n if (!importedBy.has(resolved)) importedBy.set(resolved, []);\n importedBy.get(resolved).push(file);\n }\n }\n}\n\n// Update Qdrant payloads\n```\n\n## Import Resolution\n- Relative: ./utils → src/utils.ts or src/utils/index.ts\n- Aliases: @/ → tsconfig.paths resolution\n- External: ignore (node_modules)\n\n## Schema Changes\n- Add `importedBy: string[]` to payload\n- Add `importedByCount: number` for easy filtering\n\n## Estimated Cost\n- MVP (relative only): 2-3 hours, ~70% coverage\n- Full (with aliases): 1-2 days, ~95% coverage","status":"open","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:47.597423+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:00:47.597423+03:00","labels":["metrics"]}
{"id":"tea-rags-mcp-75k","title":"Add git data caching","description":"## Goal\nCache git analysis results for fast incremental updates.\n\n## Cache Structure\n```typescript\ninterface GitCache {\n version: number;\n headCommit: string;\n indexedAt: number;\n fileChurn: Record\u003cstring, number\u003e;\n blameData: Record\u003cstring, BlameResult\u003e;\n dependencyGraph: {\n imports: Record\u003cstring, string[]\u003e;\n importedBy: Record\u003cstring, string[]\u003e;\n };\n}\n```\n\n## Cache Location\n```\n.tea-rags/\n├── cache/\n│ └── \u003ccollection-hash\u003e.json\n└── config.json\n```\n\n## Invalidation Strategy\n```typescript\nif (cache.headCommit !== currentHead) {\n const changedFiles = getChangedFiles(cache.headCommit, currentHead);\n // Only reprocess changed files\n // Update affected dependency graph nodes\n}\n```\n\n## Expected Speedup\n- No changes: 10-100x faster (just verify HEAD)\n- Few changes: 5-20x faster (incremental)","status":"open","priority":3,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:59.425133+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:00:59.425133+03:00","labels":["performance"],"dependencies":[{"issue_id":"tea-rags-mcp-75k","depends_on_id":"tea-rags-mcp-h4v","type":"blocks","created_at":"2026-02-03T02:03:38.317553+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-9n5","title":"Add lazy enrichment fallback","description":"## Goal\nOn-demand git data fetching when rerank preset needs it but data is missing.\n\n## Use Cases (Fallback)\n- Background enrichment not yet completed\n- Enrichment failed for some files\n- Minimal storage mode (don't pre-compute unused metrics)\n\n## Implementation\n```typescript\nasync function rerank(results: SearchResult[], preset: string) {\n const needsGitData = ['techDebt', 'hotspots', 'codeReview', 'ownership'].includes(preset);\n const needsImportedBy = preset === 'impactAnalysis';\n \n // Check if enrichment data exists\n const missingGit = needsGitData \u0026\u0026 results.some(r =\u003e !r.payload.fileChurnCount);\n const missingDeps = needsImportedBy \u0026\u0026 results.some(r =\u003e !r.payload.importedBy);\n \n if (missingGit || missingDeps) {\n // On-demand enrichment for these specific files\n const filesToEnrich = results\n .filter(r =\u003e !r.payload.fileChurnCount || !r.payload.importedBy)\n .map(r =\u003e r.payload.relativePath);\n \n await enrichFiles(filesToEnrich, { git: missingGit, deps: missingDeps });\n \n // Re-fetch results with enriched data\n return refetchAndRerank(results, preset);\n }\n \n return applyRerank(results, preset);\n}\n```\n\n## Notes\n- Low priority - two-phase indexing covers 95% of cases\n- Adds latency to first query if data missing (~200-500ms)\n- Consider caching enriched data for subsequent queries\n\n## Not blocking main roadmap - future optimization only","status":"open","priority":4,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:05:29.274627+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:05:29.274627+03:00","labels":["architecture"]}
{"id":"tea-rags-mcp-b32","title":"ChunkerPool: smarter work distribution strategy","description":"Current ChunkerPool uses simple first-free-worker dispatch. Potential improvements:\n- Least-busy worker selection (track queue depth per worker)\n- File-size-aware dispatch (large files to less loaded workers)\n- Language affinity (same language to same worker — warm parser cache)\n- Adaptive pool sizing based on CPU load\n\nLow priority — current simple dispatch already solves the main bottleneck (serialized AST parsing). Only worth exploring if profiling shows uneven worker utilization.","status":"open","priority":4,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T20:31:58.56751+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T20:31:58.56751+03:00","labels":["scaling"]}
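The first improvement listed above (least-busy selection via per-worker queue depth) reduces to a small selection function. A sketch only, not the pool's real interface:

```typescript
// Sketch: pick the worker with the shallowest queue instead of the first free one.
interface WorkerSlot {
  id: number;
  queueDepth: number; // number of files currently queued on this worker
}

function pickLeastBusy(workers: WorkerSlot[]): WorkerSlot {
  return workers.reduce((best, w) => (w.queueDepth < best.queueDepth ? w : best));
}

const workers: WorkerSlot[] = [
  { id: 0, queueDepth: 3 },
  { id: 1, queueDepth: 1 },
  { id: 2, queueDepth: 2 },
];
const chosen = pickLeastBusy(workers);
chosen.queueDepth++; // dispatch: account for the newly queued file
```

File-size-aware dispatch from the same list would weight `queueDepth` by pending bytes instead of file count.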
{"id":"tea-rags-mcp-bsf","title":"Update README and PERFORMANCE_TUNING: new variable names, add BATCH_FORMATION_TIMEOUT_MS","description":"1. Replace CODE_BATCH_SIZE with QDRANT_UPSERT_BATCH_SIZE in README and PERFORMANCE_TUNING. 2. Add BATCH_FORMATION_TIMEOUT_MS to the README variables table and to PERFORMANCE_TUNING. 3. Fix the EMBEDDING_CONCURRENCY default (4, not 1). 4. Clearly describe the difference between EMBEDDING_BATCH_SIZE (chunk accumulator for embedding) and QDRANT_UPSERT_BATCH_SIZE (upsert buffer for Qdrant).","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:21:31.336964+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T19:20:33.439896+03:00","closed_at":"2026-02-04T19:20:33.439896+03:00","close_reason":"Closed","labels":["docs"]}
{"id":"tea-rags-mcp-dpz","title":"impactAnalysis broken: imports not stored during indexing","description":"extractImportsExports() in metadata.ts is not called during indexing, so the imports field never reaches chunk payloads. As a result the impactAnalysis preset yields score = similarity/2 instead of accounting for dependencies.","status":"closed","priority":2,"issue_type":"bug","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T00:25:19.901764+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:45:02.184127+03:00","closed_at":"2026-02-03T00:45:02.184127+03:00","close_reason":"Implemented: imports now extracted at file level and stored in chunk payload","labels":["bugfix"]}
{"id":"tea-rags-mcp-h4v","title":"Implement two-phase indexing","description":"## Goal\nMake search available quickly, enrich metadata in background.\n\n## Architecture\n```\nPhase 1 (blocking, fast):\n├── Tree-sitter parse + chunk\n├── Complexity (same AST pass)\n├── Collect imports\n├── Batch embeddings\n├── Upsert to Qdrant\n└── → SEARCHABLE\n\nPhase 2 (background worker):\n├── Bulk git log → churn map\n├── Invert imports → importedBy\n├── Qdrant payload updates\n└── Cache for next run\n```\n\n## Implementation\n```typescript\nasync function indexCodebase(path: string) {\n // Phase 1\n const chunks = await parseAndChunk(path);\n await generateEmbeddings(chunks);\n await upsertToQdrant(chunks);\n \n console.log('✓ Searchable');\n \n // Phase 2 (non-blocking)\n setImmediate(async () =\u003e {\n const gitData = await collectGitData(path);\n const depGraph = buildDependencyGraph(chunks);\n await enrichPayloads(gitData, depGraph);\n console.log('✓ Enrichment complete');\n });\n}\n```\n\n## API Changes\n- Add `enrichmentStatus` to index_codebase response\n- Add `get_enrichment_status` tool","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:54.788736+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-11T18:26:48.696745+03:00","closed_at":"2026-02-11T18:26:48.696745+03:00","close_reason":"Two-phase indexing implemented: git blame decoupled from embedding pipeline","labels":["scaling"]}
{"id":"tea-rags-mcp-hfy","title":"Optimize git blame performance (87% of indexing time)","description":"Git blame is the dominant bottleneck: 6.2M ms (87.1%) of total indexing time across 1927 files.\n\n## Current Architecture\n- Per-file: git blame --porcelain -w + git log (2 subprocesses)\n- 50 concurrent file workers = up to 100 parallel git processes\n- All hit the same .git/objects/pack → I/O contention\n- L1 (memory) + L2 (disk) cache by content hash — helps on reindex, not first run\n\n## Optimization Strategies (ordered by effort/impact)\n\n### 1. GIT_BLAME_CONCURRENCY semaphore (low effort, high impact)\nSeparate concurrency limit for git subprocesses (default 10).\nFile parsing continues at 50 concurrency, but blame waits in queue.\nReduces pack file I/O contention.\n\n### 2. Lazy git log for commit bodies (low effort, medium impact)\ngit log -- \u003cfile\u003e currently runs for ALL files (for taskIds from merge commits).\nMake it lazy: only run git log if blame summary returned empty taskIds.\nEliminates ~50% of git subprocesses.\n\n### 3. Replace git blame with git log for file-level metadata (medium effort, high impact)\nMost metrics don't need line-level attribution:\n- commitCount, authors, taskIds, lastModifiedAt, firstCreatedAt = file-level\n- Only dominantAuthor per chunk needs line-level blame\nFast path: git log --format=... -- \u003cfile\u003e for everything except dominantAuthor.\nOptional slow path: git blame only when dominantAuthor accuracy matters.\n\n### 4. Build own git index + delta updates (medium effort, very high impact)\nFirst indexing: one pass git log --all --format=... --name-only → build own structure:\n Map\u003cfilePath, { authors, commits, lastModifiedAt, firstCreatedAt, taskIds }\u003e\nSave to ~/.tea-rags-mcp/git-cache/blame-index.json + remember lastIndexedCommit SHA.\nReindexing: git log \u003clastIndexedCommit\u003e..HEAD --format=... --name-only\nOne process, only new commits → delta update only affected files.\nTrade-off: lose line-level dominantAuthor (file-level instead). Acceptable — 87% time for one field.\n\n### 5. libgit2 / nodegit (preferred long-term solution)\nEliminate subprocess spawning entirely via native git bindings.\nDirect in-process packfile access — no I/O contention from concurrent processes.\nnodegit or libgit2 wasm bindings for blame + log.\nThis is the preferred approach as it solves all subprocess-related issues at once:\n- No fork/exec overhead\n- No pack file contention from multiple processes\n- Direct memory access to git objects\n- Can implement custom blame with early termination\nMajor refactor but the cleanest architectural solution.\n\n## Metrics to Beat\n- Current: 87.1% of indexing time (6.2M ms accumulated, ~124s wall clock)\n- Target: \u003c30% of indexing time\n- Benchmark: reindex_changes on ~1900 files with CODE_ENABLE_GIT_METADATA=true","status":"closed","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T21:09:01.667197+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-11T19:23:12.888976+03:00","closed_at":"2026-02-11T19:23:12.888976+03:00","close_reason":"Parallel git blame implemented: producer-consumer buffer overlaps blame with embedding. Wall time ~max(embed, blame) instead of ~sum.","labels":["performance"]}
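Strategy 1 above (a dedicated semaphore for git subprocesses) is small enough to sketch. `GIT_BLAME_CONCURRENCY` is the env var named in the issue; the semaphore itself and `blameFile` are illustrative:

```typescript
// Sketch: counting semaphore so file workers keep full parallelism
// while at most N git subprocesses run at once.
class Semaphore {
  private queue: Array<() => void> = [];
  private available: number;

  constructor(permits: number) {
    this.available = permits;
  }

  private async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    await new Promise<void>(res => this.queue.push(() => res()));
  }

  private release(): void {
    const next = this.queue.shift();
    if (next) next(); // hand the permit directly to the next waiter
    else this.available++;
  }

  async with<T>(fn: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await fn();
    } finally {
      this.release();
    }
  }
}

const blameGate = new Semaphore(Number(process.env.GIT_BLAME_CONCURRENCY ?? 10));

// File parsing stays unthrottled; only the blame subprocess waits for a permit.
async function blameFile(path: string): Promise<string> {
  return blameGate.with(async () => `blame:${path}`); // stand-in for spawning git blame
}
```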
{"id":"tea-rags-mcp-iir","title":"Rename CODE_BATCH_SIZE → QDRANT_UPSERT_BATCH_SIZE with backward compatibility","description":"Rename CODE_BATCH_SIZE to QDRANT_UPSERT_BATCH_SIZE everywhere in the code. Keep a fallback to the old name. Update types.ts, accumulator.ts, index.ts, debug-logger.ts. Remove CODE_BATCH_SIZE usage from the pipeline accumulator - it should use EMBEDDING_BATCH_SIZE.","status":"closed","priority":1,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:21:26.741148+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T18:40:39.042041+03:00","closed_at":"2026-02-04T18:40:39.042041+03:00","close_reason":"Closed","labels":["performance"]}
{"id":"tea-rags-mcp-iwk","title":"MCP Prompts system for quick teaRAGs agent integration","description":"Add an MCP prompts system for:\n\n1. **Quick teaRAGs integration into agents** - ready-made prompts for Claude/other LLM agents with instructions for using the tea-rags MCP\n\n2. **Code analysis correction** - prompt templates for:\n - Refining search context\n - Filtering results by git metadata\n - Choosing the right rerank preset\n\n3. **Code generation** - prompts for:\n - Generating code based on found patterns\n - Refactoring with codebase context in mind\n - Following the code style of existing code\n\n4. **Explicit triggers** - a triggers/actions system:\n - Automatic choice of search_code vs semantic_search\n - Predefined filters for typical scenarios (tech debt, hotspots, ownership)\n - Combined workflows (search → analyze → suggest)\n\nExample MCP prompts:\n- `explain_code` - search + explain the code found\n- `find_similar` - find similar patterns in the codebase\n- `suggest_refactor` - analysis + refactoring suggestions\n- `check_ownership` - git blame analytics for code review","status":"open","priority":3,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-05T01:42:13.448723+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-05T01:42:13.448723+03:00","labels":["dx"]}
{"id":"tea-rags-mcp-khs","title":"Add scoring/reranking and metaOnly parameters to search tools","description":"## Overview\nAdd reranking capabilities to search tools with unified interface.\n\n## Unified Parameter: `rerank`\n\nSame parameter name for both tools, different presets per use case.\n\n### For semantic_search (analytics)\n\n```typescript\nrerank?: \n | \"relevance\" // default: similarity only\n | \"techDebt\" // old code + breaks often\n | \"hotspots\" // bug hunting: high churn + recent\n | \"codeReview\" // recent changes\n | \"onboarding\" // entry points, documentation, stable code\n | \"securityAudit\" // old code in critical places\n | \"refactoring\" // refactoring candidates\n | \"ownership\" // knowledge transfer: who the expert is\n | \"impactAnalysis\" // what a change will affect\n | { custom: ScoringWeights }\n```\n\n### Agent analytics tasks (semantic_search)\n\n| Preset | Task | What matters | Signals |\n|--------|--------|-----------|---------|\n| `\"techDebt\"` | Tech debt | Old code + breaks often | high ageDays + high churn |\n| `\"hotspots\"` | Bug hunting | Problem areas | high churn + recent changes |\n| `\"codeReview\"` | Code review | Recent changes | low ageDays |\n| `\"onboarding\"` | Onboarding | Entry points, documentation | isDocumentation + low churn |\n| `\"securityAudit\"` | Security audit | Old code in critical places | high ageDays + specific paths |\n| `\"refactoring\"` | Refactoring | Refactoring candidates | high churn + large chunks |\n| `\"ownership\"` | Knowledge transfer | Who the expert is | author concentration |\n| `\"impactAnalysis\"` | Impact analysis | What a change will affect | high dependencies/imports |\n\n### Scoring Formulas (semantic_search)\n\n**\"relevance\"** (default):\n`score = similarity`\n\n**\"techDebt\"**:\n`score = similarity × 0.4 + normalized(ageDays) × 0.3 + normalized(churn) × 0.3`\n\n**\"hotspots\"**:\n`score = similarity × 0.5 + normalized(churn) × 0.3 + normalized(recency) × 0.2`\n\n**\"codeReview\"**:\n`score = similarity × 0.6 + normalized(recency) × 0.4`\n\n**\"onboarding\"**:\n`score = similarity × 0.4 + isDocumentation × 0.3 + normalized(stability) × 0.3`\n\n**\"securityAudit\"**:\n`score = similarity × 0.5 + normalized(ageDays) × 0.3 + pathRisk × 0.2`\n(pathRisk: boost for auth/, security/, crypto/, etc.)\n\n**\"refactoring\"**:\n`score = similarity × 0.4 + normalized(churn) × 0.3 + normalized(chunkSize) × 0.3`\n\n**\"ownership\"**:\n`score = similarity × 0.5 + authorConcentration × 0.5`\n\n**\"impactAnalysis\"**:\n`score = similarity × 0.5 + normalized(importCount) × 0.5`\n\n---\n\n### For search_code (practical development)\n\n```typescript\nrerank?:\n | \"relevance\" // default: similarity only \n | \"recent\" // recent code on the topic\n | \"stable\" // a stable implementation as a reference\n | { custom: ScoringWeights }\n```\n\n### Practical Use Cases for search_code\n\n| Scenario | rerank | Result |\n|----------|--------|-----------|\n| \"How is authorization implemented?\" | omitted / \"relevance\" | Plain semantic search |\n| \"Recent code on this topic\" | \"recent\" | Boost recently changed code |\n| \"A stable implementation as a reference\" | \"stable\" | Boost low-churn code |\n| \"What did Vasya change in this area?\" | filter: author=\"Vasya\" | Filter (already exists) |\n\n### Scoring Formulas (search_code)\n\n**\"relevance\"** (default):\n`score = similarity`\n\n**\"recent\"**:\n`score = similarity × 0.7 + normalized(recency) × 0.3`\n\n**\"stable\"**:\n`score = similarity × 0.7 + normalized(stability) × 0.3`\n\n---\n\n### Custom Weights (advanced, both tools)\n\n```typescript\ninterface ScoringWeights {\n similarity?: number; // default 1.0\n recency?: number; // inverse ageDays (0-1)\n stability?: number; // inverse commitCount (0-1)\n churn?: number; // direct commitCount (0-1)\n age?: number; // direct ageDays (0-1)\n ownership?: number; // author concentration (0-1)\n chunkSize?: number; // lines of code (0-1)\n documentation?: number; // isDocumentation boost\n imports?: number; // import/dependency count\n}\n```\n\n---\n\n## Part 2: metaOnly Parameter\n\nSame for both tools:\n```typescript\nmetaOnly?: boolean // default: false\n```\n\nReturns only metadata without content (for file discovery).\n\n## Files to modify\n- src/tools/schemas.ts - add RerankMode schema (shared + tool-specific)\n- src/tools/search.ts - implement rerank in semantic_search\n- src/tools/code.ts - implement rerank in search_code \n- src/code/reranker.ts (new) - shared reranking logic\n- src/qdrant/client.ts - add payloadSelector for metaOnly","notes":"## metaOnly clarification\n\nmetaOnly parameter - ONLY for semantic_search (analytics), NOT for search_code.\n\n**Use case:** Agent needs to analyze codebase structure without loading content:\n- \"Find all files related to auth\" → get a list of paths\n- \"Which files change often?\" → get git metadata\n- Building file lists for batch operations\n\n**Response when metaOnly=true:**\n```json\n{\n \"score\": 0.87,\n \"relativePath\": \"src/services/auth.ts\",\n \"startLine\": 45,\n \"endLine\": 89,\n \"language\": \"typescript\",\n \"chunkType\": \"function\",\n \"name\": \"handleLogin\",\n \"git\": {\n \"ageDays\": 5,\n \"commitCount\": 12,\n \"dominantAuthor\": \"alice\"\n }\n}\n```\n\nNo `content` field - reduces response size significantly.","status":"closed","priority":1,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T20:39:17.177843+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:03:45.965857+03:00","closed_at":"2026-02-03T00:03:45.965857+03:00","close_reason":"Implemented reranking (presets + custom weights) and metaOnly parameters for search tools","labels":["api"],"dependencies":[{"issue_id":"tea-rags-mcp-khs","depends_on_id":"tea-rags-mcp-lb9.4","type":"blocks","created_at":"2026-02-02T21:03:55.870932+03:00","created_by":"Artur Korochanskii"},{"issue_id":"tea-rags-mcp-khs","depends_on_id":"tea-rags-mcp-lb9.3","type":"blocks","created_at":"2026-02-02T21:04:07.040905+03:00","created_by":"Artur Korochanskii"}]}
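The scoring formulas in the record above are plain weighted sums. A sketch of the "techDebt" preset, assuming each signal is normalized against the maximum within the result set (the issue does not pin down the normalization, so that part is an assumption):

```typescript
// Sketch: techDebt score = similarity × 0.4 + norm(ageDays) × 0.3 + norm(churn) × 0.3
interface Hit {
  similarity: number;
  ageDays: number;
  churn: number;
}

function normalize(value: number, max: number): number {
  return max > 0 ? value / max : 0;
}

function techDebtScore(hit: Hit, maxAge: number, maxChurn: number): number {
  return (
    hit.similarity * 0.4 +
    normalize(hit.ageDays, maxAge) * 0.3 +
    normalize(hit.churn, maxChurn) * 0.3
  );
}

function rerankTechDebt(hits: Hit[]): Hit[] {
  const maxAge = Math.max(...hits.map(h => h.ageDays));
  const maxChurn = Math.max(...hits.map(h => h.churn));
  // Sort descending by the combined score.
  return [...hits].sort(
    (a, b) => techDebtScore(b, maxAge, maxChurn) - techDebtScore(a, maxAge, maxChurn),
  );
}
```

With these weights, an old high-churn file can outrank a more semantically similar but fresh one, which is exactly the preset's intent.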
{"id":"tea-rags-mcp-kql","title":"Add MCP instructions for agent best practices","description":"Add server instructions to MCP initialize response so Claude Code automatically receives best practices for using tea-rags.\n\n## Context\nMCP protocol supports `instructions` field in server options. When client calls `initialize`, server returns instructions that client adds to system prompt.\n\n```typescript\nconst server = new McpServer({\n name: pkg.name,\n version: pkg.version,\n}, {\n instructions: \"Best practices for tea-rags...\"\n});\n```\n\n## Implementation Options\n\n### Option A: Static instructions (hardcoded)\n- Pros: Simple, always available, no config needed\n- Cons: Can't customize per-project, bloats every session\n\n### Option B: Load from file (INSTRUCTIONS.md)\n- Pros: User can customize, version controlled\n- Cons: Need to ship default file, user must manage\n\n### Option C: Dynamic based on indexed projects\n- Pros: Project-specific (collection names, paths, languages)\n- Cons: Complex, needs state, may not have projects at init time\n\n### Option D: Hybrid - static core + dynamic project info\n- Static: decision matrix, anti-patterns, pipeline workflow\n- Dynamic: add indexed project details when available\n- Pros: Best of both, always has basics\n- Cons: More complex implementation\n\n### Option E: Environment variable to enable/customize\n- `TEA_RAGS_INSTRUCTIONS=true` - use default\n- `TEA_RAGS_INSTRUCTIONS=/path/to/file` - use custom\n- Pros: Opt-in, flexible\n- Cons: Extra config step\n\n## Content to include (based on working CLAUDE.local.md)\n\n1. Decision matrix: search_code vs semantic_search vs hybrid_search\n2. Mandatory workflow: tea-rags → verify → read\n3. Rerank presets and when to use\n4. Git metadata filters (author, age, churn, taskId)\n5. Anti-patterns (what NOT to do)\n6. When to use metaOnly\n7. pathPattern examples\n\n## References\n- Working example: /Users/artk0re/Dev/Job/taxdome/CLAUDE.local.md\n- SDK: node_modules/@modelcontextprotocol/sdk/dist/esm/server/index.js (lines 50, 279)","status":"open","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-05T01:54:43.002805+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-05T01:54:43.002805+03:00","labels":["dx"]}
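Option E in the record above reduces to a small resolution rule: unset means opt-out, `true` means the bundled default, any other value is treated as a file path. A hedged sketch (`DEFAULT_INSTRUCTIONS` and `resolveInstructions` are placeholder names, not the server's actual API):

```typescript
import { readFileSync } from "node:fs";

const DEFAULT_INSTRUCTIONS = "Best practices for tea-rags..."; // placeholder text

// TEA_RAGS_INSTRUCTIONS unset → no instructions (opt-in),
// "true" → ship the default, anything else → treat as a file path.
function resolveInstructions(envValue: string | undefined): string | undefined {
  if (!envValue) return undefined;
  if (envValue === "true") return DEFAULT_INSTRUCTIONS;
  return readFileSync(envValue, "utf8");
}
```

The resolved string would then be passed as the `instructions` server option shown in the issue's snippet.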
{"id":"tea-rags-mcp-l30","title":"MAX_TOTAL_CHUNKS not set for tests","description":"MAX_TOTAL_CHUNKS environment variable is not configured for test runs.\n\nThis may cause:\n- Tests using default/production chunk limits\n- Inconsistent test behavior\n- Slow tests due to processing too many chunks\n\nSolution:\n- Set MAX_TOTAL_CHUNKS in test setup/config\n- Or mock chunk limiting in tests\n- Consider adding to jest.setup.ts or vitest.config.ts","status":"closed","priority":2,"issue_type":"bug","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:43:19.090786+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:15:03.996575+03:00","closed_at":"2026-02-03T00:15:03.996575+03:00","close_reason":"Closed","labels":["bugfix"]}
{"id":"tea-rags-mcp-lb9","title":"search_code API improvements","description":"Improve search_code functionality: glob patterns, git metadata reranking, path/pathPattern parameters","status":"closed","priority":2,"issue_type":"epic","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T19:47:47.040996+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:15:25.125627+03:00","closed_at":"2026-02-03T00:15:25.125627+03:00","close_reason":"All subtasks completed: pathPattern glob, git metadata reranking","labels":["api"]}
{"id":"tea-rags-mcp-lb9.1","title":"Verify pathPattern works with glob patterns","description":"Make sure the pathPattern argument of the search_code MCP tool actually works with glob patterns. Write tests if needed.","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T19:47:51.321397+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-02T20:09:08.53135+03:00","closed_at":"2026-02-02T20:09:08.53135+03:00","close_reason":"Investigated: glob works via regex replace, but Qdrant text match is limiting. Real glob matching is needed.","labels":["api"],"dependencies":[{"issue_id":"tea-rags-mcp-lb9.1","depends_on_id":"tea-rags-mcp-lb9","type":"parent-child","created_at":"2026-02-02T19:47:51.321996+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-lb9.2","title":"Investigate and add reranking by git metadata","description":"Investigate the feasibility of reranking search results by git metadata (commit count, age, author, etc.). Implement if feasible.","status":"closed","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T19:47:52.818055+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-02T20:09:09.093977+03:00","closed_at":"2026-02-02T20:09:09.093977+03:00","close_reason":"Investigated: git metadata is stored; post-search reranking is needed. Architecture proposed.","labels":["api"],"dependencies":[{"issue_id":"tea-rags-mcp-lb9.2","depends_on_id":"tea-rags-mcp-lb9","type":"parent-child","created_at":"2026-02-02T19:47:52.818635+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-lb9.3","title":"Add path and pathPattern parameters to search_code","description":"1. Add a path parameter (mirroring other endpoints) - automatically converted to collection\n2. Add a pathPattern parameter for filtering files by GLOB pattern","status":"closed","priority":1,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T19:47:54.304714+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-02T23:57:42.847411+03:00","closed_at":"2026-02-02T23:57:42.847411+03:00","close_reason":"Completed: path already existed, pathPattern added with picomatch glob filtering","labels":["api"],"dependencies":[{"issue_id":"tea-rags-mcp-lb9.3","depends_on_id":"tea-rags-mcp-lb9.4","type":"blocks","created_at":"2026-02-02T20:56:06.672321+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-lb9.4","title":"Fix glob matching and unify pathPattern","description":"## Problem\n\n### Qdrant filter does NOT support glob\n- Only exact match, substring (text), match any\n- No regex, no wildcard, no glob\n- `**/workflow/**` does not work\n\n### search_code\n- pathPattern is converted to regex → Qdrant text match\n- Works unreliably (text match ≠ regex)\n\n### semantic_search\n- pathPattern is missing\n- The agent cannot filter by domain: `**/workflow/**`\n\n## Use Case\n\nDomain search - find all files of the workflow domain:\n- `models/workflow/*`\n- `services/workflow/*`\n- `controllers/workflow/*`\n\n**Pattern needed:** `**/workflow/**`\n\n## Solution: pathPattern with client-side filtering\n\n### Architecture\n\n```\nsrc/qdrant/\n├── client.ts # existing QdrantManager\n├── filters/\n│ ├── index.ts # public API (re-exports)\n│ └── glob.ts # glob matching logic\n```\n\n**Principles:**\n- DRY - one module for all MCP tools\n- Modularity - isolated in the Qdrant domain\n- No dependencies on other project modules\n\n### API (functional approach)\n\n```typescript\n// src/qdrant/filters/glob.ts\nimport picomatch from 'picomatch';\n\n/**\n * Creates a matcher function for glob pattern\n */\nexport function createGlobMatcher(pattern: string): (path: string) =\u003e boolean {\n return picomatch(pattern, { bash: true });\n}\n\n/**\n * Filters search results by glob pattern on relativePath\n * Used for post-filtering Qdrant results when glob not supported natively\n */\nexport function filterResultsByGlob\u003cT extends { payload?: { relativePath?: string } }\u003e(\n results: T[],\n pattern: string,\n): T[] {\n const isMatch = createGlobMatcher(pattern);\n return results.filter(item =\u003e {\n const path = item.payload?.relativePath;\n return path \u0026\u0026 isMatch(path);\n });\n}\n\n// src/qdrant/filters/index.ts\nexport { createGlobMatcher, filterResultsByGlob } from './glob.js';\n```\n\n### MCP tools interface (identical)\n\n```typescript\npathPattern?: string // Glob pattern: \"**/workflow/**\", \"src/**/*.ts\"\n```\n\n### semantic_search - add pathPattern\n\n```typescript\n// schema\n{\n collection: string,\n query: string,\n limit?: number,\n filter?: Record\u003cstring, any\u003e, // raw Qdrant filter (unchanged)\n pathPattern?: string, // NEW: client-side glob filtering\n}\n\n// implementation\nimport { filterResultsByGlob } from '../qdrant/filters/index.js';\n\nconst fetchLimit = pathPattern ? (limit || 5) * 3 : (limit || 5);\nconst results = await qdrant.search(collection, embedding, fetchLimit, filter);\nconst filtered = pathPattern \n ? filterResultsByGlob(results, pathPattern)\n : results;\nreturn filtered.slice(0, limit || 5);\n```\n\n### search_code - fix pathPattern\n\n```typescript\n// Remove the glob→regex conversion\n// Use filterResultsByGlob\n\nimport { filterResultsByGlob } from '../qdrant/filters/index.js';\n\nconst fetchLimit = pathPattern ? limit * 3 : limit;\nconst results = await qdrant.search(...); // no path filter in Qdrant\nconst filtered = pathPattern\n ? filterResultsByGlob(results, pathPattern)\n : results;\nreturn filtered.slice(0, limit);\n```\n\n## Usage examples\n\n```typescript\n// Everything in the workflow domain\npathPattern: \"**/workflow/**\"\n\n// TypeScript only under src\npathPattern: \"src/**/*.ts\"\n\n// Multiple folders (brace expansion)\npathPattern: \"{models,services}/workflow/**\"\n```\n\n## Files\n\n### New\n- src/qdrant/filters/index.ts\n- src/qdrant/filters/glob.ts\n\n### Changes\n- package.json - add picomatch, @types/picomatch\n- src/code/indexer.ts - drop glob→regex, use filterResultsByGlob\n- src/tools/schemas.ts - add pathPattern to SemanticSearchSchema\n- src/tools/search.ts - apply filterResultsByGlob","status":"closed","priority":0,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T20:55:56.681866+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-02T21:18:31.278593+03:00","closed_at":"2026-02-02T21:18:31.278593+03:00","close_reason":"Implemented: picomatch module in src/qdrant/filters/, pathPattern added to semantic_search, fixed in search_code. 100% coverage.","labels":["api"]}
{"id":"tea-rags-mcp-lmk","title":"Add minBatchSize to BatchAccumulator","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T19:48:31.938452+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T20:21:04.575821+03:00","closed_at":"2026-02-04T20:21:04.575821+03:00","close_reason":"Added minBatchSize to BatchAccumulatorConfig with two-phase timeout: first timeout defers if below min, second timeout forces flush. 7 new tests, all 1045 tests pass, clean build.","labels":["performance"]}
{"id":"tea-rags-mcp-mhw","title":"DX: Add coverage check to Stop hook","description":"Problem: No coverage verification in Claude workflow or githooks.\n\nSolution:\n1. Stop hook that checks coverage \u003e= threshold\n2. Optional: husky pre-commit hook\n\nFiles to create:\n- .claude/hooks/check-coverage.sh\n- Update .claude/settings.json\n- Optional: .husky/pre-commit","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:57:17.274317+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:02:34.542599+03:00","closed_at":"2026-02-03T00:02:34.542599+03:00","close_reason":"Created .claude/hooks/check-coverage.sh (70% threshold by default) and updated .husky/pre-commit with optional coverage check (CHECK_COVERAGE=1)","labels":["dx"]}
{"id":"tea-rags-mcp-mmt","title":"Multi-tenant architecture: centralized index with local overlay","description":"Cursor-like multi-tenant embedding index for tea-rags (thin client model).\n\n## Architecture\n\n### Central Index (Server)\n- Full embedding index built from default branch (main)\n- Обновляется через CI/CD webhook или по расписанию\n- Read-only для клиентов — клиенты только забирают результаты, никогда не пушат\n- API: semantic search endpoint с OAuth-авторизацией\n- Хранилище: Qdrant (или совместимое vector DB)\n\n### Local Delta (Client)\n- Клиент определяет base commit центрального индекса\n- git diff между base commit и рабочим деревом → список изменённых файлов\n- Только эти файлы индексируются локально (embeddings хранятся на клиенте)\n- При checkout/reset/stash/stash pop → инвалидация затронутых локальных эмбеддингов\n\n### Search Flow\n1. Клиент отправляет запрос на центральный сервер → получает результаты\n2. Клиент делает поиск по локальной дельте\n3. Merge:\n - Результаты из центрального индекса для изменённых файлов → заменяются локальными\n - Результаты для удалённых файлов → исключаются\n - Результаты для новых файлов → добавляются из локального индекса\n4. Dedup + rerank объединённого набора на клиенте\n\n### Auth\n- OAuth2: custom IdP, Google, Okta\n- Token-based API access\n- Rate limiting per user/org\n\n### Git-aware Invalidation\n- Отслеживаемые события: checkout, reset, stash, stash pop, pull, merge\n- Механизм: git hooks или fs watcher на .git/HEAD + .git/refs\n- Base commit tracking: клиент знает какой commit был последним проиндексирован на сервере\n\n## Key Properties\n- Локальный код никогда не покидает клиент\n- Центральный индекс — единственный source of truth для закоммиченного кода\n- Минимальный storage на клиенте (только delta embeddings)\n- Latency: один сетевой запрос (к центральному) + локальный поиск по дельте\n\n## Open Questions\n- Как клиент узнаёт base commit центрального индекса? (API endpoint? 
manifest file?)\n- Local delta storage format (in-memory vs sqlite vs qdrant-lite)?\n- Is a partial sync of the central index needed when switching branches?\n- How should large PRs with 100+ changed files be handled?","notes":"## Reusability Analysis (2026-02-04)\n\n~70% current code reusable for thin-client multi-tenant.\n\n### 100% reuse (as-is)\n- Chunker (tree-sitter AST): src/code/chunker/\n- Metadata extractor: src/code/metadata.ts\n- Reranker (all presets): src/code/reranker.ts\n- Embedding providers: src/embeddings/\n- BM25 sparse vectors: src/embeddings/sparse.ts\n- Qdrant filters: src/qdrant/filters/\n- Tool schemas: src/tools/schemas.ts\n\n### 80-95% reuse (minor adaptation)\n- Qdrant client (needs dual instance: remote+local): src/qdrant/\n- ChunkPipeline (batching reusable): src/code/pipeline/\n- GitMetadataService (add getChangedFilesSinceCommit): src/code/git/\n\n### 40% reuse (major refactor)\n- CodeIndexer → split into CentralIndexer + DeltaIndexer + MergedSearcher\n\n### 0% reuse (replace entirely)\n- Sync/Snapshots system (replace with git diff based delta detection)\n\n### New components needed\n1. Git-based delta detection (git diff --name-only baseCommit..HEAD)\n2. Search merger (merge + dedup central + delta results)\n3. Base commit tracking (server ↔ client sync)\n4. OAuth2 auth layer\n5. Server search API endpoint\n\n### Key insight\nCurrent delta indexing uses content-hash + mtime, NOT git.\nFor thin-client model, delta = git diff from central index's base commit.","status":"open","priority":4,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T05:34:12.592954+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T05:40:16.358009+03:00","labels":["scaling"]}
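The merge rules from the Search Flow section above reduce to pure data manipulation. `mergeResults` and the `Result` shape are hypothetical; real code would dedup by chunk id rather than by path:

```typescript
type Result = { path: string; score: number };

// Merge central-index results with local delta results:
// - central results for changed or deleted files are dropped as stale,
// - local delta results fill in changed and new files,
// - the merged set is deduped by path and re-sorted by score.
function mergeResults(
  central: Result[],
  localDelta: Result[],
  changedFiles: Set<string>,
  deletedFiles: Set<string>,
): Result[] {
  const merged = new Map<string, Result>();
  for (const r of central) {
    if (deletedFiles.has(r.path) || changedFiles.has(r.path)) continue; // stale
    merged.set(r.path, r);
  }
  for (const r of localDelta) {
    const prev = merged.get(r.path);
    if (!prev || r.score > prev.score) merged.set(r.path, r);
  }
  return [...merged.values()].sort((a, b) => b.score - a.score);
}
```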
{"id":"tea-rags-mcp-nyg","title":"Add complexity metrics","description":"## Goal\nCalculate cyclomatic and cognitive complexity from existing AST.\n\n## Implementation\n```typescript\nfunction calculateComplexity(node: SyntaxNode): { cyclomatic: number; cognitive: number } {\n let cyclomatic = 1;\n let cognitive = 0;\n let nestingLevel = 0;\n \n const branchTypes = ['if_statement', 'for_statement', 'while_statement', \n 'switch_case', 'catch_clause', 'conditional_expression'];\n \n function traverse(n: SyntaxNode, depth: number) {\n if (branchTypes.includes(n.type)) {\n cyclomatic++;\n cognitive += 1 + depth; // nesting penalty\n }\n for (const child of n.children) {\n traverse(child, depth + (isNestingNode(n) ? 1 : 0));\n }\n }\n \n traverse(node, 0);\n return { cyclomatic, cognitive };\n}\n```\n\n## Schema Changes\n- Add `complexity.cyclomatic: number`\n- Add `complexity.cognitive: number`\n\n## Estimated Cost\n- ~0.1-0.2ms per chunk (reuses existing AST)\n- Essentially free","status":"open","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:49.222498+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:00:49.222498+03:00","labels":["metrics"]}
{"id":"tea-rags-mcp-o02","title":"Выставить дефолты переменных по рекомендациям PERFORMANCE_TUNING для MacBook","description":"Проверить рекомендации из PERFORMANCE_TUNING для локального MacBook сетапа и обновить дефолтные значения в коде. Согласовать с пользователем конкретные значения.","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:21:35.804993+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T19:20:33.442529+03:00","closed_at":"2026-02-04T19:20:33.442529+03:00","close_reason":"Closed","labels":["performance"]}
{"id":"tea-rags-mcp-p3a","title":"Fix duplicate pipeline log files per session","description":"Multiple pipeline log files created per single reindex session.\n\n## Symptom\n3 log files created within 1 second:\n- pipeline-...-39-403Z.log (992 bytes, header only)\n- pipeline-...-39-588Z.log (992 bytes, header only)\n- pipeline-...-40-474Z.log (27KB, actual data)\n\n## Root Cause\nDebugLogger is instantiated at module scope:\n export const pipelineLog = new DebugLogger();\nConstructor creates log file immediately if DEBUG=1.\n\nModule is imported from 3 places:\n- src/code/indexer.ts\n- src/code/pipeline/chunk-pipeline.ts\n- src/code/pipeline/index.ts (re-export)\n\nIf ESM resolves these as separate module instances (different import paths or specifier quirks), each creates its own log file. Only one instance actually receives pipeline events.\n\n## Fix Options\n1. Lazy init: create log file on first write, not in constructor\n2. Singleton with deduplication: check if log file already exists for current session\n3. Merge outputs at shutdown: collect all log paths and concatenate at session end\n\nOption 1 (lazy init) is the cleanest — no file created until first actual log entry.","status":"open","priority":3,"issue_type":"bug","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T21:12:34.674803+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T21:12:34.674803+03:00","labels":["bugfix"]}
{"id":"tea-rags-mcp-pft","title":"Add parallel processing workers","description":"## Goal\nUtilize multi-core for CPU-bound operations.\n\n## Worker Pools\n```typescript\nimport { Worker } from 'worker_threads';\n\n// CPU-bound: parsing, complexity\nconst cpuPool = new WorkerPool({\n size: os.cpus().length,\n worker: './workers/parse-worker.js'\n});\n\n// I/O-bound: git, network\nconst ioPool = new WorkerPool({\n size: 4,\n worker: './workers/io-worker.js'\n});\n```\n\n## Pipeline\n```typescript\nconst results = await pipeline(files)\n .parallel(cpuPool, parseAndAnalyze) // parallel parsing\n .batch(100) // batch for API\n .sequential(generateEmbeddings) // batched API (already optimized)\n .parallel(ioPool, upsertBatch); // parallel Qdrant writes\n```\n\n## Expected Speedup\n- 2-4x on typical 4-8 core machines\n- Linear scaling with core count for parse phase","status":"open","priority":3,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:01:03.383632+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:01:03.383632+03:00","labels":["scaling"],"dependencies":[{"issue_id":"tea-rags-mcp-pft","depends_on_id":"tea-rags-mcp-h4v","type":"blocks","created_at":"2026-02-03T02:03:38.491152+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-sd4","title":"Roadmap: Advanced Metrics \u0026 Indexing Optimizations","description":"## Overview\n\nAdd production-grade code intelligence metrics with optimized indexing pipeline.\n\n## New Metrics\n\n### 1. Real Churn (fileChurnCount)\n- Current: `commitCount` from blame (only shows last commit per line)\n- Target: `git log` based true change frequency\n- Granularity: File-level (inherited by chunks)\n\n### 2. Dependency Graph (importedBy)\n- Current: `imports` (outgoing dependencies)\n- Target: `importedBy` (incoming dependencies = blast radius)\n- Enables: True Change Risk Score = Churn × Blast Radius\n\n### 3. Complexity Metrics\n- Cyclomatic complexity (branch count)\n- Cognitive complexity (nesting + branches)\n- Calculated from existing tree-sitter AST (near-zero cost)\n\n## Optimization Strategies\n\n### Phase 1: Bulk Git Operations\n- Single `git log --all --numstat` instead of per-file calls\n- Parse once → build in-memory maps\n- Expected: 10-50x speedup for git operations\n\n### Phase 2: Two-Phase Indexing\n- Fast phase: parse, chunk, embed, basic metadata → searchable in 30s\n- Background phase: git history, dependency graph, complexity\n- Non-blocking search while enrichment runs\n\n### Phase 3: Dependency Graph Single Pass\n- Collect imports during chunking (already done)\n- Invert to importedBy after all files processed\n- O(n) in-memory, zero extra I/O\n\n### Phase 4: Git Data Caching\n- Cache keyed by HEAD commit hash\n- Incremental updates for changed files only\n- Expected: 10-100x faster reindex\n\n### Phase 5: Parallel Workers\n- CPU pool for parsing/complexity\n- I/O pool for git/qdrant\n- Expected: 2-4x on multi-core\n\n### Phase 6: History Sampling\n- Option to limit git log depth (last 100 commits or 1 year)\n- Predictable time, ~90% accuracy\n\n### Phase 7: Lazy Enrichment\n- On-demand git data fetch when rerank preset needs it\n- Don't pay for unused metrics\n\n## Expected Results\n\n| Metric | Current | Target 
|\n|--------|---------|--------|\n| Time to searchable | 2 min | 30 sec |\n| Full enrichment | N/A | 3-5 min (bg) |\n| Reindex (no changes) | 2 min | 5 sec |\n| Reindex (10 files) | 2 min | 15 sec |\n\n## Rerank Presets Enabled\n\n```json\n{\"rerank\": {\"custom\": {\"churn\": 0.3, \"complexity\": 0.3, \"importedBy\": 0.4}}}\n```\n\nTrue Change Risk Score = High Churn × High Complexity × High Blast Radius","status":"open","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:31.606787+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:00:31.606787+03:00","labels":["metrics"]}
{"id":"tea-rags-mcp-sst","title":"Добавить QDRANT_UPSERT_BATCH_SIZE и BATCH_FORMATION_TIMEOUT_MS в бенчмарк npm run tune","description":"В бенчмарке tune нужно: 1. Измерять влияние QDRANT_UPSERT_BATCH_SIZE на производительность. 2. Измерять влияние BATCH_FORMATION_TIMEOUT_MS. 3. Выводить рекомендации по оптимальным значениям.","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:21:33.673484+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T20:45:15.043507+03:00","closed_at":"2026-02-04T20:45:15.043507+03:00","close_reason":"Stage profiling and batch formation timeout benchmark implemented and committed","labels":["performance"]}
{"id":"tea-rags-mcp-tnm","title":"Add fileChurnCount from git log","description":"## Goal\nReplace inaccurate blame-based commitCount with true churn from git log.\n\n## Implementation\n```typescript\n// Single bulk call\nconst log = execSync('git log --all --numstat --format=\"%H|%an|%ae|%at\"');\n\n// Parse into map\nconst fileChurn: Map\u003cstring, number\u003e = parseGitLog(log);\n\n// Add to chunk metadata\nmetadata.fileChurnCount = fileChurn.get(relativePath) ?? 0;\n```\n\n## Schema Changes\n- Add `fileChurnCount: number` to payload\n- Keep `commitCount` for backwards compatibility (deprecate later)\n\n## Estimated Cost\n- +100ms per 1000 files\n- File-level granularity (chunks inherit)","status":"open","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-03T02:00:39.99334+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T02:00:39.99334+03:00","labels":["performance"]}
{"id":"tea-rags-mcp-v60","title":"DX: Validate MCP schema on changes","description":"Problem: Claude doesn't automatically validate/update MCP tools schema.\n\nSolution:\n1. PostToolUse hook for schema validation\n2. Rules for schema standards\n\nFiles to create:\n- .claude/hooks/validate-mcp-schema.sh\n- .claude/rules/schemas.md\n- Update .claude/settings.json","status":"closed","priority":2,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:57:18.61852+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:03:15.301028+03:00","closed_at":"2026-02-03T00:03:15.301028+03:00","close_reason":"Created .claude/hooks/validate-mcp-schema.sh (type check on schema changes), .claude/rules/schemas.md (Zod schema standards), and updated settings.json","labels":["dx"]}
{"id":"tea-rags-mcp-w7i","title":"DX: Add MCP structure rules for src/tools","description":"Problem: Claude doesn't add code to src/tools when modifying tool logic.\n\nSolution: Create .claude/rules/mcp-structure.md with:\n- File organization rules (schemas.ts, handlers, tests)\n- Checklist for tool changes\n- Path-specific instructions for src/tools/**\n\nFiles to create:\n- .claude/rules/mcp-structure.md","status":"closed","priority":1,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:57:11.855781+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:00:10.683358+03:00","closed_at":"2026-02-03T00:00:10.683358+03:00","close_reason":"Created .claude/rules/mcp-structure.md with file organization, schema standards, handler patterns, and checklist","labels":["dx"]}
{"id":"tea-rags-mcp-wf4","title":"Add tool to generate search strategy config for CLAUDE.md","description":"Add a tool that helps users configure optimal search strategy for their MCP setup.\n\n## Problem\nUsers want to configure search workflow (semantic → structure → exact text) but:\n- Don't know which MCP servers they have available\n- Don't know best practices for combining tools\n- Can't copy working configs from other projects\n\n## Solution: `generate_search_strategy` tool\n\n### Input parameters (all optional)\n```typescript\n{\n // MCP servers available (if known)\n servers?: {\n semantic?: string; // e.g., \"tea-rags\", \"qdrant-mcp\"\n ast?: string; // e.g., \"tree-sitter\" \n grep?: string; // e.g., \"ripgrep\", \"grep\"\n filesystem?: string; // e.g., \"filesystem\"\n };\n \n // Project context (optional)\n projectType?: \"rails\" | \"node\" | \"python\" | \"go\" | \"generic\";\n projectPath?: string;\n \n // Output format\n format?: \"full\" | \"minimal\" | \"template\";\n}\n```\n\n### Output\nReturns markdown text ready to paste into CLAUDE.md:\n\n1. **If servers specified** → Ready-to-use config with actual tool names\n2. **If servers unknown** → Template with `{{SEMANTIC_TOOL}}` placeholders\n3. **If projectPath specified** → Adds path patterns for that project structure\n\n### Content structure\n```markdown\n## SEARCH_STRATEGY\n\n### Decision matrix\n| Situation | Tool | Why |\n|-----------|------|-----|\n| \"Find code related to X\" | {{SEMANTIC_TOOL}} | Intent-based |\n| \"What methods does class have\" | {{AST_TOOL}} | Structure |\n| \"Verify exact string exists\" | {{GREP_TOOL}} | Literal match |\n\n### Mandatory workflow\n1. {{SEMANTIC_TOOL}} — discovery, candidates\n2. {{AST_TOOL}} — structure analysis (optional)\n3. {{GREP_TOOL}} — verification\n4. 
Read files — confirm\n\n### Anti-patterns\n- ❌ Using grep as first step for understanding code\n- ❌ Skipping semantic search because \"I know the file\"\n- ❌ Trusting similarity scores as proof\n```\n\n### Format options\n\n**full** (default): Complete config with explanations, examples, anti-patterns\n**minimal**: Just decision matrix and workflow\n**template**: Placeholders only, user fills in tool names\n\n## Implementation notes\n\n1. Tool returns text, doesn't write files (user decides where to put it)\n2. Can detect available tools by checking MCP context (if possible)\n3. Include tea-rags specific sections when tea-rags is the semantic tool\n4. Generic enough to work with other semantic search MCPs\n\n## Related\n- tea-rags-mcp-kql: MCP instructions for agent best practices\n- These could share the same content templates","status":"open","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-05T01:57:15.156968+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-05T01:57:15.156968+03:00","labels":["dx"],"dependencies":[{"issue_id":"tea-rags-mcp-wf4","depends_on_id":"tea-rags-mcp-kql","type":"blocks","created_at":"2026-02-05T01:57:19.683658+03:00","created_by":"Artur Korochanskii"}]}
{"id":"tea-rags-mcp-wr0","title":"Document semantic_search MCP tool filter capabilities","description":"Update semantic_search tool description to explain:\n\n1. The filter parameter accepts standard Qdrant filter format (must/should/must_not with match/range conditions)\n2. MUST list available metadata fields with their types:\n - For generic documents (via add_documents): user-defined metadata fields\n - For code chunks (via index_codebase):\n * relativePath (string)\n * fileExtension (string)\n * language (string)\n * startLine (number)\n * endLine (number)\n * chunkIndex (number)\n * isDocumentation (boolean)\n * name (string, optional)\n * chunkType (string: function|class|interface|block, optional)\n * parentName (string, optional)\n * parentType (string, optional)\n * git.dominantAuthor (string)\n * git.dominantAuthorEmail (string)\n * git.authors (string[])\n * git.lastModifiedAt (number, unix timestamp)\n * git.firstCreatedAt (number, unix timestamp)\n * git.commitCount (number)\n * git.ageDays (number)\n * git.lastCommitHash (string)\n * git.taskIds (string[])\n\nDo NOT include filter examples or external links.\n\nFiles: src/tools/search.ts, src/tools/schemas.ts","status":"closed","priority":3,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T20:25:59.640683+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:15:03.998349+03:00","closed_at":"2026-02-03T00:15:03.998349+03:00","close_reason":"Closed","labels":["docs"]}
{"id":"tea-rags-mcp-wtw","title":"DX: Enforce TDD with skill and Stop hook","description":"Problem: Claude doesn't work by TDD, doesn't cover features with tests.\n\nSolution:\n1. TDD skill with workflow instructions\n2. Stop hook that blocks completion if tests fail\n\nFiles to create:\n- .claude/skills/tdd/SKILL.md\n- Update .claude/settings.json with Stop hook for npm test","status":"closed","priority":1,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:57:15.453009+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:01:58.637348+03:00","closed_at":"2026-02-03T00:01:58.637348+03:00","close_reason":"Created .claude/skills/tdd/SKILL.md and added Stop hook to run tests before completion","labels":["dx"]}
{"id":"tea-rags-mcp-wya","title":"DEBUG=1: добавить профилирование этапов индексирования с итоговым отчётом","description":"При DEBUG=1 считать суммарное время каждого этапа индексирования/реиндексирования и выводить в конце итоговый отчёт с процентами. Этапы: обход файлов, AST-парсинг, сбор git-метаданных, эмбеддинг чанков, запись в Qdrant. Формат вывода — таблица или список с абсолютным временем и % от общего.","status":"closed","priority":2,"issue_type":"feature","owner":"akorochanskij@taxdome.com","created_at":"2026-02-04T18:00:47.893809+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-04T20:45:15.034797+03:00","closed_at":"2026-02-04T20:45:15.034797+03:00","close_reason":"Stage profiling and batch formation timeout benchmark implemented and committed","labels":["performance"]}
{"id":"tea-rags-mcp-y1l","title":"Update README: Jina embeddings performance/quality ratio","description":"In README.md 'Why Jina Embeddings' section, add mention that Jina embeddings provide the best performance/quality ratio for local setups.\n\nKey points to add:\n- Best balance between speed and quality for local/on-premise deployments\n- Optimized for running on CPU (relevant for Ollama users)\n- Good quality without requiring expensive GPU infrastructure","status":"closed","priority":3,"issue_type":"task","owner":"akorochanskij@taxdome.com","created_at":"2026-02-02T23:53:11.848624+03:00","created_by":"Artur Korochanskii","updated_at":"2026-02-03T00:15:03.999806+03:00","closed_at":"2026-02-03T00:15:03.999806+03:00","close_reason":"Closed","labels":["docs"]}