Skip to main content
Glama

CodeGraph CLI MCP Server

by Jakedismo
PERFORMANCE_ANALYSIS.md14.9 kB
# CodeGraph Performance Analysis & Optimization Opportunities ## Executive Summary CodeGraph has good foundations but several critical performance bottlenecks: 1. **CRITICAL**: FAISS indexes loaded from disk on EVERY search (100-500ms overhead) 2. **CRITICAL**: No embedding generator caching (recreated each search) 3. **HIGH**: Brute-force Flat index (not scalable beyond 100K vectors) 4. **MEDIUM**: Sequential shard searching (could be parallel) 5. **MEDIUM**: No query result caching ## FAISS Index Analysis ### Current Implementation ❌ **Index Type:** `FlatIndex` with Inner Product metric - ✅ **Pros**: 100% accurate, simple, no training required - ❌ **Cons**: O(n) search complexity, not scalable, slow for large datasets **Index Loading:** Disk I/O on every search ```rust // server.rs line 321 - CRITICAL ISSUE let mut index = read_index(index_path.to_string_lossy())?; // LOADS FROM DISK EVERY TIME! ``` **Performance Impact:** - **Small codebase** (1K vectors): 10-50ms per index load → 50-250ms total (5-10 shards) - **Medium codebase** (10K vectors): 50-200ms per index load → 250-1000ms total - **Large codebase** (100K+ vectors): 200-500ms per index load → 1-5 seconds total ### Optimal Implementation ✅ **Index Caching:** ```rust // Use lazy_static or OnceCell to cache loaded indexes static INDEX_CACHE: Lazy<DashMap<String, Arc<FlatIndex>>> = Lazy::new(|| DashMap::new()); pub async fn bin_search_with_scores_cached(...) { let index = INDEX_CACHE.entry(index_path.to_string()) .or_insert_with(|| { Arc::new(read_index(index_path).unwrap()) }) .clone(); // Use cached index - NO disk I/O! } ``` **Performance Gain:** 10-50x faster (5ms vs 250ms for medium codebase) **Index Type Upgrade:** - **Current**: `FlatIndex` (brute-force O(n)) - **Better**: `IndexIVFFlat` (clustered search O(sqrt(n))) - **Best**: `IndexHNSWFlat` (graph-based O(log(n))) **For typical codebases:** | Codebase Size | FlatIndex | IVFFlat | HNSWFlat | |---------------|-----------|---------|----------| | 1K vectors | 5ms | 2ms | 1ms | | 10K vectors | 50ms | 8ms | 3ms | | 100K vectors | 500ms | 25ms | 5ms | | 1M vectors | 5000ms | 80ms | 10ms | **Recommendation:** - Keep `FlatIndex` for small codebases (<10K vectors) - simplest - Add index caching (critical!) - Consider `IndexIVFFlat` for codebases >50K vectors ## Embedding Generator Analysis ### Current Implementation ❌ **Problem:** New generator created for EACH search ```rust // server.rs line 302-303 - CRITICAL ISSUE let embedding_gen = codegraph_vector::EmbeddingGenerator::with_auto_from_env().await; let e = embedding_gen.generate_text_embedding(&query).await?; ``` **Impact:** - **LM Studio**: 50-200ms to initialize connection - **Ollama**: 20-100ms to initialize - **ONNX**: 500-2000ms to load model into memory! **For 10 searches:** 5-20 seconds wasted on initialization! ### Optimal Implementation ✅ **Cache the generator:** ```rust static EMBEDDING_GENERATOR: OnceCell<Arc<EmbeddingGenerator>> = OnceCell::new(); pub async fn get_embedding_generator() -> Arc<EmbeddingGenerator> { EMBEDDING_GENERATOR.get_or_init(|| async { Arc::new(EmbeddingGenerator::with_auto_from_env().await) }).await.clone() } ``` **Performance Gain:** 10-100x faster per search (5ms vs 500ms for ONNX) ## Search Performance Breakdown ### Current Flow (Total: 300-600ms) 1. **Create embedding generator**: 50-500ms ❌ 2. **Load FAISS indexes from disk**: 100-500ms ❌ 3. **Generate query embedding**: 10-50ms ✅ 4. **Search indexes**: 5-50ms ✅ 5. **Load nodes from RocksDB**: 10-30ms ✅ 6. **Format results**: 5-10ms ✅ ### Optimized Flow (Total: 30-140ms) 1. **Get cached embedding generator**: 0.1ms ✅ 2. **Get cached FAISS indexes**: 1-5ms ✅ 3. **Generate query embedding**: 10-50ms ✅ 4. **Search indexes (parallel)**: 5-30ms ✅ 5. **Load nodes from RocksDB**: 10-30ms ✅ 6. **Format results**: 5-10ms ✅ **Total Speedup:** 5-10x faster (300-600ms → 30-140ms) ## CLI UX Analysis ### Current State ✅ **Good:** - ✅ Progress bars with indicatif (clean TUI) - ✅ Color coding with colored crate - ✅ Performance metrics display - ✅ Configurable workers and batch sizes - ✅ RUST_LOG=warn for clean output **Missing:** - ❌ No search result caching - ❌ No query history - ❌ No autocomplete/suggestions - ❌ No result pagination (all results dumped at once) - ❌ No interactive mode - ❌ No timing breakdown (where is time spent?) ### Recommended Improvements #### 1. Add Timing Breakdown (Priority: HIGH) ```bash codegraph search "authentication" --timing Results (5 matches): ┌─────────────────────────────────────┐ │ Performance Breakdown: │ │ • Embedding generation: 15ms │ │ • Index search: 8ms │ │ • Node loading: 22ms │ │ • Formatting: 3ms │ │ Total: 48ms │ └─────────────────────────────────────┘ ``` #### 2. Result Caching (Priority: HIGH) ```bash codegraph search "authentication" # First call: 300ms codegraph search "authentication" # Cached: 5ms ⚡ 🎯 Cache hit! (Saved 295ms) ``` #### 3. Interactive Mode (Priority: MEDIUM) ```bash codegraph shell codegraph> search authentication ... results ... codegraph> neighbors <uuid> ... dependencies ... codegraph> exit ``` #### 4. Better Result Display (Priority: MEDIUM) ```bash # Current: Dumps all results as JSON # Better: Formatted table with highlights codegraph search "auth" --limit 5 ╔═══════════════════════════════════════════════════════════════╗ ║ Search: "auth" (5 results in 48ms) ║ ╠═══════════════════════════════════════════════════════════════╣ ║ 1. authenticate_user [similarity: 0.92] ║ ║ src/auth/service.rs:45 ║ ║ Validates user credentials against database ║ ╟───────────────────────────────────────────────────────────────╢ ║ 2. verify_token [similarity: 0.87] ║ ║ src/auth/middleware.rs:23 ║ ║ JWT token verification middleware ║ ╚═══════════════════════════════════════════════════════════════╝ Use --json for JSON output ``` ## Detailed Optimization Recommendations ### Priority 0: Critical (Do Immediately) #### 1. Cache FAISS Indexes **Impact:** 10-50x speedup on repeated searches **Effort:** 2 hours **Files:** `crates/codegraph-mcp/src/server.rs` ```rust use dashmap::DashMap; use once_cell::sync::Lazy; static INDEX_CACHE: Lazy<DashMap<PathBuf, Arc<Box<dyn Index>>>> = Lazy::new(|| DashMap::new()); fn get_cached_index(path: &Path) -> anyhow::Result<Arc<Box<dyn Index>>> { if let Some(cached) = INDEX_CACHE.get(path) { return Ok(cached.clone()); } let index = read_index(path.to_string_lossy())?; let arc_index = Arc::new(index); INDEX_CACHE.insert(path.to_path_buf(), arc_index.clone()); Ok(arc_index) } ``` #### 2. Cache Embedding Generator **Impact:** 10-100x speedup on initialization **Effort:** 1 hour **Files:** `crates/codegraph-mcp/src/server.rs` ```rust static EMBEDDING_GEN: OnceCell<Arc<EmbeddingGenerator>> = OnceCell::new(); async fn get_embedding_generator() -> Arc<EmbeddingGenerator> { EMBEDDING_GEN.get_or_init(|| async { Arc::new(EmbeddingGenerator::with_auto_from_env().await) }).await.clone() } ``` ### Priority 1: High (Do Soon) #### 3. Parallel Shard Searching **Impact:** 2-3x speedup when searching multiple shards **Effort:** 3 hours **Files:** `crates/codegraph-mcp/src/server.rs` ```rust use tokio::task; let search_tasks: Vec<_> = shard_paths .iter() .map(|(idx_path, ids_path)| { task::spawn(async move { search_cached_index(idx_path, ids_path, &emb, limit).await }) }) .collect(); let results = futures::future::join_all(search_tasks).await; ``` #### 4. Add Query Result Caching **Impact:** 100x speedup on repeated queries **Effort:** 2 hours **Files:** `crates/codegraph-mcp/src/server.rs` ```rust use lru::LruCache; static QUERY_CACHE: Lazy<Mutex<LruCache<String, Value>>> = Lazy::new(|| Mutex::new(LruCache::new(100))); pub async fn search_with_cache(query: String) -> Result<Value> { let cache_key = format!("{}:{}:{}", query, paths, langs); if let Some(cached) = QUERY_CACHE.lock().await.get(&cache_key) { return Ok(cached.clone()); } let result = bin_search_with_scores(query, paths, langs, limit).await?; QUERY_CACHE.lock().await.put(cache_key, result.clone()); Ok(result) } ``` #### 5. Add Performance Timing **Impact:** Better visibility into bottlenecks **Effort:** 2 hours **Files:** `crates/codegraph-mcp/src/bin/codegraph.rs` ```rust #[derive(Debug)] struct SearchTiming { embedding_ms: u64, index_search_ms: u64, node_loading_ms: u64, formatting_ms: u64, } // Display after search println!("Performance:"); println!(" Embedding: {}ms", timing.embedding_ms); println!(" Index search: {}ms", timing.index_search_ms); println!(" Node loading: {}ms", timing.node_loading_ms); println!(" Total: {}ms", timing.total()); ``` ### Priority 2: Medium (Nice to Have) #### 6. Upgrade to IVF Index for Large Codebases **Impact:** 10x speedup for 100K+ vectors **Effort:** 8 hours **Files:** `crates/codegraph-mcp/src/indexer.rs` ```rust // Automatically choose index type based on size let index = if vectors.len() < 10_000 { FlatIndex::new_ip(dimension) // Small: use flat } else if vectors.len() < 100_000 { // Medium: use IVF with 100 clusters let mut ivf = IndexIVFFlat::new_ip(dimension, 100)?; ivf.train(vectors)?; ivf } else { // Large: use IVF with sqrt(n) clusters let nlist = (vectors.len() as f64).sqrt() as usize; let mut ivf = IndexIVFFlat::new_ip(dimension, nlist)?; ivf.train(vectors)?; ivf } ``` #### 7. Interactive Shell Mode **Impact:** Better UX for exploratory analysis **Effort:** 6 hours **Files:** `crates/codegraph-mcp/src/bin/codegraph.rs` ```rust Commands::Shell => { start_interactive_shell().await?; } async fn start_interactive_shell() -> Result<()> { use rustyline::Editor; let mut rl = Editor::<()>::new()?; loop { match rl.readline("codegraph> ") { Ok(line) => { let args: Vec<_> = line.split_whitespace().collect(); match args[0] { "search" => { /* handle search */ }, "exit" => break, _ => println!("Unknown command"), } } Err(_) => break, } } Ok(()) } ``` #### 8. Better Result Formatting **Impact:** Easier to read results **Effort:** 4 hours **Files:** `crates/codegraph-mcp/src/bin/codegraph.rs` ```rust use prettytable::{Table, Row, Cell}; fn format_search_results(results: &[SearchResult]) -> String { let mut table = Table::new(); table.add_row(row!["#", "Name", "File", "Score"]); for (i, result) in results.iter().enumerate() { table.add_row(row![ i+1, result.name, result.file, format!("{:.2}", result.score) ]); } table.to_string() } ``` ## Performance Benchmarks (Expected) ### Search Performance (Medium Codebase: 10K vectors) | Optimization | Time | Speedup | |--------------|------|---------| | Current (no cache) | 450ms | 1x | | + Index caching | 50ms | 9x | | + Generator caching | 40ms | 11x | | + Parallel shards | 25ms | 18x | | + Result caching (hit) | 2ms | 225x | ### First Search (Cold Start) | Optimization | Time | Speedup | |--------------|------|---------| | Current | 450ms | 1x | | + Parallel shards | 280ms | 1.6x | | + IVF index | 180ms | 2.5x | ## Implementation Priority ### Phase 1: Critical Performance (Week 1) 1. ✅ Cache FAISS indexes (P0) 2. ✅ Cache embedding generator (P0) 3. ✅ Add performance timing (P1) **Expected Impact:** 10-20x speedup on repeated searches ### Phase 2: Parallel & Caching (Week 2) 4. ✅ Parallel shard searching (P1) 5. ✅ Query result caching (P1) **Expected Impact:** 3-5x additional speedup ### Phase 3: Advanced Features (Week 3-4) 6. ⏳ IVF index for large codebases (P2) 7. ⏳ Interactive shell mode (P2) 8. ⏳ Better result formatting (P2) **Expected Impact:** Better UX + 10x for large codebases ## Memory Considerations ### Index Caching Memory Usage **Flat Index:** ~4 bytes per vector dimension - 10K vectors × 1536 dim × 4 bytes = **60 MB** - 100K vectors × 1536 dim × 4 bytes = **600 MB** **With Sharding (5-10 shards):** - Total cached: **300MB - 6GB** **Recommendation:** - Add max cache size limit (default: 2GB) - Add LRU eviction policy - Add memory monitoring ```rust static INDEX_CACHE: Lazy<LruCache<PathBuf, Arc<Box<dyn Index>>>> = Lazy::new(|| LruCache::with_memory_limit(2_000_000_000)); // 2GB ``` ## Testing Plan ### Performance Tests ```bash # Benchmark current vs optimized cargo bench --bench search_performance # Test with different codebase sizes codegraph bench --size small # 1K vectors codegraph bench --size medium # 10K vectors codegraph bench --size large # 100K vectors # Test cache effectiveness codegraph search "auth" --repeat 10 --timing ``` ### Load Tests ```bash # Concurrent searches codegraph load-test --concurrent 10 --queries 100 # Memory usage under load codegraph load-test --monitor-memory ``` ## Conclusion **Current Performance:** Acceptable for small codebases, slow for repeated searches **Optimized Performance:** 10-20x faster with caching, scalable to 1M+ vectors **Critical Bottlenecks (Fix First):** 1. ❌ No FAISS index caching → **100-500ms per search** 2. ❌ No embedding generator caching → **50-500ms per search** **Quick Wins:** 1. Cache FAISS indexes → **10-50x speedup** (2 hours work) 2. Cache embedding generator → **10-100x speedup** (1 hour work) 3. Add performance timing → **Better visibility** (2 hours work) **Total Impact:** **10-20x faster searches with 5 hours of work**

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Jakedismo/codegraph-rust'

If you have feedback or need assistance with the MCP directory API, please join our Discord server