Claude Context without the cloud. Semantic code search that runs 100% locally using EmbeddingGemma. No API keys, no costs, your code never leaves your machine.
🔍 Find code by meaning, not strings
🔒 100% local - completely private
💰 Zero API costs - forever free
⚡ Fewer tokens in Claude Code and fast local searches
An intelligent code search system that uses Google's EmbeddingGemma model and advanced multi-language chunking to provide semantic search capabilities across 15 file extensions and 9+ programming languages, integrated with Claude Code via MCP (Model Context Protocol).
🚧 Beta Release
Core functionality working
Installation tested on Mac/Linux
Benchmarks coming soon
Please report issues!
Demo
Features
Multi-language support: 9+ programming languages with 15 file extensions
Intelligent chunking: AST-based (Python) + tree-sitter (JS/TS/Go/Java/Rust/C/C++/C#/Svelte)
Semantic search: Natural language queries to find code across all languages
Rich metadata: File paths, folder structure, semantic tags, language-specific info
MCP integration: Direct integration with Claude Code
Local processing: All embeddings stored locally, no API calls
Fast search: FAISS for efficient similarity search
Why this
Claude’s code context is powerful, but sending your code to the cloud costs tokens and raises privacy concerns. This project keeps semantic code search entirely on your machine. It integrates with Claude Code via MCP, so you keep the same workflow—just faster, cheaper, and private.
Requirements
Python 3.12+
Disk: 1–2 GB free (model + caches + index)
Optional: NVIDIA GPU (CUDA 11/12) for FAISS acceleration; Apple Silicon (MPS) for embedding acceleration. These also speed up running the embedding model with SentenceTransformer, but everything still works on CPU.
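Device selection happens automatically at runtime (see Model Configuration below). As a rough sketch of that behaviour, assuming PyTorch is available through the project's dependencies (this is illustrative, not necessarily the project's exact logic):

```python
# Rough sketch of automatic device selection (PyTorch API; illustrative only).
import torch

def pick_device() -> str:
    if torch.cuda.is_available():              # NVIDIA GPU with CUDA drivers
        return "cuda"
    if torch.backends.mps.is_available():      # Apple Silicon
        return "mps"
    return "cpu"                               # everything still works on CPU
```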
Install & Update
Install (one‑liner)
If your system doesn't have curl, you can use wget instead (see the sketch below).
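The exact install-script URL comes from the project repository and is not reproduced here; the placeholder below only illustrates the curl-vs-wget shape of the one-liner:

```bash
# Placeholder URL -- substitute the install script address from the project repository.
curl -fsSL https://example.com/claude-context-local/install.sh | bash
# equivalent without curl:
wget -qO- https://example.com/claude-context-local/install.sh | bash
```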
Update existing installation
Run the same install command to update:
The installer will:
Detect your existing installation
Preserve your embeddings and indexed projects in ~/.claude_code_search
Stash any local changes automatically (if running via curl)
Update the code and dependencies
What the installer does
Installs uv if missing and creates a project venv
Clones/updates claude-context-local in ~/.local/share/claude-context-local
Installs Python dependencies with uv sync
Downloads the EmbeddingGemma model (~1.2–1.3 GB) if not already cached
Tries to install faiss-gpu if an NVIDIA GPU is detected (interactive mode only)
Preserves all your indexed projects and embeddings across updates
Quick Start
1) Register the MCP server (stdio)
Then open Claude Code; the server will run in stdio mode inside the uv environment.
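A typical registration looks like the following; the server entry point shown here is an assumption, so check the repository for the exact command:

```bash
# Hypothetical example -- the script path passed to "python" is an assumption.
claude mcp add code-search -- \
  uv run --directory ~/.local/share/claude-context-local python mcp_server/server.py
```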
2) Index your codebase
Open Claude Code and say: index this codebase. No manual commands needed.
3) Use in Claude Code
Interact via chat inside Claude Code; no function calls or commands are required.
Architecture
Data flow
Intelligent Chunking
The system uses advanced parsing to create semantically meaningful chunks across all supported languages:
Chunking Strategies
Python: AST-based parsing for rich metadata extraction
All other languages: Tree-sitter parsing with language-specific node type recognition
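For illustration, a minimal tree-sitter chunker might look like the sketch below (using the tree_sitter_languages helper package and JavaScript node types as an example; the project's actual chunkers handle many more node types, nested definitions, and richer metadata):

```python
# Simplified sketch of tree-sitter based chunking (illustrative only).
from tree_sitter_languages import get_parser

def extract_chunks(source: bytes, language: str = "javascript"):
    parser = get_parser(language)
    tree = parser.parse(source)
    chunks = []
    for node in tree.root_node.children:       # top-level definitions only
        if node.type in ("function_declaration", "class_declaration"):
            chunks.append({
                "type": node.type,
                "start_line": node.start_point[0] + 1,
                "end_line": node.end_point[0] + 1,
                "text": source[node.start_byte:node.end_byte].decode(),
            })
    return chunks

print(extract_chunks(b"function add(a, b) { return a + b; }"))
```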
Chunk Types Extracted
Functions/Methods: Complete with signatures, docstrings, decorators
Classes/Structs: Full definitions with member functions as separate chunks
Interfaces/Traits: Type definitions and contracts
Enums/Constants: Value definitions and module-level declarations
Namespaces/Modules: Organizational structures
Templates/Generics: Parameterized type definitions
Rich Metadata for All Languages
File path and folder structure
Function/class/type names and relationships
Language-specific features (async, generics, modifiers, etc.)
Parent-child relationships (methods within classes)
Line numbers for precise code location
Semantic tags (component, export, async, etc.)
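An illustrative metadata record for a single chunk might look like this; the field names and values are examples, not the project's exact schema:

```python
# Example only -- field names and values are illustrative, not the real schema.
chunk_metadata = {
    "file_path": "src/auth/session.py",
    "folder": "src/auth",
    "chunk_type": "method",
    "name": "refresh_token",
    "parent": "SessionManager",       # parent-child relationship (method in class)
    "start_line": 42,
    "end_line": 67,
    "language": "python",
    "tags": ["async", "export"],      # semantic tags
}
```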
Configuration
Environment Variables
CODE_SEARCH_STORAGE: Custom storage directory (default: ~/.claude_code_search)
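For example, to keep the index and caches on another disk (the path here is just an example):

```bash
export CODE_SEARCH_STORAGE=/mnt/storage/code_search
```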
Model Configuration
The system uses google/embeddinggemma-300m by default.
Notes:
Download size: ~1.2–2 GB on disk depending on variant and caches
Device selection: auto (CUDA on NVIDIA, MPS on Apple Silicon, else CPU)
You can pre-download via installer or at first use
FAISS backend: CPU by default. If an NVIDIA GPU is detected, the installer attempts to install faiss-gpu-cu12 (or faiss-gpu-cu11) and the index will run on GPU automatically at runtime while saving as CPU for portability.
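The "search on GPU, save as CPU" pattern uses standard faiss calls; a sketch (not the project's exact code) looks like this:

```python
# Sketch of the GPU-at-runtime / CPU-on-disk pattern using standard faiss calls.
import faiss
import numpy as np

dim = 768
index = faiss.IndexFlatIP(dim)                         # exact inner-product index
index.add(np.random.rand(1000, dim).astype("float32"))

if hasattr(faiss, "StandardGpuResources"):             # faiss-gpu-cu11/cu12 installed
    res = faiss.StandardGpuResources()
    gpu_index = faiss.index_cpu_to_gpu(res, 0, index)  # run searches on the GPU
    # ... serve queries from gpu_index ...
    index = faiss.index_gpu_to_cpu(gpu_index)          # convert back before saving

faiss.write_index(index, "code.index")                 # CPU index loads anywhere
```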
Hugging Face authentication (if prompted)
The google/embeddinggemma-300m model is hosted on Hugging Face and may require accepting terms and/or authentication to download.
Visit the model page (https://huggingface.co/google/embeddinggemma-300m) and accept any terms.
Authenticate one of the following ways:
CLI (recommended): uv run huggingface-cli login (paste your token from https://huggingface.co/settings/tokens)
Environment variable: export HUGGING_FACE_HUB_TOKEN=hf_XXXXXXXXXXXXXXXXXXXXXXXX
After the first successful download, we cache the model under ~/.claude_code_search/models and prefer offline loads for speed and reliability.
Supported Languages & Extensions
Fully Supported (15 extensions across 9+ languages):
| Language   | Extensions           |
|------------|----------------------|
| Python     | .py                  |
| JavaScript | .js, .jsx            |
| TypeScript | .ts, .tsx            |
| Java       | .java                |
| Go         | .go                  |
| Rust       | .rs                  |
| C          | .c                   |
| C++        | .cpp, .hpp, .cc, .h  |
| C#         | .cs                  |
| Svelte     | .svelte              |
Total: 15 file extensions across 9+ programming languages
Storage
Data is stored in the configured storage directory:
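Only the models subdirectory is documented above; the rest of the layout below is an illustrative guess, not an exact listing:

```
~/.claude_code_search/
├── models/     # cached EmbeddingGemma weights (offline loads preferred)
└── ...         # per-project chunks, embeddings, and FAISS indexes
```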
Performance
Model size: ~1.2GB (EmbeddingGemma-300m and caches)
Embedding dimension: 768 (can be reduced for speed)
Index types: Flat (exact) or IVF (approximate) based on dataset size
Batch processing: Configurable batch sizes for embedding generation
Tips:
First index on a large repo will take time (model load + chunk + embed). Subsequent runs are incremental.
With GPU FAISS, searches on large indexes are significantly faster.
Embeddings automatically use CUDA (NVIDIA) or MPS (Apple) if available.
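The two levers listed above (reduced embedding dimension and Flat vs. IVF index selection) can be sketched with standard sentence-transformers and faiss calls; the thresholds and parameters here are illustrative, not the project's defaults:

```python
# Illustrative sketch: Matryoshka-style truncation plus index-type selection.
import faiss
from sentence_transformers import SentenceTransformer

# EmbeddingGemma supports truncated (Matryoshka) embeddings; 768 is the full size.
model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)
chunks = ["def parse_config(path): ...", "class SessionManager: ..."]
vecs = model.encode(chunks, batch_size=32, normalize_embeddings=True)

n, dim = vecs.shape
if n < 10_000:                                   # small corpus: exact search
    index = faiss.IndexFlatIP(dim)
else:                                            # large corpus: approximate IVF
    quantizer = faiss.IndexFlatIP(dim)
    index = faiss.IndexIVFFlat(quantizer, dim, 1024, faiss.METRIC_INNER_PRODUCT)
    index.train(vecs)
index.add(vecs)
```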
Troubleshooting
Common Issues
Import errors: Ensure all dependencies are installed with uv sync
Model download fails: Check internet connection and disk space
Memory issues: Reduce batch size in indexing script
No search results: Verify the codebase was indexed successfully
FAISS GPU not used: Ensure nvidia-smi is available and CUDA drivers are installed; re-run the installer to pick up faiss-gpu-cu12/cu11
Force offline: We auto-detect a local cache and prefer offline loads; you can also set HF_HUB_OFFLINE=1
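Both of these checks can be run directly from a shell:

```bash
nvidia-smi              # confirm the NVIDIA driver sees a GPU before re-running the installer
export HF_HUB_OFFLINE=1 # force offline model loads once the model is cached locally
```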
Ignored directories (for speed and noise reduction)
node_modules, .venv, venv, env, .env, .direnv, __pycache__, .pytest_cache, .mypy_cache, .ruff_cache, .pytype, .ipynb_checkpoints, build, dist, out, public, .next, .nuxt, .svelte-kit, .angular, .astro, .vite, .cache, .parcel-cache, .turbo, coverage, .coverage, .nyc_output, .gradle, .idea, .vscode, .docusaurus, .vercel, .serverless, .terraform, .mvn, .tox, target, bin, obj
Contributing
This is a research project focused on intelligent code chunking and search. Feel free to experiment with:
Different chunking strategies
Alternative embedding models
Enhanced metadata extraction
Performance optimizations
License
Licensed under the GNU General Public License v3.0 (GPL-3.0). See the LICENSE file for details.
Inspiration
This project draws inspiration from zilliztech/claude-context. I adapted the concepts to a Python implementation with fully local embeddings.