# Files-DB-MCP: Vector Search for Code Projects

A local vector database system that provides LLM coding agents with fast, efficient semantic search capabilities for software projects via the Model Context Protocol (MCP).
## Features
- Zero Configuration - Auto-detects project structure with sensible defaults
- Real-Time Monitoring - Continuously watches for file changes
- Vector Search - Semantic search for finding relevant code
- MCP Interface - Compatible with Claude Code and other LLM tools
- Open Source Models - Uses Hugging Face models for code embeddings
## Installation
### Option 1: Clone and Setup (Recommended)
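The original commands aren't preserved in this copy; a minimal sketch, assuming you clone the repository and start the services with Docker Compose (the URL is a placeholder, and the compose file may live under `.docker/` per the repository structure below):

```bash
# Clone the repository (URL is illustrative; use the project's actual URL)
git clone https://github.com/<owner>/files-db-mcp.git
cd files-db-mcp

# Build and start the services (adjust the path if the compose file lives under .docker/)
docker compose up -d
```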
### Option 2: Automated Installation Script
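The script's location isn't preserved in this copy; the usual curl-pipe pattern would look like the following, with the URL as a stand-in rather than the project's actual installer path:

```bash
# Fetch and run the installer in one step (URL is a placeholder, not the real location)
curl -fsSL https://example.com/files-db-mcp/install.sh | bash
```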
## Usage
After installation, run in any project directory:
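The command itself is missing from this copy; assuming the installer puts a `files-db-mcp` launcher on your PATH (the name is an assumption based on the project name):

```bash
# From the root of the project you want indexed
cd /path/to/your/project
files-db-mcp
```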
The service will:
- Detect your project files
- Start indexing in the background
- Begin responding to MCP search queries immediately
## Requirements
- Docker
- Docker Compose
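You can verify both prerequisites from a shell before installing:

```bash
# Each command should print a version string if the tool is installed
docker --version
docker compose version
```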
## Configuration
Files-DB-MCP works without configuration, but you can customize it with environment variables:
- `EMBEDDING_MODEL` - Change the embedding model (default: 'jinaai/jina-embeddings-v2-base-code' or a project-specific model)
- `FAST_STARTUP` - Set to 'true' to use a smaller model for faster startup (default: 'false')
- `QUANTIZATION` - Enable/disable quantization (default: 'true')
- `BINARY_EMBEDDINGS` - Enable/disable binary embeddings (default: 'false')
- `IGNORE_PATTERNS` - Comma-separated list of files/directories to ignore
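For example, the variables can be set inline when launching the service (the `files-db-mcp` launcher name is the same assumption as in the usage section):

```bash
# Faster first start: smaller model, skip bulky directories
FAST_STARTUP=true \
IGNORE_PATTERNS="node_modules,.git,dist" \
files-db-mcp
```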
## First-Time Startup
On first run, Files-DB-MCP will download embedding models which may take several minutes depending on:
- The size of the selected model (300-500MB for high-quality models)
- Your internet connection speed
Subsequent startups will be much faster, as models are cached in a persistent Docker volume. For a faster first startup, set `FAST_STARTUP=true` to use a smaller model.
## Model Caching
Files-DB-MCP automatically persists downloaded embedding models, so you only need to download them once:
- Models are stored in a Docker volume called `model_cache`
- This volume persists between container restarts and across different projects
- The cache is shared by all projects using Files-DB-MCP on your machine
- You don't need to re-download the model for each project
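To check the cache on your machine, you can inspect the volume with Docker (Compose may prefix the volume name with a project name, so filter rather than match exactly):

```bash
# Find the cache volume, then inspect it
docker volume ls | grep model_cache
docker volume inspect model_cache
```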
## Claude Code Integration
Add to your Claude Code configuration:
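The configuration snippet is missing from this copy. One hedged way to wire it up is Claude Code's `claude mcp add` command, which writes the server entry into your configuration for you (the server command `files-db-mcp` is the same assumed launcher as above):

```bash
# Register the local MCP server with Claude Code under the name "files-db-mcp"
claude mcp add files-db-mcp -- files-db-mcp
```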
For details, see Claude MCP Integration.
## Documentation
- Installation Guide - Detailed setup instructions
- API Reference - Complete API documentation
- Configuration Guide - Configuration options
## Repository Structure
- `/src` - Source code
- `/tests` - Unit and integration tests
- `/docs` - Documentation
- `/scripts` - Utility scripts
- `/install` - Installation scripts
- `/.docker` - Docker configuration
- `/config` - Configuration files
- `/ai-assist` - AI assistance files
## License
## Contributing
Contributions welcome! Please feel free to submit a pull request.