A privacy-first local document search server that enables semantic search through your documents without sending data to external services. All operations run entirely on your machine using local embedding models and a LanceDB vector database.
Core Capabilities:
- Semantic Document Search (`query_documents`): Search using natural language queries that understand meaning rather than keywords. Returns 1-20 relevant passages with similarity scores.
- Document Ingestion (`ingest_file`): Process and index PDF, DOCX, TXT, and Markdown files through text extraction, intelligent chunking with overlap, and embedding generation. Automatically updates documents upon re-ingestion.
- File Management (`list_files`): View all indexed documents with file paths and chunk counts. Permanently delete specific files and their associated data.
- System Status (`status`): Monitor server health including total documents, chunks, database size, memory usage, and configuration.
Key Features:
- Complete Privacy: No data leaves your machine after the initial model download; strict path restriction to the configured `BASE_DIR`
- Offline Operation: Works without internet once the embedding model is cached
- Fast Performance: Query responses typically under 3 seconds, even with thousands of chunks
- Zero Cost: No API fees or subscriptions
- No Complex Setup: Runs via `npx` with no installation required
Provides specialized support for ingesting and indexing Markdown documents, preserving the integrity of code blocks and structural elements for improved semantic search and retrieval.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@Local RAGfind the error handling section in our API docs"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MCP Local RAG
Local RAG for developers using MCP. Semantic search with keyword boost for exact technical terms — fully private, zero setup.
Features
- Semantic search with keyword boost: Vector search first, then keyword matching boosts exact matches. Terms like `useEffect`, error codes, and class names rank higher, not just semantically guessed.
- Smart semantic chunking: Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries, keeping related content together and splitting where topics change.
- Quality-first result filtering: Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.
- Runs entirely locally: No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.
- Zero-friction setup: One `npx` command. No Docker, no Python, no servers to manage. Designed for Cursor, Codex, and Claude Code via MCP.
Quick Start
Set `BASE_DIR` to the folder you want to search. Documents must live under it.
Add the MCP server to your AI coding tool:
For Cursor — Add to ~/.cursor/mcp.json:
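A minimal sketch; the server key `local-rag` and the documents path are placeholders, and `-y` simply skips the npx install prompt:

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```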
For Codex — Add to ~/.codex/config.toml:
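A matching sketch for Codex (note the `mcp_servers` key with an underscore, as the client-specific setup section below points out):

```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]
env = { BASE_DIR = "/path/to/your/documents" }
```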
For Claude Code — Run this command:
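A sketch of the command; the server name and `BASE_DIR` path are placeholders:

```bash
claude mcp add local-rag -e BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```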
Restart your tool, then start using it:
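For example, a first prompt (the file name is illustrative):

```
Ingest ./docs/api-spec.md, then find the section on rate limiting.
```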
That's it. No installation, no Docker, no complex setup.
Why This Exists
You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.
Privacy. Your documents might contain sensitive data. This runs entirely locally.
Cost. External embedding APIs charge per use. This is free after the initial model download.
Offline. Works without internet after setup.
Code search. Pure semantic search misses exact terms like `useEffect` or `ERR_CONNECTION_REFUSED`. Keyword boost catches both meaning and exact matches.
Usage
The server provides six MCP tools: `ingest_file`, `ingest_data`, `query_documents`, `list_files`, `delete_file`, and `status`.
Ingesting Documents
Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.
Re-ingesting the same file replaces the old version automatically.
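For example, a typical request to your assistant (the path is illustrative):

```
Ingest the file ./docs/architecture-spec.pdf
```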
Ingesting HTML Content
Use ingest_data to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):
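For example, you might tell your assistant (the URL is just an illustration):

```
Fetch https://react.dev/reference/react/useEffect and ingest it into the local RAG.
```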
The server extracts main content using Readability (removes navigation, ads, etc.), converts to Markdown, and indexes it. Perfect for:
- Web documentation
- HTML retrieved by the AI assistant
- Clipboard content
HTML is automatically cleaned—you get the article content, not the boilerplate.
Note: The RAG server itself doesn't fetch web content; your AI assistant retrieves it and passes the HTML to `ingest_data`. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.
Searching Documents
Search uses semantic similarity with keyword boost. This means `useEffect` finds documents containing that exact term, not just semantically similar React concepts.
Results include text content, source file, and relevance score. Adjust result count with `limit` (1-20, default 10).
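For example (phrasing is illustrative; the assistant maps it to `query_documents` with a `limit`):

```
Search my docs for "retry logic for ERR_CONNECTION_REFUSED" and show the top 5 results.
```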
Managing Files
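Use `list_files` and `delete_file` through your assistant; for example (the file name is illustrative):

```
List all ingested files.
Delete old-api-spec.docx from the index.
```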
Search Tuning
Adjust these for your use case:
| Variable | Default | Description |
|----------|---------|-------------|
| | | Keyword boost factor. `0` = semantic only; higher = stronger keyword boost. |
| | (not set) | |
| | (not set) | Filter out low-relevance results (e.g., …). |
Code-focused tuning
For codebases and API specs, increase the keyword boost so exact identifiers (`useEffect`, `ERR_*`, class names) dominate ranking:
- `0.7`: balanced semantic + keyword
- `1.0`: aggressive; exact matches strongly rerank results
Keyword boost is applied after semantic filtering, so it improves precision without surfacing unrelated matches.
How It Works
TL;DR:
- Documents are chunked by semantic similarity, not fixed character counts
- Each chunk is embedded locally using Transformers.js
- Search uses semantic similarity with keyword boost for exact matches
- Results are filtered based on relevance gaps, not raw scores
Details
When you ingest a document, the parser extracts text based on file type (PDF via `pdfjs-dist`, DOCX via `mammoth`, text files directly).
The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
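A conceptual sketch of this boundary detection, not the actual implementation: `embed` stands in for the Transformers.js feature-extraction pipeline, and the threshold value is illustrative rather than the project's tuned one.

```typescript
// Split where similarity between adjacent sentence embeddings drops.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function chunkBySimilarity(
  sentences: string[],
  embed: (text: string) => Promise<number[]>,
  threshold = 0.55, // illustrative; a real value would be tuned
): Promise<string[]> {
  if (sentences.length === 0) return [];
  const vectors = await Promise.all(sentences.map(embed));
  const chunks: string[] = [];
  let current: string[] = [sentences[0]];
  for (let i = 1; i < sentences.length; i++) {
    // A similarity drop marks a topic boundary: close the current chunk.
    if (cosine(vectors[i - 1], vectors[i]) < threshold) {
      chunks.push(current.join(" "));
      current = [];
    }
    current.push(sentences[i]);
  }
  chunks.push(current.join(" "));
  return chunks;
}
```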
Each chunk goes through a Transformers.js embedding model (default: `all-MiniLM-L6-v2`, configurable via `MODEL_NAME`), converting text into vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.
When you search:
1. Your query becomes a vector using the same model
2. Semantic (vector) search finds the most relevant chunks
3. Quality filters apply (distance threshold, grouping)
4. Keyword matches boost rankings for exact term matching

The keyword boost ensures exact terms like `useEffect` or error codes rank higher when they match.
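A sketch of how such a post-filter boost could work; the field names and the boost formula are assumptions for illustration, not the project's actual code.

```typescript
interface Hit {
  text: string;
  score: number; // similarity from the vector search (higher = better)
}

// Rerank semantically filtered hits: hits containing exact query terms
// get a boost proportional to the fraction of terms matched.
function keywordBoost(hits: Hit[], query: string, boost = 0.7): Hit[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  if (terms.length === 0) return hits;
  return hits
    .map((hit) => {
      const haystack = hit.text.toLowerCase();
      const matched = terms.filter((t) => haystack.includes(t)).length;
      const boosted = hit.score * (1 + boost * (matched / terms.length));
      return { ...hit, score: boosted };
    })
    .sort((a, b) => b.score - a.score);
}
```

Because the boost only reorders chunks that already passed the semantic filter, it sharpens ranking without pulling in unrelated matches.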
Agent Skills
Agent Skills provide optimized prompts that help AI assistants use RAG tools more effectively. Install skills for better query formulation, result interpretation, and ingestion workflows.
Skills include:
- Query optimization: Better search query formulation
- Result interpretation: Score thresholds and filtering guidelines
- HTML ingestion: Format selection and source naming
Ensuring Skill Activation
Skills are loaded automatically in most cases—AI assistants scan skill metadata and load relevant instructions when needed. For consistent behavior:
Option 1: Explicit request (natural language)
Before RAG operations, request in natural language:
- "Use the mcp-local-rag skill for this search"
- "Apply RAG best practices from skills"
Option 2: Add to agent instruction file
Add to your AGENTS.md, CLAUDE.md, or other agent instruction file:
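A minimal snippet you might add (the wording is illustrative, not a prescribed format):

```markdown
## Document search
Before searching or ingesting documents with mcp-local-rag, load its
skills and follow their guidance on query formulation, result
interpretation, and HTML ingestion.
```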
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `BASE_DIR` | Current directory | Document root directory (security boundary) |
| `DB_PATH` | `./lancedb/` | Vector database location |
| | | Model cache directory |
| `MODEL_NAME` | `all-MiniLM-L6-v2` | HuggingFace model ID (available models) |
| `MAX_FILE_SIZE` | 104857600 (100MB) | Maximum file size in bytes |
Model choice tips:
- Multilingual docs → e.g., `onnx-community/embeddinggemma-300m-ONNX` (100+ languages)
- Scientific papers → e.g., `sentence-transformers/allenai-specter` (citation-aware)
- Code repositories → default often suffices; keyword boost matters more (or `jinaai/jina-embeddings-v2-base-code`)
⚠️ Changing `MODEL_NAME` changes embedding dimensions. Delete `DB_PATH` and re-ingest after switching models.
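For example, with the default locations (assuming you have not moved `DB_PATH`):

```bash
rm -rf ./lancedb    # delete the vector database (default DB_PATH)
# update MODEL_NAME in your MCP config, then re-ingest your documents
```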
Client-Specific Setup
Cursor: global `~/.cursor/mcp.json`, project `.cursor/mcp.json`
Codex: `~/.codex/config.toml` (note: must use `mcp_servers` with underscore)
Claude Code:
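The same command as in Quick Start (server name and path are placeholders):

```bash
claude mcp add local-rag -e BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```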
First Run
The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.
Security
- Path restriction: Only files within `BASE_DIR` are accessible
- Local only: No network requests after model download
- Model source: Official HuggingFace repository (verify here)
Performance
Tested on MacBook Pro M1 (16GB RAM), Node.js 22:
- Query speed: ~1.2 seconds for 10,000 chunks (p90 < 3s)
- Ingestion (10MB PDF):
  - PDF parsing: ~8s
  - Chunking: ~2s
  - Embedding: ~30s
  - DB insertion: ~5s
- Memory: ~200MB idle, ~800MB peak (50MB file ingestion)
- Concurrency: handles 5 parallel queries without degradation
"No results found"
Documents must be ingested first. Run "List all ingested files" to verify.
Model download failed
Check internet connection. If behind a proxy, configure network settings. The model can also be downloaded manually.
"File too large"
Default limit is 100MB. Split large files or increase `MAX_FILE_SIZE`.
Slow queries
Check chunk count with `status`. Large documents with many chunks may slow queries. Consider splitting very large files.
"Path outside BASE_DIR"
Ensure file paths are within `BASE_DIR`. Use absolute paths.
MCP client doesn't see tools
- Verify config file syntax
- Restart the client completely (Cmd+Q on Mac for Cursor)
- Test directly: `npx mcp-local-rag` should run without errors
FAQ
Is this really private? Yes. After model download, nothing leaves your machine. Verify with network monitoring.
Can I use this offline? Yes, after the first model download (~90MB).
How does this compare to cloud RAG? Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.
What file formats are supported? PDF, DOCX, TXT, Markdown, and HTML (via `ingest_data`). Not yet: Excel, PowerPoint, images.
Can I change the embedding model? Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.
GPU acceleration? Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.
Multi-user support? No. Designed for single-user, local access. Multi-user would require authentication/access control.
How do I back up? Copy the `DB_PATH` directory (default: `./lancedb/`).
Building from Source
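The repository URL isn't shown here; a typical flow, assuming pnpm (which the contributing checklist below uses) and a standard `build` script:

```bash
git clone <repository-url>
cd mcp-local-rag
pnpm install
pnpm run build
```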
Testing
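Run the test suite (the command is confirmed by the contributing checklist below):

```bash
pnpm test
```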
Code Quality
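Run the aggregate quality checks (likewise taken from the contributing checklist):

```bash
pnpm run check:all
```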
Project Structure
Contributing
Contributions welcome. Before submitting a PR:
- Run tests: `pnpm test`
- Check quality: `pnpm run check:all`
- Add tests for new features
- Update docs if behavior changes
License
MIT License. Free for personal and commercial use.
Blog Posts
Building a Local RAG for Agentic Coding — Technical deep-dive into the semantic chunking and hybrid search design.
Acknowledgments
Built with Model Context Protocol by Anthropic, LanceDB, and Transformers.js.