Reduce Claude Code token usage by 80%+ with intelligent file caching.
Semantic Cache MCP is a Model Context Protocol server that eliminates redundant token consumption when Claude reads files. Instead of sending full file contents on every request, it returns diffs for changed files, suppresses unchanged files entirely, and intelligently summarizes large files — all transparently through 12 purpose-built MCP tools.
Features
- 80%+ Token Reduction — Unchanged files cost ~0 tokens; changed files return diffs only
- Three-State Read Model — First read (full + cache), unchanged (message only, 99% savings), modified (diff, 80–95% savings); see the sketch after this list
- Semantic Search — Hybrid BM25 + HNSW vector search via local ONNX embeddings (configurable model, default BAAI/bge-small-en-v1.5), no API keys, works offline
- Batch Embedding — `batch_smart_read` pre-scans all new/changed files and embeds them in a single model call (N calls → 1)
- Content Hash Freshness — BLAKE3 hash detects when mtime changes but content is identical (touch, git checkout) — returns cached instead of re-reading
- Grep — Regex/literal pattern search across cached files with line numbers and context
- Semantic Summarization — 50–80% token savings on large files, structure preserved
- DoS Protection — Write size, edit size, and match count limits enforced at every boundary
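To make the three-state model concrete, here is a minimal Python sketch of the idea (the helper name and cache layout are hypothetical, not the server's actual internals): hash the content with BLAKE3, then return full content, a short notice, or a unified diff depending on cache state.

```python
# Sketch of the three-state read model (illustrative only — names and
# cache layout are hypothetical, not the server's internals).
# Requires the `blake3` package (pip install blake3).
import difflib
from blake3 import blake3

cache: dict[str, dict] = {}  # path -> {"hash": ..., "content": ...}

def smart_read(path: str) -> str:
    content = open(path, encoding="utf-8").read()
    digest = blake3(content.encode()).hexdigest()
    entry = cache.get(path)

    if entry is None:                     # first read: full content + cache
        cache[path] = {"hash": digest, "content": content}
        return content
    if entry["hash"] == digest:           # unchanged (even if mtime changed)
        return f"[unchanged: {path}]"     # ~5-token notice, 99% savings
    diff = "".join(difflib.unified_diff(  # modified: diff only, 80–95% savings
        entry["content"].splitlines(keepends=True),
        content.splitlines(keepends=True),
        fromfile=f"{path} (cached)",
        tofile=path,
    ))
    cache[path] = {"hash": digest, "content": content}
    return diff
```

Because the comparison is on a content hash rather than mtime, a `touch` or `git checkout` that rewrites timestamps without changing bytes still counts as unchanged.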
Installation
Add to Claude Code settings (`~/.claude/settings.json`):

Option 1 — uvx (always runs the latest version):

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}
```

Option 2 — uv tool install:

```sh
uv tool install semantic-cache-mcp
```

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}
```

Restart Claude Code.
GPU Acceleration (Optional)
For NVIDIA GPU acceleration, install with the `gpu` extra:

```sh
uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"
```

Then set `EMBEDDING_DEVICE=gpu` in your MCP config `env` block. The server falls back to CPU automatically if CUDA is unavailable.
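For example, a combined config under the uvx option might look like this (a sketch; adjust the command and args to match how you installed):

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp[gpu]"],
      "env": {
        "EMBEDDING_DEVICE": "gpu"
      }
    }
  }
}
```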
Custom Embedding Models
Any HuggingFace model with an ONNX export works — set `EMBEDDING_MODEL` in your env config:

```json
"env": {
  "EMBEDDING_MODEL": "nomic-ai/nomic-embed-text-v1.5"
}
```

If the model isn't in fastembed's built-in list, it is automatically downloaded and registered from HuggingFace Hub on first startup (ONNX file integrity is verified via SHA-256). See docs/env_variables.md for model recommendations.
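The integrity check is conceptually just a digest comparison, along the lines of this sketch (illustrative only; the expected hash would come from the model's metadata):

```python
# Illustrative SHA-256 integrity check for a downloaded ONNX file
# (not the server's actual code).
import hashlib
from pathlib import Path

def verify_onnx(path: Path, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```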
Block Native File Tools (Recommended)
Disable the client's built-in file tools so all file I/O routes through semantic-cache.

Claude Code — add to `~/.claude/settings.json`:

```json
{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}
```

OpenCode — add to `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}
```

CLAUDE.md Configuration

Add to `~/.claude/CLAUDE.md` to enforce semantic-cache globally:

```markdown
## Tools
- MUST use `semantic-cache-mcp` instead of native I/O tools (80%+ token savings)
```

Tools
Core
| Tool | Description |
|------|-------------|
| `read` | Smart file reading with diff mode. Three states: first read (full + cache), unchanged (99% savings), modified (diff, 80–95% savings). Use `diff_mode=false` to force full content after context compression. |
| `write` | Write files with cache integration. |
| `edit` | Find/replace using cached reads — three modes: full-file, scoped to a line range, or direct line replacement. |
| `batch_edit` | Up to 50 edits per call with partial success. Each entry can be find/replace, scoped, or line-range replacement. |
Discovery
| Tool | Description |
|------|-------------|
| `search` | Semantic/embedding search across cached files by meaning — not keywords. Seed the cache first with `batch_read`. |
| `similar` | Finds semantically similar cached files to a given path. Start with `k=3`. |
| `glob` | Pattern matching with cache status per file. |
| `batch_read` | Read 2+ files in one call. Supports glob expansion in paths, priority ordering, token budget, and per-file diff suppression for unchanged files. Pre-scans and batch-embeds all new/changed files in a single model call. Set `diff_mode=false` after context compression. |
| `grep` | Regex or literal pattern search across cached files with line numbers and optional context lines. Like ripgrep for the cache. |
| `diff` | Compare two files. Returns unified diff plus semantic similarity score. Large diffs are auto-summarized to stay within token budget. |
Management
| Tool | Description |
|------|-------------|
|  | Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates. |
|  | Reset all cache entries. |
Tool Reference
read

```
read path="/src/app.py"
read path="/src/app.py" diff_mode=true    # default
read path="/src/app.py" diff_mode=false   # full content (use after context compression)
read path="/src/app.py" offset=120 limit=80   # lines 120–199 only
```

Three states:

| State | Response | Token cost |
|-------|----------|------------|
| First read | Full content + cached | Normal |
| Unchanged | Message only | ~5 tokens |
| Modified | Unified diff only | 5–20% of original |
write

```
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
```

edit

```
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83
```

Mode selection:
| Mode | Parameters | Best for |
|------|------------|----------|
| Find/replace | `old_string`, `new_string` | Unique strings, no line numbers known |
| Scoped | `old_string`, `new_string`, `start_line`, `end_line` | Shorter `old_string` when the line range is known |
| Line replace | `new_string`, `start_line`, `end_line` | Maximum token savings when line numbers are known |
batch_edit

```
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
```

search

```
search query="authentication middleware logic" k=5
search query="database connection pooling" k=3
```

similar

```
similar path="/src/auth.py" k=3
similar path="/tests/test_auth.py" k=5
```

glob

```
glob pattern="**/*.py" directory="./src"
glob pattern="**/*.py" directory="./src" cached_only=true
```

batch_read

```
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' diff_mode=true priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000 diff_mode=false
```

- Glob expansion: `src/*.py` is expanded inline (max 50 files per glob)
- Priority ordering: `priority` paths are read first, the remainder sorted smallest-first
- Token budget: stops reading new files once `max_total_tokens` is reached; skipped files include an `est_tokens` hint (see the sketch after this list)
- Unchanged suppression: unchanged files appear in `summary.unchanged` with no content (zero tokens)
- Batch embedding: pre-scans all new/changed files and embeds them in a single model call before reading — N model calls reduced to 1
- Context compression recovery: set `diff_mode=false` when Claude needs full content after losing context
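A rough Python sketch of that ordering-and-budget logic (hypothetical — the server's real implementation and token estimator may differ; sizes here are approximated at ~4 bytes per token):

```python
# Sketch of batch_read's selection order under a token budget
# (illustrative only, not the server's actual code).
import os

def plan_batch_read(paths: list[str], priority: list[str],
                    max_total_tokens: int) -> tuple[list[str], list[str]]:
    est = {p: os.path.getsize(p) // 4 for p in paths}  # crude ~4 bytes/token
    rest = sorted((p for p in paths if p not in priority), key=est.get)
    ordered = [p for p in priority if p in paths] + rest  # priority paths first

    selected, skipped, budget = [], [], max_total_tokens
    for p in ordered:
        if est[p] <= budget:
            selected.append(p)
            budget -= est[p]
        else:
            skipped.append(p)  # would be reported with an est_tokens hint
    return selected, skipped
```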
diff

```
diff path1="/src/v1.py" path2="/src/v2.py"
```

Configuration
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `LOG_LEVEL` |  | Logging verbosity |
| `TOOL_OUTPUT_MODE` |  | Response detail (e.g., `compact`) |
|  |  | Global response token cap |
| `MAX_CONTENT_SIZE` |  | Max bytes returned by read operations |
|  |  | Max cache entries before LRU-K eviction |
| `EMBEDDING_DEVICE` |  | Embedding hardware: `cpu` or `gpu` |
| `EMBEDDING_MODEL` | `BAAI/bge-small-en-v1.5` | FastEmbed model for search/similarity (see docs/env_variables.md for options) |
| `SEMANTIC_CACHE_DIR` | (platform) | Override cache/database directory path |
See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.
Safety Limits
| Limit | Value | Protects Against |
|-------|-------|------------------|
| Write size | 10 MB | Memory exhaustion via large writes |
| Edit size | 10 MB | Memory exhaustion via large file edits |
| Match count | 10,000 | CPU exhaustion via unbounded matches |
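Conceptually, each boundary applies a simple guard before doing any work, along these lines (a sketch using the limits from the table above; not the server's actual code):

```python
# Illustrative boundary guards for the limits above (not actual server code).
MAX_WRITE_BYTES = 10 * 1024 * 1024  # write size limit
MAX_EDIT_BYTES = 10 * 1024 * 1024   # edit size limit
MAX_MATCHES = 10_000                # match count limit

def check_write_size(content: str) -> None:
    if len(content.encode("utf-8")) > MAX_WRITE_BYTES:
        raise ValueError("write rejected: content exceeds 10 MB")

def check_match_count(count: int) -> None:
    if count > MAX_MATCHES:
        raise ValueError("aborted: more than 10,000 matches")
```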
MCP Server Config
```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}
```

Cache location: `~/.cache/semantic-cache-mcp/` (Linux), `~/Library/Caches/semantic-cache-mcp/` (macOS), `%LOCALAPPDATA%\semantic-cache-mcp\` (Windows). Override with `SEMANTIC_CACHE_DIR`.
How It Works
```
┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│   Claude    │────▶│  smart_read  │────▶│   Cache Lookup   │
│    Code     │     │              │     │ (VectorStorage)  │
└─────────────┘     └──────────────┘     └──────────────────┘
                                                   │
                                 ┌─────────────────┼─────────────────┐
                                 ▼                 ▼                 ▼
                            ┌──────────┐     ┌──────────┐    ┌──────────────┐
                            │Unchanged │     │ Changed  │    │ New / Large  │
                            │  ~0 tok  │     │   diff   │    │ summarize or │
                            │  (99%)   │     │ (80-95%) │    │ full content │
                            └──────────┘     └──────────┘    └──────────────┘
```

Performance
Measured on this project's 30 source files (~136K tokens). Benchmarks run on a standard dev machine (CPU embeddings).
Token Savings
| Phase | Scenario | Savings |
|-------|----------|---------|
| Cold read | First read, no cache | 0% (baseline) |
| Unchanged re-read | Same files, no modifications | 99.1% |
| Content hash | Touched files (mtime changed, content identical) | 99.1% |
| Small edits | ~5% of lines changed in 30% of files | 98.1% |
| Batch read | All files via `batch_read` | 99.1% |
| Search | 5 queries × k=5, previews vs full reads | 98.4% |
| Overall (cached) | Phases 2–6 combined | 98.8% |
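As a worked example of what the overall figure means for this corpus:

```python
# Arithmetic from the figures above: ~136K-token corpus, 98.8% overall savings.
baseline = 136_000
cached_cost = baseline * (1 - 0.988)
print(f"{cached_cost:,.0f} tokens per cached pass")  # -> 1,632 tokens
```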
Operation Latency
| Operation | Time |
|-----------|------|
| Unchanged read (single file) | 2 ms |
| Unchanged re-read (29 files) | 25 ms |
| Batch read (29 files, diff mode) | 35 ms |
| Cold read (29 files, incl. embed) | 2,554 ms |
| Write (200-line file) | 47 ms |
| Edit (scoped find/replace) | 48 ms |
| Semantic search (k=5) | 4 ms |
| Semantic search (k=10) | 5 ms |
| Find similar (k=3) | 49 ms |
| Grep (literal) | 1 ms |
| Grep (regex) | 2 ms |
| Embedding model warmup | 206 ms |
| Single embedding (largest file) | 47 ms |
| Batch embedding (10 files) | 469 ms |
Run benchmarks yourself:
```sh
uv run python benchmarks/benchmark_token_savings.py   # token savings
uv run python benchmarks/benchmark_performance.py     # operation latency
```

See docs/performance.md for full benchmarks and methodology.
Documentation
| Guide | Description |
|-------|-------------|
|  | Component design, algorithms, data flow |
| docs/performance.md | Optimization techniques, benchmarks |
|  | Threat model, input validation, size limits |
|  | Programmatic API, custom storage backends |
|  | Common issues, debug logging |
| docs/env_variables.md | All configurable env vars with defaults and examples |
Contributing
```sh
git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest
```

See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.
License
MIT License — use freely in personal and commercial projects.
Credits
Built with FastMCP 3.0 and:
- FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
- SimpleVecDB — HNSW vector storage with FTS5 keyword search
- Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
- BLAKE3 cryptographic hashing for content freshness
- LRU-K frequency-aware cache eviction