Token Saver MCP — AI as a Full-Stack Developer
Transform AI from a code suggester into a true full-stack developer — with instant access to code intelligence and real browser control.
📚 Full Usage Guide & Examples → | 📖 Detailed Technical README → | 🔄 Releases
🚀 What is Token Saver MCP?
Modern AI coding assistants waste enormous amounts of context (and your money) by stuffing full grep/search results into the model window. That leads to:
- ❌ Slow lookups (seconds instead of milliseconds)
- ❌ Thousands of wasted tokens per query
- ❌ AI “losing its train of thought” in cluttered context
Token Saver MCP fixes this.
It gives AI assistants direct access to VSCode’s Language Server Protocol (LSP) and the Chrome DevTools Protocol (CDP), so they can work like real developers:
- Instantly navigate & refactor code
- Run code in a real browser (Edge/Chrome)
- Test, debug, and verify changes themselves
Result: 90–99% fewer tokens, 100–1000× faster responses, and $200+ in monthly savings — while enabling AI to truly act as a full-stack engineer.
✨ Why Token Saver?
Think of your AI’s context window like a workbench. If it’s cluttered with logs, search dumps, and irrelevant snippets, the AI can’t focus.
Token Saver MCP keeps the workbench clean.
🔍 Without Token Saver: every lookup means full grep/search dumps, thousands of tokens flooding the context window, and seconds of waiting.
⚡ With Token Saver: the same question gets a precise LSP or CDP answer in tens of tokens, delivered in milliseconds.
Cleaner context = a sharper, more persistent AI assistant.
🏗️ Revolutionary Dual Architecture
Token Saver MCP uses a split architecture designed for speed and stability:
- 🏗️ VSCode Gateway Extension
- Installed once, rarely updated
- Exposes VSCode’s LSP via HTTP (port 9600)
- 🚀 Standalone MCP Server
- Hot reloadable — no VSCode restarts
- Language-agnostic (JS/TS, Python, Go, Rust…)
- Bridges MCP protocol ↔ VSCode Gateway + CDP (port 9700 by default)
Why it matters: You can iterate on MCP tools instantly without rebuilding/restarting VSCode. Development is 60× faster and much more reliable.
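To make the split concrete, here is a minimal sketch of how the standalone MCP server could forward an LSP lookup to the VSCode Gateway over HTTP. The route name and payload shape are illustrative assumptions, not the gateway's actual API; the real bridge lives in `/mcp-server/`.

```typescript
// Conceptual sketch only: the "/definition" route and request body shape are
// assumptions for illustration, not the gateway's documented API.
const GATEWAY = 'http://127.0.0.1:9600'; // VSCode Gateway extension (LSP over HTTP)

async function getDefinitionViaGateway(file: string, line: number, character: number) {
  const res = await fetch(`${GATEWAY}/definition`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ file, line, character }),
  });
  if (!res.ok) throw new Error(`Gateway request failed: ${res.status}`);
  // The MCP server relays this compact location result to the AI assistant
  // instead of a multi-thousand-token grep dump.
  return res.json();
}
```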
🧰 What You Get
Token Saver MCP currently provides 40 production-ready tools across five categories:
- LSP Tools (14) → `get_definition`, `get_references`, `rename_symbol`, `get_hover`, `find_implementations`, …
- Memory Tools (9) → `smart_resume` (86–99% token savings vs `/resume`), `write_memory`, `read_memory`, `search_memories` (full-text search), `export_memories`, `import_memories`, …
- Browser Tools (8) → `navigate_browser`, `execute_in_browser`, `take_screenshot`, `get_browser_console`, …
- Testing Helpers (5) → `test_react_component`, `test_api_endpoint`, `check_page_performance`, …
- System Tools (4) → `get_instructions`, `retrieve_buffer`, `get_supported_languages`, …
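A sample call, as a hedged sketch: the envelope is standard MCP JSON-RPC, while the argument names (`uri`, `line`, `character`) are illustrative assumptions rather than the tool's exact schema.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_definition",
    "arguments": {
      "uri": "file:///workspace/src/app.ts",
      "line": 42,
      "character": 17
    }
  }
}
```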
📚 See the full Usage Guide with JSON examples →
📊 Proven Results
| Operation | Traditional Method | With Token Saver MCP | Improvement |
|---|---|---|---|
| Find function definition | 5–10s, 5k tokens | 10ms, 50 tokens | 100× faster |
| Find all usages | 10–30s | 50ms | 200× faster |
| Rename symbol project-wide | Minutes | 100ms | 1000× faster |
| Resume context (`/resume`) | 5,000+ tokens | 200–500 tokens | 86–99% savings |
Token & Cost Savings (GPT-4 pricing):
- Tokens per search: 5,000 → 50
- Cost per search: $0.15 → $0.0015
- Typical dev workflow: $200+ saved per month
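Those per-search numbers follow directly from the assumed GPT-4 input price of $0.03 per 1,000 tokens:

```
5,000 tokens × $0.03 / 1,000 ≈ $0.15    per traditional search
   50 tokens × $0.03 / 1,000 ≈ $0.0015  per Token Saver MCP lookup
```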
🌐 Browser Control (Edge-Optimized)
Beyond backend code, Token Saver MCP empowers AI to control a real browser through CDP:
- Launch Edge/Chrome automatically
- Click, type, navigate, capture screenshots
- Run frontend tests & debug JS errors in real-time
- Analyze performance metrics
Example workflow:
- AI writes backend API (LSP tools)
- AI launches browser & tests API (CDP tools)
- AI sees error logs instantly
- AI fixes backend code (LSP tools)
- AI verifies fix in browser
➡️ No more “please test this manually” — AI tests itself.
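Steps 2 and 3 above could look roughly like the two tool calls below (a hedged sketch; the URL and argument names are illustrative assumptions, not the tools' exact schemas):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "navigate_browser",
    "arguments": { "url": "http://localhost:3000/api/health" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_browser_console",
    "arguments": {}
  }
}
```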
🧠 Smart Memory System (NEW!)
Replace wasteful `/resume` commands with intelligent context restoration.
The Problem with `/resume`
- Dumps entire conversation history (5000+ tokens)
- Includes irrelevant tangents and discussions
- Costs $0.15+ per resume
- AI gets lost in the noise
The Solution: Smart Resume
Features:
- 86-99% token savings compared to /resume
- Progressive disclosure: Start minimal, expand as needed
- Full-text search: Find memories by content, not just keys
- Importance levels (1-5): Critical info persists, trivia can be dropped
- Verbosity levels (1-4): Control detail granularity
- Time-based filtering: Resume work from specific periods
- Export/Import: Backup and share memory contexts between sessions
Example:
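The sketch below shows a plausible round trip: store a note with `write_memory`, then restore context later with `smart_resume`. The argument names (`key`, `value`, `importance`, `verbosity`) are illustrative assumptions based on the feature list above, not the tools' exact schemas.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "write_memory",
    "arguments": {
      "key": "auth-refactor-plan",
      "value": "Move token validation into middleware; tests live in tests/auth/",
      "importance": 4
    }
  }
}

{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "smart_resume",
    "arguments": { "verbosity": 1 }
  }
}
```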
Memory is stored locally in SQLite (`~/.token-saver-mcp/memory.db`) with automatic initialization.
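Because it is plain SQLite, you can peek inside with the standard `sqlite3` CLI; the table layout is not documented here, so start with `.tables`:

```bash
# Inspect the memory database directly; .tables lists whatever tables exist.
sqlite3 ~/.token-saver-mcp/memory.db '.tables'
```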
🖥️ Real-Time Dashboard
Visit `http://127.0.0.1:9700/dashboard` to monitor:
- Server status & connection health
- Request metrics & response times
- Token & cost savings accumulating live
- Tool usage statistics
Perfect for seeing your AI’s efficiency gains in action.
⚡ Quickstart (30 Seconds)
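Assuming you have cloned the repository, the quickstart is the single setup command referenced again at the end of this README:

```bash
# Run from the repository root
./mcp setup
```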
That’s it! The installer:
- Finds open ports
- Creates config files
- Tests connection
- Provides the Claude/Gemini command
➡️ Full installation & build steps: Detailed README →
🔌 Supported AI Assistants
- Claude Code → works out of the box with the MCP endpoint
- Gemini CLI → use the `/mcp-gemini` endpoint
- Other AI tools → MCP JSON-RPC, streaming, or simple REST endpoints available

Endpoints include:

- `http://127.0.0.1:9700/mcp` (standard MCP)
- `http://127.0.0.1:9700/mcp-gemini` (Gemini)
- `http://127.0.0.1:9700/mcp/simple` (REST testing)
- `http://127.0.0.1:9700/dashboard` (metrics UI)
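To poke the standard MCP endpoint by hand, a `tools/list` request like the one below should work; the JSON-RPC shape and headers follow the MCP spec, so adjust as needed for your client.

```bash
# List the available tools over the standard MCP endpoint (JSON-RPC 2.0)
curl -s http://127.0.0.1:9700/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```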
🔬 Verify It Yourself
Think the claims are too good to be true? Run the built-in test suite:
Expected output shows: hover, completions, definitions, references, diagnostics, semantic tokens, buffer management, etc. — all passing ✅
🛠️ Development
The MCP server lives in `/mcp-server/`, with modular tools organized by category (`lsp/`, `cdp/`, `helper/`, `system/`).
See Full Technical README → for architecture diagrams, tool JSON schemas, buffer system details, and contributing guide.
📍 Roadmap / Vision
Token Saver MCP already unlocks full-stack AI workflows. Next up:
- 🔧 More browser automation tools (multi-tab, network control)
- 📦 Plugin ecosystem for custom toolpacks
- 🌐 Multi-assistant coordination (Claude + Gemini + others)
- 🧠 Expanded context management strategies
📄 License
MIT — free for personal and commercial use.
👉 Start today:
- Run `./mcp setup`
- Tell your AI: “Use the get_instructions tool to understand Token Saver MCP.”
- Watch your AI become a focused, cost-efficient, full-stack developer.
📚 For in-depth details, see the Full Usage Guide & Examples → and the Detailed Technical README →.