Ambiance MCP Server
Intelligent code context and analysis for modern IDEs
This is an MCP (Model Context Protocol) server that provides intelligent code context through semantic analysis, AST parsing, and token-efficient compression. Add OpenAI-compatible API keys to unlock AI-powered summarization and analysis tools. Core functionality works completely offline - no internet required for basic use.
Stop wasting time with manual file exploration and context switching. This tool gives AI assistants instant, accurate understanding of your codebase through intelligent semantic search, eliminating the need for endless file reads and grep searches. Get 60-80% better token efficiency while maintaining full semantic understanding of your projects.
🤖 What is MCP?
Model Context Protocol (MCP) enables AI assistants to understand your codebase contextually. Instead of copying/pasting code, MCP servers provide structured access to your project's files, symbols, and relationships.
Ambiance MCP excels at:
60-80% token reduction through semantic compaction
Multi-language analysis (TypeScript, JavaScript, Python, Go, Rust, Java)
Intelligent ranking based on relevance, recency, and importance
Progressive enhancement from local-only to AI-powered to cloud-integrated
✨ Key Features
🧠 Multi-tier Intelligence: Local → OpenAI → Cloud service with graceful fallbacks
🔧 Semantic Compaction: 60-80% token reduction while preserving code meaning
🚀 Zero Dependencies: Core functionality works completely offline
🔍 Multi-Language Support: TypeScript, JavaScript, Python, Go, Rust, and more
📊 Project Analysis: Smart architecture detection and navigation hints
🛡️ Production Ready: Enterprise-grade error handling and structured logging
🎯 Release Scope
✅ Included in Release (v0.1.0)
Core Local Tools (No API Keys Required):
- `local_context` - Semantic code compaction with token-efficient compression (60-80% reduction)
- `local_project_hints` - Project navigation with architecture detection
- `local_file_summary` - AST-based file analysis and symbol extraction
- `frontend_insights` - Comprehensive Next.js/React frontend analysis
- `workspace_config` - Embedding management and workspace setup
- `local_debug_context` - Error analysis and debugging assistance
- `ast_grep_search` - Structural code pattern searching
Enhanced AI Tools (OpenAI API Key Required):
- `ai_get_context` - AI-optimized context with recursive analysis
- `ai_project_insights` - Enhanced project analysis with AI insights
- `ai_code_explanation` - Detailed code documentation and explanations
Cloud Tools (Ambiance API Key Required):
- `ambiance_search_github_repos` - Search code within indexed GitHub repositories
- `ambiance_list_github_repos` - List available GitHub repositories
- `ambiance_get_context` - Get structured context from GitHub repositories
- `ambiance_get_graph_context` - Graph-based repository context analysis
🔮 Future Releases
Enhanced Features: LMDB storage, incremental parsing, local semantic index
Advanced Tools: Profile/approval enforcement and diagnostics tooling
Integration: ACP bridge and @-mention resolver
Cloud Expansion: Enhanced uploaded project handlers and search capabilities
🚀 Quick Start (5 Minutes)
1. Install & Build
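A minimal sketch, assuming you are working from a local clone of the repository:

```bash
# Install dependencies, then compile the TypeScript sources
npm install
npm run build
```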
2. Configure Your IDE
- Recommended Starting Setup: local tools with embeddings
- Minimum Setup: local and AI tools (summarization), with embeddings for semantic search
- Basic Setup: local tools without embeddings (no semantic search)
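A minimal sketch of an MCP server entry for the recommended setup; the server path below is illustrative, so point it at your built entry file, and add `OPENAI_API_KEY` to the `env` map for the AI-enabled setup:

```json
{
  "mcpServers": {
    "ambiance": {
      "command": "node",
      "args": ["/absolute/path/to/ambiance-mcp/dist/index.js"],
      "env": {
        "USE_LOCAL_EMBEDDINGS": "true"
      }
    }
  }
}
```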
3. Start Using
Feature Tiers (based on your setup):
- 🚀 Local Embeddings (`USE_LOCAL_EMBEDDINGS=true`): Cost-effective, offline-ready
- 🤖 AI Enhancement (`OPENAI_API_KEY`): Intelligent context analysis
- ☁️ Cloud Features: Coming soon - GitHub repository integration
That's it! Ambiance automatically enables features based on your environment variables.
🔧 Configuration Options
Environment Variables
| Variable | Purpose | Required | Default |
| --- | --- | --- | --- |
| Workspace path (see reference below) | Project workspace path | ✅ | Auto-detected |
| `OPENAI_API_KEY` | AI-enhanced tools | ❌ | - |
| `AMBIANCE_API_KEY` | Cloud features | ❌ | - |
| `USE_LOCAL_EMBEDDINGS` | Local embedding storage | ❌ | `false` |
Enhanced Features (Optional)
AI Enhancement
Add OpenAI API key for intelligent context analysis:
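One way to supply the key is through the shell environment the server is launched from (it can equally go in your IDE's `env` block):

```bash
export OPENAI_API_KEY="sk-..."   # your OpenAI or OpenAI-compatible key
```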
Cloud Integration
Add Ambiance API key for GitHub repository access:
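For example:

```bash
export AMBIANCE_API_KEY="your-ambiance-key"
```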
Local Embeddings
Enable cost-effective local embedding storage:
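Both variables appear in the reference table below; `all-MiniLM-L6-v2` is the example model named later in this README:

```bash
export USE_LOCAL_EMBEDDINGS=true
export LOCAL_EMBEDDING_MODEL=all-MiniLM-L6-v2   # local Transformers.js model
```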
🛠️ Available Tools
Core Tools (Always Available)
| Tool | Purpose | API Keys |
| --- | --- | --- |
| `local_context` | Semantic code compaction (60-80% reduction) | None |
| `local_project_hints` | Project navigation & architecture detection | None |
| `local_file_summary` | AST-based file analysis | None |
| `workspace_config` | Embedding management & setup | None |
| `local_debug_context` | Error analysis & debugging | None |
AI-Enhanced Tools (OpenAI API Required)
| Tool | Purpose | Enhancement |
| --- | --- | --- |
| `ai_get_context` | Intelligent context analysis | AI optimization |
| `ai_project_insights` | Enhanced project insights | AI-powered analysis |
| `ai_code_explanation` | Detailed code documentation | AI explanations |
Cloud Tools (Ambiance API Required)
| Tool | Purpose | Features |
| --- | --- | --- |
| `ambiance_search_github_repos` | Search GitHub repositories | Cloud indexing |
| `ambiance_list_github_repos` | List available repositories | Repository management |
| `ambiance_get_context` | GitHub repository context | Cloud context |
| `ambiance_get_graph_context` | Graph-based analysis | Advanced relationships |
Example Usage
Project Navigation
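An illustrative prompt to your assistant (the phrasing is not prescribed by the tool):

```
Use local_project_hints to give me an overview of this project's architecture and main entry points.
```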
File Analysis
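For example (the file path is hypothetical):

```
Run local_file_summary on src/index.ts and list the exported symbols and their roles.
```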
AI-Enhanced Analysis (OpenAI API Required)
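For example:

```
Use ai_code_explanation to generate detailed documentation for the request-handling code in this project.
```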
GitHub Repository Analysis (Ambiance API Required)
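For example (the repository must already be indexed in Ambiance cloud):

```
Use ambiance_search_github_repos to find where rate limiting is implemented across our indexed repositories.
```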
📊 Performance & Security
60-80% token reduction through semantic compaction
Multi-language support: TypeScript, JavaScript, Python, Go, Rust, Java
Enterprise security: Input validation, path traversal protection
Memory efficient: ~50MB peak during processing
Fast processing: 2-5 seconds for typical projects
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| **OpenAI Integration** | | | |
| `OPENAI_API_KEY` | Required for OpenAI tools | - | OpenAI API key |
| … | Optional | … | OpenAI-compatible API endpoint |
| … | Optional | … | Primary model for analysis tasks |
| … | Optional | … | Faster model for hints/summaries |
| … | Optional | … | Model for generating embeddings |
| … | Optional | - | OpenAI organization ID |
| … | Optional | … | Provider selection |
| **Ambiance Cloud Service** | | | |
| `AMBIANCE_API_KEY` | Required for cloud tools | - | Ambiance cloud API key |
| … | Optional | … | Ambiance cloud API URL |
| … | Optional | - | Device identification token |
| **Local Server** | | | |
| … | Optional | - | Use local Ambiance server instead of cloud |
| **Local Storage** | | | |
| `USE_LOCAL_EMBEDDINGS` | Optional | `false` | Enable local embedding storage |
| `LOCAL_EMBEDDING_MODEL` | Optional | … | Local embedding model; when set with `USE_LOCAL_EMBEDDINGS=true`, overrides cloud providers for cost-effective local embeddings |
| … | Optional | … | Custom local storage path |
| … | Optional | … | Number of texts per embedding batch |
| … | Optional | … | Enable parallel embedding generation |
| … | Optional | … | Max concurrent API calls for parallel mode |
| … | Optional | … | Max retries for rate limit errors |
| … | Optional | … | Base delay for rate limit retries (ms) |
| **Workspace** | | | |
| … | Critical for Cursor | Auto-detected | Project workspace path |
| … | Optional | Current directory | Override working directory |
| **Development** | | | |
| … | Optional | … | Enable debug logging |
| … | Optional | - | Affects logging behavior |
Configuration Tiers
Tier 1: Local Only (No API keys required)
✅ `local_context`, `local_project_hints`, `local_file_summary`
✅ Works completely offline
✅ 60-80% semantic compression
✅ Cost-effective local embeddings with `USE_LOCAL_EMBEDDINGS=true`
Tier 2: Enhanced (OpenAI API key)
✅ All Tier 1 tools
✅ `ambiance_get_context` with AI optimization
✅ Enhanced `ambiance_project_hints`
✅ High-performance parallel embedding generation
✅ OpenAI embeddings (when not using local embeddings)
Tier 3: Full Cloud (Both API keys)
✅ All previous tools
✅ `ambiance_setup_project`, `ambiance_project_status`
✅ Team collaboration features
Embedding Provider Priority
The system intelligently selects embedding providers based on your configuration:
1. Local Priority (when `USE_LOCAL_EMBEDDINGS=true` and `LOCAL_EMBEDDING_MODEL` is set)
   - Uses cost-effective local Transformers.js models like `all-MiniLM-L6-v2`
   - Works completely offline
   - Overrides cloud providers when explicitly configured
2. Cloud Priority (when `AMBIANCE_API_KEY` is available)
   - Uses high-performance cloud embeddings (voyage-context-3)
   - Requires an internet connection
3. OpenAI Fallback (when `OPENAI_API_KEY` is available)
   - Uses OpenAI embeddings (text-embedding-3-small)
   - Falls back when cloud services are unavailable
4. Pure Local (no API keys)
   - Uses local Transformers.js models
   - Completely offline operation
Performance Optimization
Parallel Embedding Generation with Smart Rate Limiting
For large projects, you can significantly speed up embedding generation using parallel processing with intelligent rate limit handling:
Example for large project:
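A sketch using hypothetical variable names, shown for illustration only; the project's actual variable names may differ:

```bash
# Hypothetical names for illustration only
export EMBEDDING_PARALLEL=true          # enable parallel embedding generation
export EMBEDDING_MAX_CONCURRENCY=8      # max concurrent API calls
export EMBEDDING_BATCH_SIZE=64          # texts per embedding batch
```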
Rate Limit Behavior:
⏳ Rate Limit Hit: Retries with exponential backoff (1s, 2s, 4s, 8s, 16s)
📉 Multiple Hits: Automatically reduces concurrency by half
🔄 Recovery: Gradually increases concurrency after 1 minute
🚫 Fallback: Only switches to local embeddings for permanent failures (not rate limits)
Monitor your OpenAI usage dashboard to ensure you stay within rate limits.
Environment Variables for MCP Server
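When running under an IDE, pass these through the MCP server entry rather than your shell profile; a hedged sketch of the `env` portion of that entry:

```json
{
  "env": {
    "OPENAI_API_KEY": "sk-...",
    "USE_LOCAL_EMBEDDINGS": "true"
  }
}
```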
Common Issues
"No tools available"
- Ensure the server is built: `npm run build`
- Check that file paths are absolute in the IDE configuration
- Verify Node.js version >= 18.0.0
"Tool execution failed"
- Check server logs for detailed error messages
- Ensure the project path exists and is readable
- For OpenAI tools, verify the API key is valid
- For Ambiance cloud tools, check that `AMBIANCE_API_KEY` is set; register on the Ambiance website to obtain a key
Debugging Server Issues
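A hedged sketch for running the built server directly with verbose logging; the entry path and the debug variable name are assumptions (see the environment reference above):

```bash
DEBUG=true node dist/index.js   # assumed debug variable and entry point
```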
🏗️ Development
Building from Source
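A minimal sketch (the repository URL is a placeholder for your fork or the upstream repo):

```bash
git clone https://github.com/<owner>/ambiance-mcp.git   # placeholder URL
cd ambiance-mcp
npm install
npm run build
npm test
```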
Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes following our coding standards
4. Add tests for new functionality
5. Ensure all tests pass: `npm test`
6. Run performance benchmarks: `npm run benchmark`
7. Commit your changes: `git commit -m 'Add amazing feature'`
8. Push to the branch: `git push origin feature/amazing-feature`
9. Open a Pull Request
Code Quality Standards
✅ TypeScript with strict mode
✅ Comprehensive error handling
✅ Structured logging (no console.log)
✅ >85% test coverage target
✅ Performance benchmarking
🔒 Security
✅ Input validation and sanitization
✅ Path traversal protection
✅ No sensitive data logging
✅ Secure file operations only
✅ API key handling best practices
📄 License
MIT License - see LICENSE file for details.