# Ambiance MCP Server

> Intelligent code context and analysis for modern IDEs
An MCP server that provides intelligent code context through semantic analysis, AST parsing, and token-efficient compression, delivering 60-80% token reduction while maintaining full semantic understanding of your codebase.
## 🚀 Quick Start
### 1. Install
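A typical installation via npm; the package name `ambiance-mcp` is an assumption - check the GitHub repository for the published name:

```shell
# Install globally so the ambiance-mcp binary is on your PATH
# (package name is an assumption -- verify against the repository)
npm install -g ambiance-mcp
```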
### 2. Create Embeddings (Recommended)
Navigate to your project directory and create embeddings for enhanced context analysis:
This step generates local embeddings that enable semantic search and improve context analysis. The process may take 2-10 minutes depending on project size.
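For example (the `embeddings status` subcommand appears later in this README; an `embeddings create` subcommand is an assumption - consult `--help` for the exact verb):

```shell
# Run from your project root
cd /path/to/your/project
# Kick off local embedding generation (subcommand name is an assumption)
ambiance-mcp embeddings create
```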
### 3. Configure Your IDE

**Windows:**

**macOS/Linux:**
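The platform-specific snippets differ mainly in paths. A generic MCP client configuration sketch - the server key, command name, and overall layout are assumptions modeled on common MCP client setups:

```json
{
  "mcpServers": {
    "ambiance": {
      "command": "ambiance-mcp",
      "env": {
        "USE_LOCAL_EMBEDDINGS": "true",
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```

On Windows, use the full path to the executable if it is not on `PATH`.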
### 4. Start Using

That's it! Ambiance automatically enables features based on your environment variables:

- 🚀 **Local Embeddings** (`USE_LOCAL_EMBEDDINGS=true`): Cost-effective, offline-ready
- 🤖 **AI Enhancement** (`OPENAI_API_KEY`): Intelligent context analysis
- ☁️ **Cloud Features** (`AMBIANCE_API_KEY`): GitHub repository integration
## ✨ Key Features

- 🧠 **60-80% token reduction** through semantic compaction
- 🔍 **Multi-language support** (TypeScript, JavaScript, Python, Go, Rust)
- 🚀 **Works completely offline** - no internet required for core functionality
- 🎯 **Intelligent context analysis** with AI enhancement options
- 📊 **Project structure understanding** and navigation hints
## 🔧 Configuration

### Environment Variables

| Variable | Purpose | Required | Default |
|----------|---------|----------|---------|
| *(workspace variable)* | Project workspace path | ✅ | Auto-detected |
| `OPENAI_API_KEY` | AI-enhanced tools | ❌ | - |
| `AMBIANCE_API_KEY` | Cloud features | ❌ | - |
| *(embedding storage variable)* | Local embedding storage | ❌ | - |
### Enhanced Features (Optional)

**AI Enhancement:** set `OPENAI_API_KEY`.

**Cloud Integration:** set `AMBIANCE_API_KEY`.

**Local Embeddings:** set `USE_LOCAL_EMBEDDINGS=true`.
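For example, the three optional feature sets map to environment variables named in this README (values below are placeholders):

```shell
# AI enhancement: unlocks the ai_* tools
export OPENAI_API_KEY="sk-your-key-here"

# Cloud integration: unlocks the ambiance_* GitHub tools
export AMBIANCE_API_KEY="your-ambiance-key"

# Local embeddings: cost-effective, offline-ready semantic search
export USE_LOCAL_EMBEDDINGS=true
```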
### How Embeddings Work

**First-Time Usage:**

- Embeddings are generated automatically in the background the first time you use an embedding-enhanced tool such as `local_context` (when `USE_LOCAL_EMBEDDINGS=true`)
- Tools return results immediately using AST analysis while embeddings generate in the background
- Subsequent queries benefit from the generated embeddings for enhanced context-similarity search
**Ongoing Updates:**

- A file watcher monitors your project for changes (3-minute debounce)
- Only modified files have their embeddings updated
- Incremental updates keep embeddings current without full re-indexing
**Manual Control:**

Use the `manage_embeddings` tool for fine-grained control.
**Progress Monitoring:**

- Use `ambiance-mcp embeddings status` to check whether generation is in progress
- Shows real-time progress: files processed, estimated time remaining
- Displays elapsed time and completion percentage
## 🛠️ Available Tools

### Core Tools (Always Available)

- `local_context` - Semantic code compaction (60-80% reduction)
- `local_project_hints` - Project navigation & architecture detection
- `local_file_summary` - AST-based file analysis
- `manage_embeddings` - Workspace & embedding management (replaces `workspace_config`)
- `local_debug_context` - Error analysis & debugging

### AI-Enhanced Tools (OpenAI API Required)

- `ai_get_context` - Intelligent context analysis
- `ai_project_hints` - Enhanced project insights
- `ai_code_explanation` - Detailed code documentation

### Cloud Tools (Ambiance API Required)

- `ambiance_search_github_repos` - Search GitHub repositories
- `ambiance_list_github_repos` - List available repositories
- `ambiance_get_context` - GitHub repository context
## 🖥️ Command Line Interface

Ambiance MCP includes a comprehensive CLI for direct tool execution - ideal for development, testing, and standalone usage without an MCP client.

### CLI Tools (No API Keys Required)

All local tools are available via the CLI with no external dependencies:
CLI commands cover:

- Semantic code compaction and context generation
- Project structure analysis and navigation hints
- Individual file analysis and symbol extraction
- Frontend code pattern analysis
- Debug context analysis from error logs
- AST-based structural code search
- Embedding management and workspace configuration
### Global Options

Global options let you:

- Set the project directory
- Choose the output format (json, structured, compact)
- Write output to a file
- Enable verbose output
### CLI Examples
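Only the `embeddings status` invocation is confirmed by this README; treat anything else as a sketch to verify against the CLI's own help output:

```shell
# Check embedding generation progress (command shown earlier in this README)
ambiance-mcp embeddings status
```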
## 📖 Documentation
For detailed help and configuration options, run:
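Assuming a conventional `--help` flag:

```shell
# List available commands and global options
ambiance-mcp --help
```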
For source code and contributions, visit: https://github.com/sbarron/AmbianceMCP
## 📄 License
MIT License - see LICENSE file for details.