The Orchestrator MCP is an intelligent server that coordinates multiple MCP servers and provides AI-enhanced workflow automation with production-ready context analysis capabilities.
AI-Powered Workflow Automation: Process complex requests in natural language via the ai_process tool, which automatically selects, coordinates, and executes multi-step workflows across connected server tools with intelligent routing and result synthesis.
Deep Code & Context Analysis: Analyze large codebases (50K+ characters, supporting 1M+ tokens) with 95% confidence, identify real vs placeholder implementations, map file relationships, assess code quality, and provide architectural insights.
Multi-Server Orchestration: Connect and manage 6+ specialized MCP servers simultaneously including filesystem, git, knowledge graph memory, sequential thinking, browser automation (Puppeteer), and privacy-focused web search (DuckDuckGo).
File & Development Operations: Read, write, and search files; review git history and repository status; find and organize TODO comments; analyze project structures and version control patterns.
Web Research & Automation: Perform DuckDuckGo searches, fetch and analyze web content, automate browser tasks (screenshots, scraping), and integrate findings into workflows.
System Introspection: Use get_info to discover available capabilities, connected servers, and tool inventory; use ai_status to check AI orchestration health, model configuration, and system readiness.
Production-Ready Features: Multi-runtime support (npm, uvx, Python), performance optimized execution (30s for complex analysis), graceful fallback when AI unavailable, and universal tool access through the primary interface.
Provides web search capabilities for retrieving current information through DuckDuckGo's search engine
Supports loading environment variables from .env files for local development and testing
Provides Git repository tools and operations, including repository management, status checking, and commit history analysis
Official GitHub API integration for repository management and operations, requires a GitHub Personal Access Token (GITHUB_TOKEN)
Leverages Gemini's large token context capabilities (1M+ tokens) for extensive context analysis
Enables browser automation for web interactions and testing as an alternative automation solution
Offers Slack integration capabilities when configured with appropriate Slack Bot and App tokens
Orchestrator MCP
An intelligent MCP (Model Context Protocol) server that orchestrates multiple MCP servers and provides AI-enhanced workflow automation with production-ready context engine capabilities.
š Features
Core Orchestration
Multi-Server Orchestration: Connect to multiple MCP servers simultaneously
Universal Compatibility: Works with npm, uvx, Python, and other MCP server types
Server Management: Dynamic server discovery and health monitoring
Scalable Architecture: Easy to add new servers and capabilities
š§ AI Enhancement Layer
Intelligent Tool Routing: AI analyzes requests and selects optimal tools
Workflow Automation: Multi-step processes orchestrated automatically
Intent Understanding: Natural language request analysis and planning
Context Synthesis: Combines results from multiple tools into coherent responses
Result Enhancement: AI improves and formats outputs for better user experience
šÆ Context Engine (PRODUCTION READY!)
Large Context Analysis: Process 50K+ characters using Gemini's 1M+ token context
Intelligent Code Understanding: AI-powered codebase analysis with 95% confidence
Real-time File Discovery: Dynamic file loading and relationship mapping
Quality Assessment: Identify placeholder vs real implementations
Performance Optimized: 30s execution time for complex analysis
Built-in Capabilities
Web Search: DuckDuckGo search for current information
Fallback Mode: Graceful degradation when AI is not available
Related MCP server: Kiro MCP Memory
Current Status
š PRODUCTION READY - Context Engine Complete!
ā Context Engine: 85.7% quality score, 95% analysis confidence ā AI Enhancement Layer: Complete with intelligent routing and workflow automation ā Multi-Server Orchestration: 6/6 MCP servers connected and functional
š Quick Start
Install dependencies:
npm installBuild the project:
npm run buildConfigure in your MCP client (e.g., Claude Desktop, VS Code):
See the example configuration files in the
examples/directory:examples/claude-desktop-config.json- For Claude Desktopexamples/vscode-mcp.json- For VS Code
Start using the orchestrator through your MCP client!
MCP Integration
For Stdio MCP Server:
Name:
Orchestrator MCPCommand:
nodeArguments:
/path/to/orchestrator-mcp/dist/index.js
For Development:
Command:
npxArguments:
orchestrator-mcp(after publishing to npm)
š ļø Available Tools
Core AI Enhancement Tools
The orchestrator exposes a minimal set of tools focused on unique capabilities that enhance AI assistants:
ai_process- Primary Interface - Process requests using AI orchestration with intelligent tool selectionget_info- System introspection - Get information about connected servers and available capabilitiesai_status- Health monitoring - Get the status of AI orchestration capabilities
Connected Server Tools
All tools from connected MCP servers are automatically available through AI orchestration:
Filesystem operations (read, write, search files)
Git operations (repository management, status, history)
Memory system (knowledge graph storage)
Web search (DuckDuckGo search for current information)
Browser automation (Puppeteer for web scraping and automation)
Sequential thinking (Dynamic problem-solving through thought sequences)
And more...
š Connected Servers
Currently enabled servers:
filesystem (npm) - File operations with secure access controls
sequential-thinking (npm) - Dynamic problem-solving through thought sequences
git (uvx) - Git repository tools and operations
memory (npm) - Knowledge graph-based persistent memory
puppeteer (npm) - Browser automation and web scraping
duckduckgo-search (npm) - Privacy-focused web search
š¤ AI Configuration
To enable AI features, you need an OpenRouter API key. Additional API keys can be configured for enhanced integrations:
Required for AI features: Get an API key from OpenRouter
Optional integrations: None currently required - all enabled servers work without additional API keys
Configure the API keys in your MCP client settings:
For Claude Desktop (
~/.claude_desktop_config.json):{ "mcpServers": { "Orchestrator MCP": { "command": "node", "args": ["/path/to/project/dist/index.js"], "env": { "OPENROUTER_API_KEY": "your_api_key_here", "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet", "OPENROUTER_MAX_TOKENS": "2000", "OPENROUTER_TEMPERATURE": "0.7" } } } }For VS Code (
.vscode/mcp.json):{ "inputs": [ { "type": "promptString", "id": "openrouter-key", "description": "OpenRouter API Key", "password": true } ], "servers": { "Orchestrator MCP": { "type": "stdio", "command": "node", "args": ["/path/to/project/dist/index.js"], "env": { "OPENROUTER_API_KEY": "${input:openrouter-key}", "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet", "OPENROUTER_MAX_TOKENS": "2000", "OPENROUTER_TEMPERATURE": "0.7" } } } }
AI Models Supported
The orchestrator works with any model available on OpenRouter, including:
Anthropic Claude (recommended)
OpenAI GPT models
Meta Llama models
Google Gemini models
And many more!
š Usage Examples
šÆ Context Engine (Production Ready!)
Primary AI Interface
System Introspection
AI-Enhanced Workflows
The ai_process tool can handle complex requests like:
"Analyze my project structure and suggest improvements"
"Find recent commits and create a summary"
"Search for TODO comments and organize them by priority"
"Take a screenshot of the homepage and analyze its performance"
šļø Architecture
Multi-Runtime Support
The orchestrator uses a registry-based architecture supporting:
npm servers: TypeScript/JavaScript servers via npx
uvx servers: Python servers via uvx
Built-in tools: Native orchestrator capabilities
AI Enhancement Layer
āļø Configuration
Server Configuration
Server configurations are managed in src/orchestrator/server-configs.ts. Each server includes:
Runtime environment (npm, uvx, python, etc.)
Command and arguments
Environment requirements
Enable/disable status
Development phase assignment
Environment Variables
All environment variables are configured through your MCP client settings. The following variables are supported:
AI Configuration (OpenRouter):
OPENROUTER_API_KEY(required for AI features) - Your OpenRouter API keyOPENROUTER_DEFAULT_MODEL(optional) - Default model to use (default: "anthropic/claude-3.5-sonnet")OPENROUTER_MAX_TOKENS(optional) - Maximum tokens per request (default: "2000")OPENROUTER_TEMPERATURE(optional) - Temperature for AI responses (default: "0.7")
MCP Server Integrations: All currently enabled MCP servers work without additional API keys or configuration.
š§ Development
Scripts
npm run build- Build the projectnpm run dev- Watch mode for development (TypeScript compilation)npm run start- Start the server (for MCP client use)npm run start:dev- Start with .env file support (for local development/testing)npm test- Run tests (when available)
Local Development
For local development and testing, you can use the development script that loads environment variables from a .env file:
Copy the example environment file:
cp .env.example .envEdit
.envwith your actual API keysRun the development server:
npm run start:dev
Note: The regular npm start command is intended for MCP client use and expects environment variables to be provided by the MCP client configuration.
š License
MIT