Claude Code AI Collaboration MCP Server
A powerful Model Context Protocol (MCP) server that enables AI collaboration through multiple providers with advanced strategies and comprehensive tooling.
Features
Multi-Provider AI Integration
DeepSeek: Primary provider with optimized performance
OpenAI: GPT models integration
Anthropic: Claude models support
O3: Next-generation model support
Advanced Collaboration Strategies
Parallel: Execute requests across multiple providers simultaneously
Sequential: Chain provider responses for iterative improvement
Consensus: Build agreement through multiple provider opinions
Iterative: Refine responses through multiple rounds
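To illustrate the consensus idea, here is a minimal majority-vote sketch in TypeScript. It is a generic illustration of the concept, not the project's actual strategy implementation:

```typescript
// Generic consensus sketch: collect one answer per provider and pick
// the majority. Function and variable names are illustrative assumptions.
function consensus(responses: string[]): string {
  if (responses.length === 0) throw new Error("no responses to vote on");
  const counts = new Map<string, number>();
  for (const r of responses) counts.set(r, (counts.get(r) ?? 0) + 1);
  let best = responses[0];
  let bestCount = 0;
  for (const [text, n] of counts) {
    if (n > bestCount) {
      best = text;
      bestCount = n;
    }
  }
  return best;
}
```

A real strategy would compare semantically similar answers rather than exact strings, but the voting skeleton is the same.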
Comprehensive MCP Tools
collaborate: Multi-provider collaboration with strategy selection
review: Content analysis and quality assessment
compare: Side-by-side comparison of multiple items
refine: Iterative content improvement
Enterprise Features
Caching: Memory and Redis-compatible caching system
Metrics: OpenTelemetry-compatible performance monitoring
Search: Full-text search with inverted indexing
Synthesis: Intelligent response aggregation
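The inverted-index approach behind the search feature can be sketched generically in TypeScript (this is an illustration of the data structure, not the project's code):

```typescript
// Inverted index sketch: maps each token to the set of document ids
// that contain it, so lookups avoid scanning every document.
class InvertedIndex {
  private index = new Map<string, Set<number>>();

  add(id: number, text: string): void {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!this.index.has(token)) this.index.set(token, new Set());
      this.index.get(token)!.add(id);
    }
  }

  search(token: string): number[] {
    return [...(this.index.get(token.toLowerCase()) ?? [])];
  }
}
```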
Quick Start
New to MCP? Check out our Quick Start Guide for a 5-minute setup!
Prerequisites
Node.js 18.0.0 or higher
pnpm 8.0.0 or higher
TypeScript 5.3.0 or higher
Installation
Configuration
Environment Variables:
```shell
# Required: set your API keys
export DEEPSEEK_API_KEY="your-deepseek-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Optional: configure other settings
export MCP_DEFAULT_PROVIDER="deepseek"
export MCP_PROTOCOL="stdio"
```

Configuration Files:
config/default.yaml: Default configuration
config/development.yaml: Development settings
config/production.yaml: Production settings
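As a rough sketch of what config/default.yaml might contain (all keys below are illustrative assumptions; config/schema.json is the authoritative schema):

```yaml
# Illustrative only -- validate real files against config/schema.json
server:
  name: ai-collab-mcp
  protocol: stdio
providers:
  deepseek:
    apiKeyEnv: DEEPSEEK_API_KEY
  openai:
    apiKeyEnv: OPENAI_API_KEY
cache:
  backend: memory
  ttlSeconds: 3600
logging:
  level: info
```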
Running the Server
Claude Code Integration
Connecting to Claude Code
To use this MCP server with Claude Code, you need to configure Claude Code to recognize and connect to your server.
1. Automated Setup (Recommended)
Use the automated setup script for easy configuration:
The setup script will:
Build the MCP server
Create Claude Code configuration file
Test the server connection
Provide next steps
1b. Manual Setup
If you prefer manual setup:
2. Configure Claude Code
Create or update the Claude Code configuration file:
Note: There are two server options:
simple-server.js: Simple implementation with DeepSeek only (recommended for testing)
index.js: Full implementation with all providers and features
macOS/Linux:
Windows:
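A minimal claude_desktop_config.json entry might look like the sketch below; the server key, install path, and env values are placeholders to adapt, not the project's documented settings. Point args at simple-server.js instead of index.js for a DeepSeek-only test run.

```json
{
  "mcpServers": {
    "ai-collab": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your-deepseek-api-key",
        "MCP_PROTOCOL": "stdio"
      }
    }
  }
}
```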
3. Configuration Options
4. Available Tools in Claude Code
After restarting Claude Code, you'll have access to these powerful tools:
collaborate - Multi-provider AI collaboration
review - Content analysis and quality assessment
compare - Side-by-side comparison of multiple items
refine - Iterative content improvement
5. Usage Examples in Claude Code
6. Troubleshooting
Check MCP server connectivity:
View logs:
Verify Claude Code configuration:
Restart Claude Code completely
In a new conversation, ask "What tools are available?"
You should see the four MCP tools listed
Test with a simple command like "Use collaborate to say hello"
7. Configuration File Locations
macOS: ~/.config/claude-code/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/claude-code/claude_desktop_config.json
Usage
MCP Tools
Collaborate Tool
Execute multi-provider collaboration with strategy selection:
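A collaborate call might carry arguments like the following sketch (the parameter names are assumptions; check the tool's input schema for the real ones):

```json
{
  "name": "collaborate",
  "arguments": {
    "prompt": "Review this function for race conditions",
    "strategy": "parallel",
    "providers": ["deepseek", "openai"]
  }
}
```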
Review Tool
Analyze content quality and provide detailed feedback:
Compare Tool
Compare multiple items with detailed analysis:
Refine Tool
Iteratively improve content quality:
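A refine call might look like this sketch (again, the argument names are assumptions, not the documented schema):

```json
{
  "name": "refine",
  "arguments": {
    "content": "Initial draft text to improve",
    "iterations": 2,
    "focus": "clarity"
  }
}
```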
Available Resources
collaboration_history: Access past collaboration results
provider_stats: Monitor provider performance metrics
tool_usage: Track tool utilization statistics
Architecture
Core Components
Design Principles
Dependency Injection: Clean architecture with InversifyJS
Strategy Pattern: Pluggable collaboration strategies
Provider Abstraction: Unified interface for different AI services
Performance: Efficient caching and rate limiting
Observability: Comprehensive metrics and logging
Extensibility: Easy to add new providers and strategies
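The provider-abstraction and parallel-strategy principles above can be sketched in TypeScript; every name here is an assumption for illustration, not the project's real types:

```typescript
// Unified provider interface: each AI service hides behind the same contract.
interface AIProvider {
  readonly name: string;
  complete(prompt: string): Promise<string>;
}

// A stub provider standing in for a real API client.
class StubProvider implements AIProvider {
  constructor(readonly name: string, private reply: string) {}
  async complete(_prompt: string): Promise<string> {
    return this.reply;
  }
}

// Parallel strategy: fan the same prompt out to every provider at once
// and gather the responses for comparison or synthesis.
async function parallel(providers: AIProvider[], prompt: string) {
  return Promise.all(
    providers.map(async p => ({ provider: p.name, text: await p.complete(prompt) }))
  );
}
```

Adding a new provider means implementing AIProvider; adding a new strategy means writing another function over the same interface, which is the extensibility the design principles describe.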
Configuration
Configuration Schema
The server uses YAML configuration files with JSON Schema validation. See config/schema.json for the complete schema.
Key Configuration Sections
Server: Basic server settings (name, version, protocol)
Providers: AI provider configurations and credentials
Strategies: Strategy-specific settings and timeouts
Cache: Caching behavior (memory, Redis, file)
Metrics: Performance monitoring settings
Logging: Log levels and output configuration
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| DEEPSEEK_API_KEY | DeepSeek API key | Required |
| OPENAI_API_KEY | OpenAI API key | Optional |
| ANTHROPIC_API_KEY | Anthropic API key | Optional |
| O3_API_KEY | O3 API key (defaults to OPENAI_API_KEY) | Optional |
| MCP_PROTOCOL | Transport protocol | stdio |
| MCP_DEFAULT_PROVIDER | Default AI provider | deepseek |
| NODE_ENV | Environment mode | |
| LOG_LEVEL | Logging level | |
Monitoring & Metrics
Built-in Metrics
Request Metrics: Response times, success rates, error counts
Provider Metrics: Individual provider performance
Tool Metrics: Usage statistics per MCP tool
Cache Metrics: Hit rates, memory usage
System Metrics: CPU, memory, and resource utilization
OpenTelemetry Integration
The server supports OpenTelemetry for distributed tracing and metrics collection:
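A metrics section enabling an OTLP exporter might resemble this sketch (the keys are assumptions; 4318 is the standard OTLP/HTTP port, and the exact schema may differ):

```yaml
# Illustrative only -- actual configuration keys may differ
metrics:
  enabled: true
  exporter: otlp
  endpoint: http://localhost:4318/v1/metrics
  intervalSeconds: 60
```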
Testing
Test Coverage
Unit Tests: 95+ individual component tests
Integration Tests: End-to-end MCP protocol testing
E2E Tests: Complete workflow validation
API Tests: Direct provider API validation
Running Tests
Deployment
Docker
Production Considerations
Load Balancing: Multiple server instances for high availability
Caching: Redis for distributed caching
Monitoring: Prometheus/Grafana for metrics visualization
Security: API key rotation and rate limiting
Backup: Regular configuration and data backups
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Development Setup
Roadmap
Version 1.1
GraphQL API support
WebSocket transport protocol
Advanced caching strategies
Custom strategy plugins
Version 1.2
Multi-tenant support
Enhanced security features
Performance optimizations
Additional AI providers
Version 2.0
Distributed architecture
Advanced workflow orchestration
Machine learning optimization
Enterprise SSO integration
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
Documentation: Wiki
Issues: GitHub Issues
Discussions: GitHub Discussions
Acknowledgments
Model Context Protocol for the foundational protocol
InversifyJS for dependency injection
TypeScript for type safety
All AI provider APIs for enabling collaboration
Built with ❤️ by the Claude Code AI Collaboration Team