🦆 MCP Rubber Duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
Features
- 🔌 Universal OpenAI Compatibility: Works with any OpenAI-compatible API endpoint
- 🦆 Multiple Ducks: Configure and query multiple LLM providers simultaneously
- 💬 Conversation Management: Maintain context across multiple messages
- 🏛️ Duck Council: Get responses from all your configured LLMs at once
- 💾 Response Caching: Avoid duplicate API calls with intelligent caching
- 🔄 Automatic Failover: Falls back to other providers if primary fails
- 📊 Health Monitoring: Real-time health checks for all providers
- 🎨 Fun Duck Theme: Rubber duck debugging with personality!
Supported Providers
Any provider with an OpenAI-compatible API endpoint, including:
- OpenAI (GPT-4, GPT-3.5)
- Google Gemini (Gemini 2.5 Flash, Gemini 2.0 Flash)
- Anthropic (via OpenAI-compatible endpoints)
- Groq (Llama, Mixtral, Gemma)
- Together AI (Llama, Mixtral, and more)
- Perplexity (Online models with web search)
- Anyscale (Open source models)
- Azure OpenAI (Microsoft-hosted OpenAI)
- Ollama (Local models)
- LM Studio (Local models)
- Custom (Any OpenAI-compatible endpoint)
Quick Start
For Claude Desktop Users
👉 Complete Claude Desktop setup instructions below in Claude Desktop Configuration
Installation
Prerequisites
- Node.js 20 or higher
- npm or yarn
- At least one API key for a supported provider
Install from Source
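The build steps below assume standard npm scripts; the repository URL matches the support links later in this README, so adjust it if your fork lives elsewhere:

```shell
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck
npm install
npm run build
```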
Configuration
Method 1: Environment Variables
Create a `.env` file in the project root:
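A minimal sketch of a `.env` file. The Gemini, Groq, and Together variable names appear in the provider setup sections below; `OPENAI_API_KEY` is assumed by analogy, and the values are placeholders:

```env
OPENAI_API_KEY=sk-your-openai-key
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=gsk_your-groq-key
TOGETHER_API_KEY=your-together-key
```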
Note: Duck nicknames are completely optional! If you don't set them, you'll get the charming defaults (GPT Duck, Gemini Duck, etc.). If you use a `config.json` file, those nicknames take priority over environment variables.
Method 2: Configuration File
Create a `config/config.json` file based on the example:
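An illustrative shape for `config/config.json`. The field names here are assumptions rather than the project's actual schema, so consult the shipped example file for the real structure:

```json
{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "default_model": "gpt-4",
      "nickname": "GPT Duck"
    },
    "gemini": {
      "api_key": "${GEMINI_API_KEY}",
      "default_model": "gemini-2.0-flash",
      "nickname": "Gemini Duck"
    }
  }
}
```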
Claude Desktop Configuration
This is the most common setup method for using MCP Rubber Duck with Claude Desktop.
Step 1: Build the Project
First, ensure the project is built:
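Assuming the standard npm scripts for a TypeScript project:

```shell
npm install
npm run build
ls -la dist/index.js   # the troubleshooting section below checks for this file
```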
Step 2: Configure Claude Desktop
Edit your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add the MCP server configuration:
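A sketch of the entry to add under `mcpServers`. The server key `rubber-duck` is arbitrary, and the placeholder values match the ones this README tells you to replace:

```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-rubber-duck/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
```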
Important: Replace the placeholder API keys with your actual keys:
- `your-openai-api-key-here` → Your OpenAI API key (starts with `sk-`)
- `your-gemini-api-key-here` → Your Gemini API key from Google AI Studio
Step 3: Restart Claude Desktop
- Completely quit Claude Desktop (⌘+Q on Mac)
- Launch Claude Desktop again
- The MCP server should connect automatically
Step 4: Test the Integration
Once restarted, test these commands in Claude:
Check Duck Health
Should show:
- ✅ GPT Duck (openai) - Healthy
- ✅ Gemini Duck (gemini) - Healthy
List Available Models
Ask a Specific Duck
Compare Multiple Ducks
Test Specific Models
Troubleshooting Claude Desktop Setup
If Tools Don't Appear
- Check API Keys: Ensure your API keys are correctly entered without typos
- Verify Build: Run `ls -la dist/index.js` to confirm the project built successfully
- Check Logs: Look for errors in Claude Desktop's developer console
- Restart: Fully quit and restart Claude Desktop after config changes
Connection Issues
- Config File Path: Double-check you're editing the correct config file path
- JSON Syntax: Validate your JSON syntax (no trailing commas, proper quotes)
- Absolute Paths: Ensure you're using the full absolute path to `dist/index.js`
- File Permissions: Verify Claude Desktop can read the dist directory
Health Check Failures
If ducks show as unhealthy:
- API Keys: Verify keys are valid and have sufficient credits/quota
- Network: Check internet connection and firewall settings
- Rate Limits: Some providers have strict rate limits for new accounts
Available Tools
🦆 ask_duck
Ask a single question to a specific LLM provider.
💬 chat_with_duck
Have a conversation with context maintained across messages.
📋 list_ducks
List all configured providers and their health status.
📊 list_models
List available models for LLM providers.
🔍 compare_ducks
Ask the same question to multiple providers simultaneously.
🏛️ duck_council
Get responses from all configured ducks - like a panel discussion!
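Hypothetical invocations in the same style as the `list_ducks({ check_health: true })` call shown in Troubleshooting; the parameter names are assumptions, not the tools' confirmed signatures:

```
ask_duck({ prompt: "Why is my async function returning undefined?", provider: "openai" })
chat_with_duck({ conversation_id: "debug-session", message: "Let's dig deeper" })
list_models({ provider: "groq" })
compare_ducks({ prompt: "Explain event loops", providers: ["openai", "gemini"] })
duck_council({ prompt: "Review my caching strategy" })
```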
Usage Examples
Basic Query
Conversation
Compare Responses
Duck Council
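Example prompts you might type into Claude for each scenario above (phrasing is illustrative):

```
Basic query:   "Ask the GPT duck why my regex isn't matching newlines."
Conversation:  "Start a conversation with the Gemini duck about my schema design, then follow up."
Compare:       "Compare what the OpenAI and Groq ducks say about optimistic locking."
Duck council:  "Convene the duck council on whether to use REST or gRPC here."
```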
Provider-Specific Setup
Ollama (Local)
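Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`; a typical local setup looks like the following (the model choice is illustrative):

```shell
ollama pull llama3
ollama serve &
curl http://localhost:11434/v1/models   # verify the endpoint responds
```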
LM Studio (Local)
- Download LM Studio from https://lmstudio.ai/
- Load a model in LM Studio
- Start the local server (provides OpenAI-compatible endpoint at localhost:1234/v1)
Google Gemini
- Get API key from Google AI Studio
- Add to environment: `GEMINI_API_KEY=...`
- Uses OpenAI-compatible endpoint (beta)
Groq
- Get API key from https://console.groq.com/keys
- Add to environment: `GROQ_API_KEY=gsk_...`
Together AI
- Get API key from https://api.together.xyz/
- Add to environment: `TOGETHER_API_KEY=...`
Verifying OpenAI Compatibility
To check if a provider is OpenAI-compatible:
- Look for a `/v1/chat/completions` endpoint in their API docs
- Check if they support the OpenAI SDK
- Test with curl:
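A minimal compatibility check against the standard OpenAI chat completions shape (replace the base URL, key, and model with the provider's values):

```shell
curl https://api.example.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model", "messages": [{"role": "user", "content": "Say hello"}]}'
```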
Development
Run in Development Mode
Run Tests
Lint Code
Type Checking
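Assuming conventional npm script names (verify against `package.json`):

```shell
npm run dev        # run in development mode
npm test           # run tests
npm run lint       # lint code
npm run typecheck  # type checking
```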
Docker Support
Build Docker Image
Run with Docker
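A sketch using an illustrative image tag; pass provider keys in as environment variables:

```shell
docker build -t mcp-rubber-duck .
docker run --rm -e OPENAI_API_KEY=sk-your-key mcp-rubber-duck
```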
Architecture
Troubleshooting
Provider Not Working
- Check API key is correctly set
- Verify endpoint URL is correct
- Run health check: `list_ducks({ check_health: true })`
- Check logs for detailed error messages
Connection Issues
- For local providers (Ollama, LM Studio), ensure they're running
- Check firewall settings for local endpoints
- Verify network connectivity to cloud providers
Rate Limiting
- Enable caching to reduce API calls
- Configure failover to alternate providers
- Adjust `max_retries` and `timeout` settings
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
License
MIT License - see LICENSE file for details
Acknowledgments
- Inspired by the rubber duck debugging method
- Built on the Model Context Protocol (MCP)
- Uses OpenAI SDK for universal compatibility
Support
- Report issues: https://github.com/yourusername/mcp-rubber-duck/issues
- Documentation: https://github.com/yourusername/mcp-rubber-duck/wiki
- Discussions: https://github.com/yourusername/mcp-rubber-duck/discussions
🦆 Happy Debugging with your AI Duck Panel! 🦆