mcp-rubber-duck
The mcp-rubber-duck server orchestrates multiple LLMs (OpenAI-compatible APIs and CLI coding agents) for querying, collaboration, and structured AI workflows.
Core Querying
ask_duck: Query a single LLM with optional model/temperature control
chat_with_duck: Multi-turn conversations with persistent context; switch providers mid-conversation
compare_ducks: Send the same prompt to multiple providers simultaneously for side-by-side comparison
duck_council: Get responses from all configured LLMs at once
Collaborative Workflows
duck_vote: Multi-duck voting on 2–10 options with reasoning and confidence scores
duck_judge: One duck evaluates and ranks other ducks' responses using customizable criteria
duck_iterate: Two ducks iteratively refine a response (up to 10 rounds, 'refine' or 'critique-improve' mode)
duck_debate: Structured debates in Oxford, Socratic, or adversarial formats with an optional synthesizer duck
Management & Monitoring
list_ducks: View all configured providers with optional real-time health checks
list_models: Browse available models per provider
get_usage_stats: Track token usage and estimated costs
clear_conversations: Wipe stored conversation history
Additional Features
8 reusable prompt templates (e.g., perspectives, tradeoffs, red_team) for structured analysis
Rich HTML panels for compare, vote, debate, and usage views in supporting clients
Guardrails: rate limiting, token limits, PII redaction
Automatic failover to alternative providers
Vision input support for compatible models
MCP Bridge for connecting to other MCP servers with per-server approval controls
MCP Rubber Duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple LLMs -- both OpenAI-compatible HTTP APIs and CLI coding agents. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
Features
Universal OpenAI Compatibility -- Works with any OpenAI-compatible API endpoint
CLI Agent Support -- Use CLI coding agents (Claude Code, Codex, Gemini CLI, Grok, Aider) as ducks
Multiple Ducks -- Configure and query multiple LLM providers simultaneously
Conversation Management -- Maintain context across multiple messages
Duck Council -- Get responses from all your configured LLMs at once
Consensus Voting -- Multi-duck voting with reasoning and confidence scores
LLM-as-Judge -- Have ducks evaluate and rank each other's responses
Iterative Refinement -- Two ducks collaboratively improve responses
Structured Debates -- Oxford, Socratic, and adversarial debate formats
MCP Prompts -- 8 reusable prompt templates for multi-LLM workflows
Vision Input -- Send images alongside prompts to vision-capable models (docs)
Automatic Failover -- Falls back to other providers if primary fails
Health Monitoring -- Real-time health checks for all providers
Usage Tracking -- Track requests, tokens, and estimated costs per provider
MCP Bridge -- Connect ducks to other MCP servers for extended functionality (docs)
Guardrails -- Pluggable safety layer with rate limiting, token limits, pattern blocking, and PII redaction (docs)
Granular Security -- Per-server approval controls with session-based approvals
Interactive UIs -- Rich HTML panels for compare, vote, debate, and usage tools (via MCP Apps)
Tool Annotations -- MCP-compliant hints for tool behavior (read-only, destructive, etc.)
Structured Output -- `outputSchema` on tools returning structured JSON for client-side validation (Cursor, VS Code/Copilot)
Supported Providers
HTTP Providers (OpenAI-compatible API)
Any provider with an OpenAI-compatible API endpoint, including:
OpenAI (GPT-5.1, o3, o4-mini)
Google Gemini (Gemini 3, Gemini 2.5 Pro/Flash)
Anthropic (via OpenAI-compatible endpoints)
Groq (Llama 4, Llama 3.3)
Together AI (Llama 4, Qwen, and more)
Perplexity (Online models with web search)
Anyscale, Azure OpenAI, Ollama, LM Studio, Custom
CLI Providers (Coding Agents)
Command-line coding agents that run as local processes:
Claude Code (`claude`) -- Codex (`codex`) -- Gemini CLI (`gemini`) -- Grok CLI (`grok`) -- Aider (`aider`) -- Custom
See CLI Providers for full setup and configuration.
Quick Start
```bash
# Install globally
npm install -g mcp-rubber-duck

# Or use npx directly in Claude Desktop config
npx mcp-rubber-duck
```

Using Claude Desktop? Jump to Claude Desktop Configuration. Using Cursor, VS Code, Windsurf, or another tool? See the Setup Guide.
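For Claude Desktop specifically, a minimal entry in `claude_desktop_config.json` might look like the sketch below. The `mcpServers` block follows the standard Claude Desktop MCP format; the environment variable name shown is an assumption, so verify it against the Claude Desktop setup docs:

```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "npx",
      "args": ["-y", "mcp-rubber-duck"],
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```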
Installation
Prerequisites
Node.js 20 or higher
npm or yarn
At least one API key for an HTTP provider, or a CLI coding agent installed locally
Install from NPM
```bash
npm install -g mcp-rubber-duck
```

Install from Source
```bash
git clone https://github.com/nesquikm/mcp-rubber-duck.git
cd mcp-rubber-duck
npm install
npm run build
npm start
```

Configuration
Create a .env file or config/config.json. Key environment variables:
OpenAI API key
Google Gemini API key
Groq API key
Default provider
Default temperature
MCP Bridge enable flag (ducks access external MCP servers)
Custom HTTP provider definitions
CLI agent enable flag

(The variable names themselves are listed in the Configuration docs.)
Full reference: Configuration docs
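As a sketch only, a `.env` might look like the following. Every variable name below is an assumption for illustration; the authoritative names are in the Configuration docs:

```shell
# Hypothetical variable names -- consult the Configuration docs
OPENAI_API_KEY=sk-your-key-here
GEMINI_API_KEY=your-gemini-key
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
```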
Interactive UIs (MCP Apps)
Four tools -- compare_ducks, duck_vote, duck_debate, and get_usage_stats -- can render rich interactive HTML panels inside supported MCP clients via MCP Apps. Once this MCP server is configured in a supporting client, the UIs appear automatically -- no additional setup is required. Clients without MCP Apps support still receive the same plain text output (no functionality is lost). See the MCP Apps repo for an up-to-date list of supported clients.
Compare Ducks
Compare multiple model responses side-by-side, with latency indicators, token counts, model badges, and error states.
Duck Vote
Have multiple ducks vote on options, displayed as a visual vote tally with bar charts, consensus badge, winner card, confidence bars, and collapsible reasoning.
Duck Debate
Structured multi-round debate between ducks, shown as a round-by-round view with format badge, participant list, collapsible rounds, and synthesis section.
Usage Stats
Usage analytics with summary cards, provider breakdown with expandable rows, token distribution bars, and estimated costs.
Available Tools
| Tool | Description |
| --- | --- |
| `ask_duck` | Ask a single question to a specific LLM provider |
| `chat_with_duck` | Conversation with context maintained across messages |
| `clear_conversations` | Clear all conversation history |
| `list_ducks` | List configured providers and health status |
| `list_models` | List available models for providers |
| `compare_ducks` | Ask the same question to multiple providers simultaneously |
| `duck_council` | Get responses from all configured ducks |
| `get_usage_stats` | Usage statistics and estimated costs |
| `duck_vote` | Multi-duck voting with reasoning and confidence |
| `duck_judge` | Have one duck evaluate and rank others' responses |
| `duck_iterate` | Iteratively refine a response between two ducks |
| `duck_debate` | Structured multi-round debate between ducks |
|  | MCP Bridge status and connected servers |
|  | Pending MCP tool approval requests |
|  | Approve or deny a duck's MCP tool request |
Full reference with input schemas: Tools docs
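For illustration, an `ask_duck` invocation from an MCP client might pass arguments like these. The exact parameter names, such as `provider` and `temperature`, are assumptions based on the tool descriptions above; the real input schemas are in the Tools docs:

```json
{
  "prompt": "Why might this recursive function overflow the stack?",
  "provider": "openai",
  "temperature": 0.3
}
```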
Available Prompts
| Prompt | Purpose | Required Arguments |
| --- | --- | --- |
| `perspectives` | Multi-angle analysis with assigned lenses |  |
|  | Surface hidden assumptions in plans |  |
|  | Hunt for overlooked risks and gaps |  |
| `tradeoffs` | Structured option comparison |  |
| `red_team` | Security/risk analysis from multiple angles |  |
|  | Problem reframing at different levels |  |
|  | Design review across concerns |  |
|  | Divergent exploration then convergence |  |
Full reference with examples: Prompts docs
Development
```bash
npm run dev        # Development with watch mode
npm test           # Run all tests
npm run lint       # ESLint
npm run typecheck  # Type check without emit
```

Documentation
The project wiki covers:

Setup guide (all tools)
Full configuration reference
Claude Desktop setup
All tools with schemas
Prompt templates
CLI coding agents
MCP Bridge
Guardrails
Docker deployment
Provider-specific setup
Usage examples
Architecture
Roadmap
Troubleshooting
Provider Not Working
Check API key is correctly set
Verify endpoint URL is correct
Run health check: `list_ducks({ check_health: true })`
Check logs for detailed error messages
Connection Issues
For local providers (Ollama, LM Studio), ensure they're running
Check firewall settings for local endpoints
Verify network connectivity to cloud providers
Rate Limiting
Configure failover to alternate providers
Adjust `max_retries` and `timeout` settings
See Guardrails for rate limiting configuration
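A per-provider block in `config/config.json` might look roughly like the sketch below. The `providers` key and field placement are assumptions; `max_retries` and `timeout` are the settings named above, and the full schema is in the Configuration docs:

```json
{
  "providers": {
    "openai": {
      "max_retries": 3,
      "timeout": 30000
    }
  }
}
```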
Contributing
```
   __
 <(o )___
  ( ._> /
   `---'   Quack! Ready to debug!
```

We love contributions! Whether you're fixing bugs, adding features, or teaching our ducks new tricks, we'd love to have you join the flock.
Check out our Contributing Guide to get started.
Quick start for contributors:
Fork the repository
Create a feature branch
Follow our conventional commit guidelines
Add tests for new functionality
Submit a pull request
License
MIT License - see LICENSE file for details
Acknowledgments
Inspired by the rubber duck debugging method
Built on the Model Context Protocol (MCP)
Uses OpenAI SDK for HTTP provider compatibility
Supports CLI coding agents (Claude Code, Codex, Gemini CLI, Grok, Aider)
Changelog
See CHANGELOG.md for a detailed history of changes and releases.
Registry & Directory
NPM Package: npmjs.com/package/mcp-rubber-duck
Docker Images: ghcr.io/nesquikm/mcp-rubber-duck
MCP Registry: Official MCP server `io.github.nesquikm/rubber-duck`
Glama Directory: glama.ai/mcp/servers/@nesquikm/mcp-rubber-duck
Awesome MCP Servers: Listed in the community directory
Support
Report issues: https://github.com/nesquikm/mcp-rubber-duck/issues
Documentation: https://github.com/nesquikm/mcp-rubber-duck/wiki
Discussions: https://github.com/nesquikm/mcp-rubber-duck/discussions
Happy Debugging with your AI Duck Panel!