# mcp-rubber-duck
## Server Configuration

Environment variables recognized by the server. All are optional; configure only the providers you intend to use.
| Name | Required | Description | Default |
|---|---|---|---|
| DEFAULT_PROVIDER | No | Default provider to use | openai |
| DEFAULT_TEMPERATURE | No | Default sampling temperature | 0.7 |
| LOG_LEVEL | No | Log level | info |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| OPENAI_NICKNAME | No | Display name for the OpenAI duck | GPT Duck |
| OPENAI_DEFAULT_MODEL | No | Default OpenAI model | gpt-4o-mini |
| GEMINI_API_KEY | No | Your Google Gemini API key | |
| GEMINI_NICKNAME | No | Display name for the Gemini duck | Gemini Duck |
| GEMINI_DEFAULT_MODEL | No | Default Gemini model | gemini-2.5-flash |
| GROQ_API_KEY | No | Your Groq API key | |
| GROQ_NICKNAME | No | Display name for the Groq duck | Groq Duck |
| GROQ_DEFAULT_MODEL | No | Default Groq model | llama-3.3-70b-versatile |
| TOGETHER_API_KEY | No | Your Together AI API key | |
| OLLAMA_BASE_URL | No | Ollama base URL | http://localhost:11434/v1 |
| OLLAMA_NICKNAME | No | Display name for the Ollama duck | Local Duck |
| OLLAMA_DEFAULT_MODEL | No | Default Ollama model | llama3.2 |
| CUSTOM_API_KEY | No | Your custom provider API key | |
| CUSTOM_BASE_URL | No | Custom provider base URL | |
| CUSTOM_NICKNAME | No | Display name for the custom duck | Custom Duck |
| CUSTOM_DEFAULT_MODEL | No | Default model for the custom provider | custom-model |
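As a minimal example, a shell environment for running the server against OpenAI only (the key value below is a placeholder; all variable names come from the table above):

```shell
# Minimal configuration: point the server at OpenAI with defaults.
export OPENAI_API_KEY="sk-..."             # placeholder; substitute your real key
export DEFAULT_PROVIDER="openai"           # which duck answers by default
export OPENAI_DEFAULT_MODEL="gpt-4o-mini"  # model used unless overridden per call
export LOG_LEVEL="info"
```

Any provider whose API key is unset is simply unavailable; the nickname and model variables fall back to the defaults in the table.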
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tasks | `{"list": {}, "cancel": {}, "requests": {"tools": {"call": {}}}}` |
| tools | `{"listChanged": true}` |
| prompts | `{"listChanged": true}` |
| resources | `{"listChanged": true}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| ask_duck | Ask a question to a specific LLM provider (duck) |
| chat_with_duck | Have a conversation with a duck, maintaining context across messages |
| clear_conversations | Clear all conversation history and start fresh |
| list_ducks | List all available LLM providers (ducks) and their status |
| list_models | List available models for LLM providers |
| compare_ducks | Ask the same question to multiple ducks simultaneously |
| duck_council | Get responses from all configured ducks (like a panel discussion) |
| duck_vote | Have multiple ducks vote on options with reasoning. Returns vote tally, confidence scores, and consensus level. |
| duck_judge | Have one duck evaluate and rank other ducks' responses. Use after duck_council to get a comparative evaluation. |
| duck_iterate | Iteratively refine a response between two ducks. One generates, the other critiques/improves, alternating for multiple rounds. |
| duck_debate | Structured multi-round debate between ducks. Supports oxford (pro/con), socratic (questioning), and adversarial (attack/defend) formats. |
| get_usage_stats | Get usage statistics for a time period. Shows token counts and costs (when pricing configured). |
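These tools are invoked through the standard MCP `tools/call` request. A sketch of such a message for `ask_duck` follows; the argument names (`prompt`, `provider`) are assumptions for illustration, not confirmed by this listing:

```shell
# Hypothetical JSON-RPC message for calling the ask_duck tool over the
# MCP stdio transport. Argument names are assumed, not taken from the docs.
request='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"ask_duck","arguments":{"prompt":"Why is the sky blue?","provider":"openai"}}}'
echo "$request"
```

In practice an MCP client (such as Claude Desktop) constructs and sends this message for you; the sketch only shows the wire shape.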
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| perspectives | Analyze a problem from multiple perspectives. Each LLM adopts a different analytical lens (e.g., security, performance, UX) for comprehensive multi-angle analysis. |
| assumptions | Surface and challenge hidden assumptions in a plan, design, or idea. Identifies implicit premises that could be risky if wrong. |
| blindspots | Hunt for missing considerations, overlooked risks, and gaps in a proposal. Acts as a panel of critical reviewers looking for what might be underweighted. |
| tradeoffs | Compare options with explicit criteria and trade-off analysis. Provides structured evaluation to help make informed decisions. |
| red_team | Conduct attack surface analysis from multiple angles. Each reviewer focuses on different risk dimensions (security, privacy, abuse, compliance). |
| reframe | Reframe a problem from multiple angles and abstraction levels. Helps break out of mental ruts by viewing the problem differently. |
| architecture | Structured architecture or design review from multiple engineering perspectives. Each reviewer focuses on different cross-cutting concerns. |
| diverge_converge | Structure divergent thinking (explore many options) followed by convergence (evaluate and select). Maximizes creative exploration before narrowing down. |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| Compare Ducks | Interactive UI for Compare Ducks |
| Duck Vote | Interactive UI for Duck Vote |
| Duck Debate | Interactive UI for Duck Debate |
| Usage Stats | Interactive UI for Usage Stats |
## MCP directory API

We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/nesquikm/mcp-rubber-duck'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.