## Server Configuration

The environment variables used to configure the server. All are optional; at least one provider's key (or a local Ollama instance) is needed for the server to do anything useful.
| Name | Required | Description | Default |
|---|---|---|---|
| LOG_LEVEL | No | Logging level | info |
| DEFAULT_PROVIDER | No | Provider used when none is specified | openai |
| DEFAULT_TEMPERATURE | No | Default sampling temperature | 0.7 |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| OPENAI_NICKNAME | No | Display name for the OpenAI duck | GPT Duck |
| OPENAI_DEFAULT_MODEL | No | Default OpenAI model | gpt-4o-mini |
| GEMINI_API_KEY | No | Your Google Gemini API key | |
| GEMINI_NICKNAME | No | Display name for the Gemini duck | Gemini Duck |
| GEMINI_DEFAULT_MODEL | No | Default Gemini model | gemini-2.5-flash |
| GROQ_API_KEY | No | Your Groq API key | |
| GROQ_NICKNAME | No | Display name for the Groq duck | Groq Duck |
| GROQ_DEFAULT_MODEL | No | Default Groq model | llama-3.3-70b-versatile |
| TOGETHER_API_KEY | No | Your Together AI API key | |
| OLLAMA_BASE_URL | No | Ollama base URL | http://localhost:11434/v1 |
| OLLAMA_NICKNAME | No | Display name for the Ollama duck | Local Duck |
| OLLAMA_DEFAULT_MODEL | No | Default Ollama model | llama3.2 |
| CUSTOM_BASE_URL | No | Base URL for a custom OpenAI-compatible provider | |
| CUSTOM_API_KEY | No | Your custom provider API key | |
| CUSTOM_NICKNAME | No | Display name for the custom duck | Custom Duck |
| CUSTOM_DEFAULT_MODEL | No | Default model for the custom provider | custom-model |
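As a minimal sketch, the variables above might be set in an `.env` file (or exported in the environment the MCP client launches the server with); the key values below are placeholders, not real credentials:

```
# General settings
LOG_LEVEL=info
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7

# OpenAI (placeholder key)
OPENAI_API_KEY=your-openai-api-key
OPENAI_DEFAULT_MODEL=gpt-4o-mini

# Local Ollama (no API key needed)
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_DEFAULT_MODEL=llama3.2
```

Unset providers are simply unavailable as ducks; only the defaults shown in the table apply otherwise.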
## Tools

Functions exposed to the LLM so it can take actions.
| Name | Description |
|---|---|
| ask_duck | Ask a question to a specific LLM provider (duck) |
| chat_with_duck | Have a conversation with a duck, maintaining context across messages |
| clear_conversations | Clear all conversation history and start fresh |
| list_ducks | List all available LLM providers (ducks) and their status |
| list_models | List available models for LLM providers |
| compare_ducks | Ask the same question to multiple ducks simultaneously |
| duck_council | Get responses from all configured ducks (like a panel discussion) |
| duck_vote | Have multiple ducks vote on options with reasoning. Returns vote tally, confidence scores, and consensus level. |
| duck_judge | Have one duck evaluate and rank other ducks' responses. Use after duck_council to get a comparative evaluation. |
| duck_iterate | Iteratively refine a response between two ducks. One generates, the other critiques/improves, alternating for multiple rounds. |
| duck_debate | Structured multi-round debate between ducks. Supports oxford (pro/con), socratic (questioning), and adversarial (attack/defend) formats. |
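Tools are invoked through the standard MCP `tools/call` request. As an illustrative sketch, the snippet below builds such a request for `ask_duck`; the argument names (`prompt`, `provider`) are assumptions for illustration, not confirmed by this table:

```python
import json

# Hypothetical MCP "tools/call" JSON-RPC request for the ask_duck tool.
# The argument names ("prompt", "provider") are assumed, not documented here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_duck",
        "arguments": {
            "prompt": "What are the trade-offs of optimistic locking?",
            "provider": "openai",  # any configured duck
        },
    },
}

# Serialize as it would go over the wire (stdio or HTTP transport).
print(json.dumps(request, indent=2))
```

In practice an MCP client (e.g. Claude Desktop) constructs these requests for you; the sketch only shows the shape of a call.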
## Prompts

Interactive templates invoked by user choice.

No prompts are currently exposed.
## Resources

Contextual data attached to and managed by the client.

No resources are currently exposed.