Server Configuration

Describes the environment variables used to configure the server. All variables are optional.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| LOG_LEVEL | No | Log level setting | info |
| GROQ_API_KEY | No | Your Groq API key | |
| GROQ_NICKNAME | No | Display name for the Groq provider | Groq Duck |
| CUSTOM_API_KEY | No | Your custom provider API key | |
| GEMINI_API_KEY | No | Your Google Gemini API key | |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| CUSTOM_BASE_URL | No | Custom provider base URL | |
| CUSTOM_NICKNAME | No | Display name for the custom provider | Custom Duck |
| GEMINI_NICKNAME | No | Display name for the Gemini provider | Gemini Duck |
| OLLAMA_BASE_URL | No | Ollama base URL | http://localhost:11434/v1 |
| OLLAMA_NICKNAME | No | Display name for the Ollama provider | Local Duck |
| OPENAI_NICKNAME | No | Display name for the OpenAI provider | GPT Duck |
| DEFAULT_PROVIDER | No | Default provider to use | openai |
| TOGETHER_API_KEY | No | Your Together AI API key | |
| GROQ_DEFAULT_MODEL | No | Default model for Groq | llama-3.3-70b-versatile |
| DEFAULT_TEMPERATURE | No | Default sampling temperature | 0.7 |
| CUSTOM_DEFAULT_MODEL | No | Default model for the custom provider | custom-model |
| GEMINI_DEFAULT_MODEL | No | Default model for Gemini | gemini-2.5-flash |
| OLLAMA_DEFAULT_MODEL | No | Default model for Ollama | llama3.2 |
| OPENAI_DEFAULT_MODEL | No | Default model for OpenAI | gpt-4o-mini |
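Because every variable is optional, configuration resolves to the documented default whenever a variable is unset. A minimal sketch of that fallback behavior (this is illustrative, not the server's actual loading code; only a few of the variables are shown):

```python
import os

def load_config(env=None):
    """Resolve settings with the documented fallbacks (illustrative sketch)."""
    env = os.environ if env is None else env
    return {
        "log_level": env.get("LOG_LEVEL", "info"),
        "default_provider": env.get("DEFAULT_PROVIDER", "openai"),
        "default_temperature": float(env.get("DEFAULT_TEMPERATURE", "0.7")),
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
        "openai_default_model": env.get("OPENAI_DEFAULT_MODEL", "gpt-4o-mini"),
        "openai_nickname": env.get("OPENAI_NICKNAME", "GPT Duck"),
    }

defaults = load_config({})  # no variables set: every documented default applies
```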

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tasks | `{"list": {}, "cancel": {}, "requests": {"tools": {"call": {}}}}` |
| tools | `{"listChanged": true}` |
| prompts | `{"listChanged": true}` |
| resources | `{"listChanged": true}` |
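A client receives this capability object during initialization and can gate features on it, for example whether to subscribe to list-change notifications. A small sketch (the capability shape mirrors the table above; the helper function is hypothetical):

```python
# Capability object as advertised by this server (from the table above).
server_capabilities = {
    "tasks": {"list": {}, "cancel": {}, "requests": {"tools": {"call": {}}}},
    "tools": {"listChanged": True},
    "prompts": {"listChanged": True},
    "resources": {"listChanged": True},
}

def supports_list_changed(caps, feature):
    """Return True if the server emits list-change notifications for `feature`."""
    return bool(caps.get(feature, {}).get("listChanged", False))

tools_change_notifications = supports_list_changed(server_capabilities, "tools")
```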

Tools

Functions exposed to the LLM to take actions

| Name | Description |
|------|-------------|
| ask_duck | Ask a question to a specific LLM provider (duck). |
| chat_with_duck | Have a conversation with a duck, maintaining context across messages. |
| clear_conversations | Clear all conversation history and start fresh. |
| list_ducks | List all available LLM providers (ducks) and their status. |
| list_models | List available models for LLM providers. |
| compare_ducks | Ask the same question to multiple ducks simultaneously. |
| duck_council | Get responses from all configured ducks (like a panel discussion). |
| duck_vote | Have multiple ducks vote on options with reasoning. Returns vote tally, confidence scores, and consensus level. |
| duck_judge | Have one duck evaluate and rank other ducks' responses. Use after duck_council to get a comparative evaluation. |
| duck_iterate | Iteratively refine a response between two ducks: one generates, the other critiques and improves, alternating over multiple rounds. |
| duck_debate | Structured multi-round debate between ducks. Supports oxford (pro/con), socratic (questioning), and adversarial (attack/defend) formats. |
| get_usage_stats | Get usage statistics for a time period. Shows token counts and costs (when pricing is configured). |
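MCP clients invoke these tools through JSON-RPC `tools/call` requests. A sketch of such a request for `ask_duck` follows; the argument names (`prompt`, `provider`) are illustrative assumptions, so check the schemas returned by `tools/list` for the actual input shape:

```python
import json

# Build a JSON-RPC 2.0 tools/call request for the ask_duck tool.
# The "arguments" keys below are assumptions, not the confirmed schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_duck",
        "arguments": {"prompt": "Why is my loop off by one?", "provider": "openai"},
    },
}
payload = json.dumps(request)  # serialized message sent over the MCP transport
```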

Prompts

Interactive templates invoked by user choice

| Name | Description |
|------|-------------|
| perspectives | Analyze a problem from multiple perspectives. Each LLM adopts a different analytical lens (e.g., security, performance, UX) for comprehensive multi-angle analysis. |
| assumptions | Surface and challenge hidden assumptions in a plan, design, or idea. Identifies implicit premises that could be risky if wrong. |
| blindspots | Hunt for missing considerations, overlooked risks, and gaps in a proposal. Acts as a panel of critical reviewers looking for what might be underweighted. |
| tradeoffs | Compare options with explicit criteria and trade-off analysis. Provides structured evaluation to help make informed decisions. |
| red_team | Conduct attack surface analysis from multiple angles. Each reviewer focuses on different risk dimensions (security, privacy, abuse, compliance). |
| reframe | Reframe a problem from multiple angles and abstraction levels. Helps break out of mental ruts by viewing the problem differently. |
| architecture | Structured architecture or design review from multiple engineering perspectives. Each reviewer focuses on different cross-cutting concerns. |
| diverge_converge | Structure divergent thinking (explore many options) followed by convergence (evaluate and select). Maximizes creative exploration before narrowing down. |
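Prompts are retrieved with JSON-RPC `prompts/get` requests rather than `tools/call`. A sketch for the `perspectives` prompt (the argument name `topic` is an assumption; the real argument list comes from `prompts/list`):

```python
# JSON-RPC 2.0 prompts/get request for the perspectives prompt.
# The "topic" argument is hypothetical; inspect prompts/list for the real schema.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "prompts/get",
    "params": {
        "name": "perspectives",
        "arguments": {"topic": "Should we cache at the edge?"},
    },
}
```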

Resources

Contextual data attached and managed by the client

| Name | Description |
|------|-------------|
| Compare Ducks | Interactive UI for Compare Ducks |
| Duck Vote | Interactive UI for Duck Vote |
| Duck Debate | Interactive UI for Duck Debate |
| Usage Stats | Interactive UI for Usage Stats |


MCP directory API

We provide information about all MCP servers via our MCP directory API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/nesquikm/mcp-rubber-duck'
```
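The same lookup can be done from Python's standard library. This sketch only constructs the request against the endpoint shown above; no authentication is assumed, and the fetch itself is left commented out:

```python
import urllib.request

# Build a GET request for this server's directory entry.
url = "https://glama.ai/api/mcp/v1/servers/nesquikm/mcp-rubber-duck"
req = urllib.request.Request(url, headers={"Accept": "application/json"})

# Uncomment to actually perform the request:
# import json
# with urllib.request.urlopen(req) as resp:
#     server_info = json.load(resp)
```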

If you have feedback or need assistance with the MCP directory API, please join our Discord server.