
Server Configuration

Describes the environment variables used to configure the server. All are optional.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `LOG_LEVEL` | No | Log level | `info` |
| `GROQ_API_KEY` | No | Your Groq API key | |
| `GROQ_NICKNAME` | No | Display name for the Groq provider | `Groq Duck` |
| `CUSTOM_API_KEY` | No | Your custom provider API key | |
| `GEMINI_API_KEY` | No | Your Google Gemini API key | |
| `OPENAI_API_KEY` | No | Your OpenAI API key | |
| `CUSTOM_BASE_URL` | No | Custom provider base URL | |
| `CUSTOM_NICKNAME` | No | Display name for the custom provider | `Custom Duck` |
| `GEMINI_NICKNAME` | No | Display name for the Gemini provider | `Gemini Duck` |
| `OLLAMA_BASE_URL` | No | Ollama base URL | `http://localhost:11434/v1` |
| `OLLAMA_NICKNAME` | No | Display name for the Ollama provider | `Local Duck` |
| `OPENAI_NICKNAME` | No | Display name for the OpenAI provider | `GPT Duck` |
| `DEFAULT_PROVIDER` | No | Default provider to use | `openai` |
| `TOGETHER_API_KEY` | No | Your Together AI API key | |
| `GROQ_DEFAULT_MODEL` | No | Default Groq model | `llama-3.3-70b-versatile` |
| `DEFAULT_TEMPERATURE` | No | Default sampling temperature | `0.7` |
| `CUSTOM_DEFAULT_MODEL` | No | Default custom-provider model | `custom-model` |
| `GEMINI_DEFAULT_MODEL` | No | Default Gemini model | `gemini-2.5-flash` |
| `OLLAMA_DEFAULT_MODEL` | No | Default Ollama model | `llama3.2` |
| `OPENAI_DEFAULT_MODEL` | No | Default OpenAI model | `gpt-4o-mini` |
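For instance, a minimal environment for running the server against OpenAI and a local Ollama instance might look like the following sketch. The key value is a placeholder, and the exact set of variables you need depends on which providers you want available:

```shell
# Example environment (values are placeholders, adjust for your setup).
export OPENAI_API_KEY="your-openai-api-key"         # placeholder, use your real key
export OPENAI_DEFAULT_MODEL="gpt-4o-mini"
export OLLAMA_BASE_URL="http://localhost:11434/v1"  # local Ollama, no key needed
export DEFAULT_PROVIDER="openai"
export DEFAULT_TEMPERATURE="0.7"
export LOG_LEVEL="debug"                            # defaults to "info"
```

Variables for providers you do not use (Groq, Gemini, Together AI, custom) can simply be left unset.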

Tools

Functions exposed to the LLM so it can take actions.

- `ask_duck`: Ask a question to a specific LLM provider (duck).
- `chat_with_duck`: Have a conversation with a duck, maintaining context across messages.
- `clear_conversations`: Clear all conversation history and start fresh.
- `list_ducks`: List all available LLM providers (ducks) and their status.
- `list_models`: List the available models for each LLM provider.
- `compare_ducks`: Ask the same question to multiple ducks simultaneously.
- `duck_council`: Get responses from all configured ducks, like a panel discussion.
- `duck_vote`: Have multiple ducks vote on options with reasoning; returns a vote tally, confidence scores, and a consensus level.
- `duck_judge`: Have one duck evaluate and rank the other ducks' responses; use after `duck_council` to get a comparative evaluation.
- `duck_iterate`: Iteratively refine a response between two ducks: one generates, the other critiques and improves, alternating for multiple rounds.
- `duck_debate`: Run a structured multi-round debate between ducks; supports oxford (pro/con), socratic (questioning), and adversarial (attack/defend) formats.
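As an illustration, MCP clients invoke these tools with a JSON-RPC 2.0 `tools/call` request written to the server's stdin (after the initialize handshake). The `prompt` and `provider` argument names below are assumptions for the sketch, not the server's published schema:

```shell
# Hypothetical tools/call request for the ask_duck tool.
# The "prompt" and "provider" argument names are assumed, check the
# server's tool schema (e.g. via a tools/list request) for the real ones.
request='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"ask_duck","arguments":{"prompt":"Why does my recursion never terminate?","provider":"openai"}}}'
printf '%s\n' "$request"
```

In practice your MCP client (Claude Desktop, an IDE plugin, etc.) builds these requests for you; the raw message is shown only to make the tool interface concrete.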

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide information about all MCP servers through our MCP directory API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/nesquikm/mcp-rubber-duck'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.