mcp-rubber-duck

.env.desktop.example
# MCP Rubber Duck - Desktop/Server Configuration Example
# Optimized for high-performance systems (macOS, Linux, Windows)
# Copy this file to .env and add your API keys

# =============================================================================
# BASIC CONFIGURATION
# =============================================================================

# Docker image (multi-platform)
DOCKER_IMAGE=ghcr.io/nesquikm/mcp-rubber-duck:latest

# Default provider and settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info

# =============================================================================
# DESKTOP/SERVER OPTIMIZATIONS
# =============================================================================

# Resource limits for powerful systems
DOCKER_CPU_LIMIT=4.0
DOCKER_MEMORY_LIMIT=2G
DOCKER_MEMORY_RESERVATION=1G

# Node.js memory optimization for desktop
NODE_OPTIONS=--max-old-space-size=1024
NODE_ENV=production

# =============================================================================
# MCP SERVER CONFIGURATION
# =============================================================================

# Enable MCP server mode
MCP_SERVER=true

# =============================================================================
# AI PROVIDER API KEYS
# =============================================================================

# OpenAI (required - get from https://platform.openai.com/api-keys)
OPENAI_API_KEY=sk-your-openai-key-here
OPENAI_DEFAULT_MODEL=gpt-4o

# Google Gemini (optional - get from https://aistudio.google.com/apikey)
GEMINI_API_KEY=your-gemini-key-here
GEMINI_DEFAULT_MODEL=gemini-2.5-pro

# Groq (optional - fast inference - get from https://console.groq.com/keys)
GROQ_API_KEY=gsk_your-groq-key-here
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile

# Together AI (optional - get from https://api.together.xyz/)
TOGETHER_API_KEY=your-together-key-here

# Perplexity AI (optional - get from https://perplexity.ai/)
PERPLEXITY_API_KEY=your-perplexity-key-here

# Anyscale (optional - get from https://app.endpoints.anyscale.com/)
# ANYSCALE_API_KEY=your-anyscale-key-here

# Azure OpenAI (optional)
# AZURE_OPENAI_API_KEY=your-azure-key-here
# AZURE_OPENAI_ENDPOINT=your-resource-name.openai.azure.com

# =============================================================================
# LOCAL AI (RECOMMENDED FOR DESKTOP)
# =============================================================================

# Ollama (local AI - enable with --profile with-ollama)
# Desktop systems can handle larger models
OLLAMA_BASE_URL=http://ollama:11434/v1
OLLAMA_DEFAULT_MODEL=llama3.1:8b

# Ollama resource limits for desktop
OLLAMA_CPU_LIMIT=4.0
OLLAMA_MEMORY_LIMIT=4G
OLLAMA_MEMORY_RESERVATION=2G

# LM Studio (alternative local AI)
# LMSTUDIO_BASE_URL=http://localhost:1234/v1
# LMSTUDIO_DEFAULT_MODEL=local-model

# =============================================================================
# CUSTOM PROVIDERS
# =============================================================================

# Example: Custom OpenAI-compatible API
# CUSTOM_API_KEY=your-custom-key
# CUSTOM_BASE_URL=https://api.example.com/v1
# CUSTOM_DEFAULT_MODEL=custom-model
# CUSTOM_NICKNAME=My Custom Duck

# Example: Additional custom provider
# CUSTOM_MYAPI_API_KEY=your-custom-key
# CUSTOM_MYAPI_BASE_URL=https://api.example.com/v1
# CUSTOM_MYAPI_DEFAULT_MODEL=custom-model
# CUSTOM_MYAPI_NICKNAME=My API Duck

# =============================================================================
# MCP BRIDGE CONFIGURATION (ADVANCED)
# =============================================================================

# Enable MCP Bridge (allows ducks to use external MCP tools)
MCP_BRIDGE_ENABLED=true

# Approval mode: always, trusted, or never
MCP_APPROVAL_MODE=trusted
MCP_APPROVAL_TIMEOUT=300

# Example: Context7 Documentation Server
MCP_SERVER_CONTEXT7_TYPE=http
MCP_SERVER_CONTEXT7_URL=https://mcp.context7.com/mcp
MCP_SERVER_CONTEXT7_ENABLED=true
MCP_TRUSTED_TOOLS_CONTEXT7=*

# =============================================================================
# PERFORMANCE TUNING FOR DESKTOP
# =============================================================================

# Cache settings (higher for desktop)
CACHE_TTL=600

# Network settings
ENABLE_FAILOVER=true
MAX_RETRIES=3
REQUEST_TIMEOUT=60000

# Monitoring (enable for development/debugging)
ENABLE_PERFORMANCE_MONITORING=true
ENABLE_REQUEST_LOGGING=false
ENABLE_MEMORY_REPORTING=true

# =============================================================================
# DEVELOPMENT SETTINGS
# =============================================================================

# Enable debug mode for development
# LOG_LEVEL=debug

# Duck nicknames (optional fun customization)
# OPENAI_NICKNAME=DUCK-4
# GEMINI_NICKNAME=Duckmini
# GROQ_NICKNAME=Quackers
# OLLAMA_NICKNAME=Local Quacker
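The header comments say to copy this file to .env; because each entry is a plain KEY=value line, a POSIX shell can load it directly, which is handy for checking values before handing the file to Docker via --env-file. A minimal sketch (the /tmp/demo.env path and the two sample keys below are illustrative stand-ins, not part of the project):

```shell
# Sketch: .env entries are plain KEY=value lines, so a POSIX shell can
# source them. We write a tiny sample file first so this runs standalone.
cat > /tmp/demo.env <<'EOF'
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
EOF

set -a            # auto-export every variable assigned while sourcing
. /tmp/demo.env   # in real use: . ./.env after copying the example file
set +a

echo "$DEFAULT_PROVIDER"   # prints: openai
```

One caveat: values containing spaces or shell metacharacters (e.g. the commented CUSTOM_NICKNAME=My Custom Duck) would need quoting before a shell could source them, whereas Docker's --env-file reader takes each line verbatim and needs no quoting.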
