
mcp-rubber-duck

.env.pi.example (3.53 kB)
# MCP Rubber Duck - Raspberry Pi Configuration Example
# Optimized for low-memory devices (Pi 3+)
# Copy this file to .env and add your API keys

# =============================================================================
# BASIC CONFIGURATION
# =============================================================================

# Docker image (multi-platform)
DOCKER_IMAGE=ghcr.io/nesquikm/mcp-rubber-duck:latest

# Default provider and settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info

# =============================================================================
# RASPBERRY PI OPTIMIZATIONS
# =============================================================================

# Resource limits optimized for Pi
DOCKER_CPU_LIMIT=1.5
DOCKER_MEMORY_LIMIT=512M
DOCKER_MEMORY_RESERVATION=256M

# Node.js memory optimization for Pi
NODE_OPTIONS=--max-old-space-size=256
NODE_ENV=production

# =============================================================================
# MCP SERVER CONFIGURATION
# =============================================================================

# Enable MCP server mode
MCP_SERVER=true

# =============================================================================
# AI PROVIDER API KEYS
# =============================================================================

# OpenAI (required - get from https://platform.openai.com/api-keys)
OPENAI_API_KEY=sk-your-openai-key-here
OPENAI_DEFAULT_MODEL=gpt-4o-mini

# Google Gemini (optional - get from https://aistudio.google.com/apikey)
GEMINI_API_KEY=your-gemini-key-here
GEMINI_DEFAULT_MODEL=gemini-2.5-flash

# Groq (optional - fast inference - get from https://console.groq.com/keys)
GROQ_API_KEY=gsk_your-groq-key-here
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile

# Other providers (optional)
# TOGETHER_API_KEY=your-together-key-here
# PERPLEXITY_API_KEY=your-perplexity-key-here

# =============================================================================
# LOCAL AI (OPTIONAL)
# =============================================================================

# Ollama (local AI - enable with --profile with-ollama)
# Warning: Ollama requires significant RAM on Pi
# OLLAMA_BASE_URL=http://ollama:11434/v1
# OLLAMA_DEFAULT_MODEL=llama3.2

# Ollama resource limits for Pi (if enabled)
OLLAMA_CPU_LIMIT=2.0
OLLAMA_MEMORY_LIMIT=1G
OLLAMA_MEMORY_RESERVATION=512M

# =============================================================================
# CUSTOM PROVIDERS
# =============================================================================

# Example: Custom OpenAI-compatible API
# CUSTOM_API_KEY=your-custom-key
# CUSTOM_BASE_URL=https://api.example.com/v1
# CUSTOM_DEFAULT_MODEL=custom-model

# =============================================================================
# MCP BRIDGE CONFIGURATION (OPTIONAL)
# =============================================================================

# Enable MCP Bridge (allows ducks to use external MCP tools)
MCP_BRIDGE_ENABLED=false

# Approval mode: always, trusted, or never
MCP_APPROVAL_MODE=trusted
MCP_APPROVAL_TIMEOUT=300

# =============================================================================
# PERFORMANCE TUNING FOR PI
# =============================================================================

# Cache settings (lower for Pi)
CACHE_TTL=300

# Network settings
ENABLE_FAILOVER=true
MAX_RETRIES=3
REQUEST_TIMEOUT=30000

# Monitoring (disable to save resources)
ENABLE_PERFORMANCE_MONITORING=false
ENABLE_REQUEST_LOGGING=false
ENABLE_MEMORY_REPORTING=true
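As a quick usage sketch: the DOCKER_* limits above map directly onto standard docker run flags. The project ships its own compose/setup files, so treat the container name and the exact wiring below as assumptions rather than the official invocation.

# Hypothetical one-off run; the project's own compose setup may differ.
# --env-file loads the variables defined above; --cpus/--memory/--memory-reservation
# mirror DOCKER_CPU_LIMIT, DOCKER_MEMORY_LIMIT, and DOCKER_MEMORY_RESERVATION.
docker run -d \
  --name mcp-rubber-duck \
  --env-file .env \
  --cpus 1.5 \
  --memory 512m \
  --memory-reservation 256m \
  ghcr.io/nesquikm/mcp-rubber-duck:latest

Once the container is up, docker stats mcp-rubber-duck is a quick way to confirm it stays within the 512M ceiling on a Pi.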
