.env.example
# AI MCP Gateway - Environment Variables
# Copy this file to .env and customize with your values

# ============================================
# MCP Server Configuration
# ============================================
MCP_SERVER_NAME=mcp-gateway
MCP_SERVER_VERSION=0.1.0

# ============================================
# LLM Provider API Keys
# ============================================
# REQUIRED: At least one provider API key
# You have OpenRouter - this enables the L1 layer
OPENROUTER_API_KEY=your_openrouter_key_here

# Optional: Add these to enable more layers
# L1, L2: Uncomment to use OpenAI directly (or use it via OpenRouter)
# OPENAI_API_KEY=sk-your-openai-key-here
# L2, L3: Uncomment to use Anthropic Claude models
# ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

# NOTE: With only an OpenRouter key, you can still use all models via the OpenRouter proxy
# OpenRouter supports OpenAI models, Anthropic models, Google models, etc.
# Just use the OpenRouter model names in your requests

# ============================================
# OpenRouter Configuration
# ============================================
# Free models for the L0 layer (fallback)
OPENROUTER_FALLBACK_MODELS=meta-llama/llama-3.3-70b-instruct:free,x-ai/grok-4.1-fast:free

# Use OpenRouter to proxy OpenAI/Claude models (if you don't have direct API keys)
OPENROUTER_REPLACE_OPENAI=openai/gpt-4o-mini
OPENROUTER_REPLACE_CLAUDE=anthropic/claude-3.5-sonnet

# Layer override - customize models per layer using OpenRouter
# Format: LAYER_L0_MODELS, LAYER_L1_MODELS, etc.
LAYER_L0_MODELS=meta-llama/llama-3.3-70b-instruct:free,x-ai/grok-4.1-fast:free
LAYER_L1_MODELS=google/gemini-flash-1.5,openai/gpt-4o-mini
LAYER_L2_MODELS=anthropic/claude-3-haiku,openai/gpt-4o
LAYER_L3_MODELS=anthropic/claude-3.5-sonnet,openai/o1-preview

# ============================================
# Task-Specific Model Configuration
# ============================================
# Models for different tasks (comma-separated)

# Chat - General conversation
CHAT_MODELS=meta-llama/llama-3.3-70b-instruct:free,google/gemini-flash-1.5

# Code - Code generation and analysis (prefer models with 'code' or 'coder' in the name)
CODE_MODELS=qwen/qwen-2.5-coder-32b-instruct:free,deepseek/deepseek-coder-33b-instruct:free

# Analyze - Code review and analysis
ANALYZE_MODELS=x-ai/grok-4.1-fast:free,anthropic/claude-3-haiku

# Create Project - Full project scaffolding
CREATE_PROJECT_MODELS=qwen/qwen-2.5-coder-32b-instruct:free,openai/gpt-4o-mini

# ============================================
# OSS/Local Model (Ollama)
# ============================================
# Set to true if using Ollama (requires: docker-compose --profile with-ollama up)
OSS_MODEL_ENABLED=false
OSS_MODEL_ENDPOINT=http://localhost:11434
OSS_MODEL_NAME=llama3:8b

# ============================================
# Redis Configuration
# ============================================
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=0

# ============================================
# PostgreSQL Database Configuration
# ============================================
DATABASE_URL=
DB_HOST=localhost
DB_PORT=5432
DB_NAME=ai_mcp_gateway
DB_USER=postgres
DB_PASSWORD=your_secure_postgres_password_here
DB_SSL=false

# ============================================
# HTTP API Configuration
# ============================================
API_PORT=3000
API_HOST=0.0.0.0
API_CORS_ORIGIN=*

# ============================================
# Logging
# ============================================
LOG_LEVEL=info
LOG_FILE=logs/mcp-gateway.log

# ============================================
# Routing Configuration
# ============================================
DEFAULT_LAYER=L0
ENABLE_CROSS_CHECK=true
ENABLE_AUTO_ESCALATE=false
MAX_ESCALATION_LAYER=L0

# ============================================
# Layer Enable/Disable Control
# ============================================
LAYER_L0_ENABLED=true
LAYER_L1_ENABLED=true
LAYER_L2_ENABLED=true
LAYER_L3_ENABLED=true

# ============================================
# Cost Tracking
# ============================================
ENABLE_COST_TRACKING=true
COST_ALERT_THRESHOLD=1.00

# ============================================
# Mode Configuration
# ============================================
# Mode: mcp (stdio) or api (HTTP server)
MODE=mcp
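The LAYER_* variables above drive the gateway's routing: each layer has an _ENABLED flag and a comma-separated _MODELS list. As a minimal sketch of how such a config might be consumed (the gateway's actual loader is not shown on this page; the Layer type and getLayerModels are hypothetical names), assuming Node with the dotenv package installed:

// Hypothetical sketch only - illustrates parsing the env vars above,
// not the project's real API.
import "dotenv/config"; // loads .env into process.env

type Layer = "L0" | "L1" | "L2" | "L3";

function getLayerModels(layer: Layer): string[] {
  // Respect LAYER_Lx_ENABLED; anything but the literal "false" counts as on
  if (process.env[`LAYER_${layer}_ENABLED`] === "false") return [];
  // LAYER_Lx_MODELS is a comma-separated list of OpenRouter model names
  const raw = process.env[`LAYER_${layer}_MODELS`] ?? "";
  return raw.split(",").map((m) => m.trim()).filter(Boolean);
}

// With the values above, getLayerModels("L0") yields
// ["meta-llama/llama-3.3-70b-instruct:free", "x-ai/grok-4.1-fast:free"]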

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/babasida246/ai-mcp-gateway'
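
The same request in TypeScript, assuming Node 18+ where fetch is built in (the response shape is whatever the API returns and is not documented on this page):

const res = await fetch("https://glama.ai/api/mcp/v1/servers/babasida246/ai-mcp-gateway");
if (!res.ok) throw new Error(`HTTP ${res.status}`);
console.log(await res.json()); // server metadata as JSON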

If you have feedback or need assistance with the MCP directory API, please join our Discord server.