
SAGE-MCP

by david-strejc
.env.example (3.48 kB)
# SAGE MCP Server Configuration
# Copy to .env and fill in your API keys

# =============================================================================
# API KEYS - Configure your AI provider access
# =============================================================================

# Google Gemini (get from https://makersuite.google.com/app/apikey)
GEMINI_API_KEY=your_gemini_api_key_here

# Alternative: Google AI Studio key
GOOGLE_API_KEY=your_google_api_key_here

# OpenAI (get from https://platform.openai.com/api-keys)
OPENAI_API_KEY=your_openai_api_key_here

# Anthropic (get from https://console.anthropic.com/)
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# OpenRouter (get from https://openrouter.ai/keys)
OPENROUTER_API_KEY=your_openrouter_api_key_here

# X.AI / GROK (get from https://console.x.ai/)
XAI_API_KEY=your_xai_api_key_here

# Custom/Ollama (for local models)
CUSTOM_API_URL=http://localhost:11434
CUSTOM_API_KEY=  # Leave empty for Ollama

# =============================================================================
# MODEL RESTRICTIONS - Control which models can be used
# =============================================================================

# Default model selection (use "auto" for automatic selection)
DEFAULT_MODEL=auto

# Provider-specific allowed models (comma-separated, leave empty to allow all)
OPENAI_ALLOWED_MODELS=o3-mini,gpt-4o-mini
GOOGLE_ALLOWED_MODELS=gemini-2.0-flash-exp,gemini-1.5-pro
ANTHROPIC_ALLOWED_MODELS=claude-3.5-sonnet
OPENROUTER_ALLOWED_MODELS=
XAI_ALLOWED_MODELS=grok-3

# Global model blocks (comma-separated)
BLOCKED_MODELS=gpt-4,claude-opus
DISABLED_MODEL_PATTERNS=expensive,legacy

# =============================================================================
# CONVERSATION MEMORY - Multi-turn conversation settings
# =============================================================================

# Maximum turns per conversation thread (each turn = user + assistant exchange)
MAX_CONVERSATION_TURNS=20

# Conversation timeout in hours (threads expire after this time)
CONVERSATION_TIMEOUT_HOURS=3

# Redis configuration for persistent memory (optional)
REDIS_URL=redis://localhost:6379
REDIS_DB=0

# =============================================================================
# FILE HANDLING - Control file processing behavior
# =============================================================================

# Maximum file size in bytes (default: 10MB)
MAX_FILE_SIZE=10000000

# MCP protocol size limits (characters)
MCP_PROMPT_SIZE_LIMIT=50000

# =============================================================================
# LOGGING - Control logging behavior
# =============================================================================

LOG_LEVEL=INFO
LOG_FILE=logs/sage.log

# =============================================================================
# ADVANCED SETTINGS - Fine-tune behavior
# =============================================================================

# Temperature overrides per mode (optional)
TEMPERATURE_CHAT=0.7
TEMPERATURE_ANALYZE=0.3
TEMPERATURE_REVIEW=0.3
TEMPERATURE_DEBUG=0.2
TEMPERATURE_PLAN=0.5
TEMPERATURE_TEST=0.4
TEMPERATURE_REFACTOR=0.4
TEMPERATURE_THINK=0.8

# Token allocation settings
TOKEN_RESERVE_RESPONSE=0.3  # Reserve 30% of context for response
TOKEN_RESERVE_SYSTEM=0.1    # Reserve 10% of context for system prompts

# Security settings
ALLOWED_FILE_PATTERNS=*.py,*.js,*.ts,*.md,*.json
EXCLUDED_DIR_PATTERNS=node_modules,__pycache__,.git

MCP directory API

All information about this MCP server is available via the Glama MCP directory API:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/david-strejc/sage-mcp'
