
ACE MCP Server

.env.example • 5.09 kB
# ===================================
# LLM Provider Configuration
# ===================================
# Provider selection: 'deepseek', 'openai', 'anthropic', 'gemini', 'mistral', 'lmstudio'
# RECOMMENDED: 'deepseek' (best performance for ACE framework)
LLM_PROVIDER=deepseek

# -----------------------------------
# OpenAI Configuration
# -----------------------------------
# Required if LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4o
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
OPENAI_TIMEOUT=30000
OPENAI_MAX_RETRIES=3

# -----------------------------------
# DeepSeek Configuration (RECOMMENDED)
# -----------------------------------
# DeepSeek V3.2-Exp - Best results for ACE framework
# Pricing: $0.28/1M input tokens, $0.42/1M output tokens
# Context: 128K tokens, Max output: 32K (reasoner mode)
# Required if LLM_PROVIDER=deepseek
DEEPSEEK_API_KEY=sk-your-deepseek-api-key-here
DEEPSEEK_MODEL=deepseek-chat
DEEPSEEK_EMBEDDING_MODEL=deepseek-embedding
DEEPSEEK_TIMEOUT=30000
DEEPSEEK_MAX_RETRIES=3

# -----------------------------------
# Anthropic Configuration
# -----------------------------------
# Required if LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
ANTHROPIC_TIMEOUT=30000
ANTHROPIC_MAX_RETRIES=3

# -----------------------------------
# Google Gemini Configuration
# -----------------------------------
# Required if LLM_PROVIDER=gemini
GOOGLE_API_KEY=your-google-api-key-here
GOOGLE_MODEL=gemini-1.5-pro
GOOGLE_TIMEOUT=30000
GOOGLE_MAX_RETRIES=3

# -----------------------------------
# Mistral Configuration
# -----------------------------------
# Required if LLM_PROVIDER=mistral
MISTRAL_API_KEY=your-mistral-api-key-here
MISTRAL_MODEL=mistral-large-latest
MISTRAL_TIMEOUT=30000
MISTRAL_MAX_RETRIES=3

# -----------------------------------
# LM Studio Configuration (Local)
# -----------------------------------
# For local/self-hosted models
# Required if LLM_PROVIDER=lmstudio
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=local-model
LMSTUDIO_TIMEOUT=60000
LMSTUDIO_MAX_RETRIES=3

# ===================================
# ACE Framework Configuration
# ===================================
# Context storage directory
ACE_CONTEXT_DIR=./contexts

# Logging level (error, warn, info, debug)
ACE_LOG_LEVEL=info

# Deduplication similarity threshold (0.0 - 1.0)
ACE_DEDUP_THRESHOLD=0.85

# Maximum bullets per context
ACE_MAX_PLAYBOOK_SIZE=10000

# Maximum reflector iteration rounds
ACE_MAX_REFLECTOR_ITERATIONS=5

# Generator configuration
ACE_GENERATOR_TEMPERATURE=0.7
ACE_GENERATOR_MAX_TOKENS=2000
ACE_GENERATOR_MAX_BULLETS=20

# Reflector configuration
ACE_REFLECTOR_QUALITY_THRESHOLD=0.7
ACE_REFLECTOR_THINKING_MODE=false

# Curator configuration
ACE_CURATOR_ENABLE_DEDUPLICATION=true
ACE_CURATOR_MIN_CONFIDENCE=0.6

# ===================================
# Docker Configuration
# ===================================
# Node environment
NODE_ENV=production

# Server ports (Docker range: 34300-34400)
ACE_SERVER_PORT=34301
DASHBOARD_PORT=34300

# Health check configuration
HEALTH_CHECK_INTERVAL=30s
HEALTH_CHECK_TIMEOUT=10s
HEALTH_CHECK_RETRIES=3

# Docker networking
DOCKER_NETWORK_NAME=ace_network

# Volume configuration
CONTEXTS_VOLUME=ace_contexts
LOGS_VOLUME=ace_logs

# ===================================
# Development Configuration
# ===================================
# Development mode settings (only for docker-compose.dev.yml)
DEV_HOT_RELOAD=true
DEV_LOG_LEVEL=debug
DEV_DASHBOARD_PORT=34300

# ===================================
# Security Configuration
# ===================================
# API Bearer Token Authentication
# IMPORTANT: Generate a secure random token for production
# Example: openssl rand -hex 32
# This token is required for all API/MCP requests (except index page and /health)
API_BEARER_TOKEN=your-secure-bearer-token-here-change-in-production

# Optional: Additional security settings
# JWT_SECRET=your-jwt-secret-here
# CORS_ORIGIN=https://your-domain.com

# ===================================
# Monitoring Configuration
# ===================================
# Performance monitoring
ENABLE_METRICS=true
METRICS_PORT=34302

# Error tracking
# SENTRY_DSN=your-sentry-dsn-here

# ===================================
# Backup Configuration
# ===================================
# Automatic backup settings
ENABLE_AUTO_BACKUP=true
BACKUP_INTERVAL_HOURS=24
BACKUP_RETENTION_DAYS=30
BACKUP_DIRECTORY=./backup

# ===================================
# Production Deployment
# ===================================
# Server configuration
# Set your production server IP address
PRODUCTION_SERVER_IP=10.20.30.40
PRODUCTION_DOMAIN=your-mcp-domain.com

# SSL/TLS (handled by Cloudflare)
# Cloudflare SSL mode: Full (not Full Strict)
# No local SSL certificates needed

# ===================================
# SSH Configuration (for deployment)
# ===================================
# SSH key path for deployment
PRODUCTION_SSH_KEY=~/.ssh/id_ed25519

# SSH user for deployment
PRODUCTION_SERVER_USER=root
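The security section's own comments suggest generating the bearer token with `openssl rand -hex 32`. A minimal sketch of filling in `API_BEARER_TOKEN` (the `.env` copy step assumes you are in the repository root; everything else follows the file's comments):

```shell
# Copy the template, if you have not already:
#   cp .env.example .env
# Then generate a 32-byte random token (64 hex characters),
# as the comment in the Security Configuration section suggests:
TOKEN=$(openssl rand -hex 32)
echo "API_BEARER_TOKEN=${TOKEN}"
```

Paste the printed line into `.env` in place of the placeholder value. Per the comments above, every API/MCP request except the index page and `/health` must then send this token as a bearer credential.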
