# Server Configuration

Describes the environment variables used to configure the server. All variables are optional; any variable left unset falls back to the default listed below.
| Name | Required | Description | Default |
| --- | --- | --- | --- |
| OPENAI_API_KEY | No | OpenAI API key | |
| ANTHROPIC_API_KEY | No | Anthropic Claude API key | |
| GEMINI_API_KEY | No | Google Gemini API key | |
| GOOGLE_API_KEY | No | Google Gemini API key (alternative name) | |
| XAI_API_KEY | No | xAI (Grok) API key | |
| OPENROUTER_API_KEY | No | OpenRouter API key | |
| CUSTOM_API_KEY | No | Custom API key (if required) | |
| CUSTOM_API_URL | No | Custom/Ollama API endpoint | |
| DEFAULT_PROVIDER | No | Default AI provider | auto |
| DEFAULT_MODEL | No | Default model (`auto` for automatic selection) | auto |
| AUTO_MODEL_SELECTION | No | Smart model selection | true |
| FALLBACK_MODEL | No | Fallback model for errors | |
| ALLOWED_MODELS | No | Comma-separated list of allowed models | |
| BLOCKED_MODELS | No | Blocked models (any provider) | |
| DISALLOWED_MODELS | No | Comma-separated list of disallowed models | |
| DISABLED_MODEL_PATTERNS | No | Disable models by pattern | |
| OPENAI_ALLOWED_MODELS | No | Allowed OpenAI models | |
| ANTHROPIC_ALLOWED_MODELS | No | Allowed Anthropic models | |
| GOOGLE_ALLOWED_MODELS | No | Allowed Google models | |
| TEMPERATURE_CHAT | No | Chat mode temperature | 0.5 |
| TEMPERATURE_THINK | No | Think mode temperature | 0.7 |
| TEMPERATURE_PLAN | No | Plan mode temperature | 0.4 |
| TEMPERATURE_ANALYZE | No | Analyze mode temperature | 0.2 |
| TEMPERATURE_REVIEW | No | Review mode temperature | 0.3 |
| TEMPERATURE_REFACTOR | No | Refactor mode temperature | 0.3 |
| TEMPERATURE_DEBUG | No | Debug mode temperature | 0.1 |
| TEMPERATURE_TEST | No | Test mode temperature | 0.2 |
| MAX_TOKENS_GPT4O | No | Maximum tokens for GPT-4o | 128000 |
| MAX_TOKENS_CLAUDE | No | Maximum tokens for Claude | 200000 |
| MAX_THINKING_TOKENS_O1 | No | Maximum thinking tokens for o1 | 100000 |
| MAX_FILE_SIZE | No | Maximum file size in bytes (5 MB) | 5242880 |
| MCP_PROMPT_SIZE_LIMIT | No | MCP transport limit | 50000 |
| MAX_CONVERSATION_TURNS | No | Maximum turns per conversation | 20 |
| CONVERSATION_TIMEOUT_HOURS | No | Conversation timeout in hours | 3 |
| REDIS_URL | No | Redis connection for memory | redis://localhost:6379/0 |
| REDIS_DB | No | Redis database number | 0 |
| WEBSEARCH_ENABLED | No | Enable web search | true |
| FILE_SECURITY_CHECK | No | Validate file paths | true |
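As a minimal sketch of how a server might consume these variables, assuming a Python implementation: the names and defaults below come straight from the table, but the `load_config` helper itself is hypothetical and not part of this server's actual API.

```python
import os

def load_config() -> dict:
    """Read a representative subset of the documented variables,
    applying the documented defaults when a variable is unset.
    Illustrative only; names/defaults are from the table above."""
    return {
        "REDIS_URL": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "DEFAULT_MODEL": os.environ.get("DEFAULT_MODEL", "auto"),
        # Numeric limits arrive as strings and must be converted.
        "MAX_FILE_SIZE": int(os.environ.get("MAX_FILE_SIZE", "5242880")),
        "TEMPERATURE_CHAT": float(os.environ.get("TEMPERATURE_CHAT", "0.5")),
        "MAX_CONVERSATION_TURNS": int(os.environ.get("MAX_CONVERSATION_TURNS", "20")),
        # Boolean flags are commonly parsed case-insensitively from "true"/"false".
        "WEBSEARCH_ENABLED": os.environ.get("WEBSEARCH_ENABLED", "true").lower() == "true",
    }

config = load_config()
```

Note that the string-to-number and string-to-boolean conversions are the caller's responsibility: environment variables are always strings, so a value like `MAX_FILE_SIZE=5242880` must be cast before use.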
## Schema

### Prompts

Interactive templates invoked by user choice.

*No prompts.*

### Resources

Contextual data attached and managed by the client.

*No resources.*

### Tools

Functions exposed to the LLM to take actions.

*No tools.*