# Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| API_KEY | Yes | API key for the selected provider | |
| MODEL_ID | No | Specific model to use (defaults to provider's standard model) | |
| PROVIDER | Yes | AI provider to use. Supported values: ANTHROPIC, OPENAI, OPENAI-COMPATIBLE, GOOGLE | |
| MAX_TOKENS | No | Maximum tokens for model responses | 1024 |
| MCP_WORKING_DIR | No | Fallback directory used when resolving relative file paths | |
| PROVIDER_BASE_URL | No | Custom API endpoint for OpenAI-compatible providers | |
| SUMMARIZATION_CACHE_MAX_AGE | No | How long cached summaries remain valid, in milliseconds | 3600000 |
| SUMMARIZATION_CHAR_THRESHOLD | No | Character count above which content is summarized | 512 |
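A minimal environment setup might look like the sketch below. All values are placeholders, and the exact launch command depends on how the server is deployed, so only the variable names come from the table above.

```shell
# Minimal required configuration: provider and its API key.
export PROVIDER=ANTHROPIC
export API_KEY=sk-placeholder-key   # substitute your real key

# Optional tuning (defaults apply when unset).
export MAX_TOKENS=2048
export MCP_WORKING_DIR=/srv/mcp/files

# For an OpenAI-compatible gateway, a custom endpoint is also needed:
# export PROVIDER=OPENAI-COMPATIBLE
# export PROVIDER_BASE_URL=https://llm-gateway.example.com/v1
```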
# Schema
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |
## Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| No tools | |