## Server Configuration

Environment variables used to configure the server. All are optional; defaults are shown below.
| Name | Required | Description | Default |
|---|---|---|---|
| LLM_PROVIDER | No | The LLM provider to use (`openai`, `ollama`, or `mock`) | mock |
| OLLAMA_MODEL | No | The Ollama model to use (required when LLM_PROVIDER=ollama) | llama3 |
| OPENAI_MODEL | No | The OpenAI model to use | gpt-4.1-mini |
| OPENAI_API_KEY | No | Your OpenAI API key (required when LLM_PROVIDER=openai) | |
| OLLAMA_BASE_URL | No | The base URL for Ollama server | http://127.0.0.1:11434 |
| LOW_LATENCY_MODE | No | Enable low-latency mode | true |
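As a minimal sketch, the variables above can be set in the shell before launching the server. This example targets a local Ollama instance; the values shown are illustrative, and the server start command itself depends on your project's entry point, so it is omitted here.

```shell
# Select the Ollama provider instead of the default mock provider.
export LLM_PROVIDER=ollama

# Model and endpoint; these match the documented defaults.
export OLLAMA_MODEL=llama3
export OLLAMA_BASE_URL=http://127.0.0.1:11434

# Optional performance toggle (defaults to true).
export LOW_LATENCY_MODE=true
```

When `LLM_PROVIDER=openai`, set `OPENAI_API_KEY` (and optionally `OPENAI_MODEL`) instead; the Ollama variables are then ignored.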
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| _No tools_ | |
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| _No prompts_ | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| _No resources_ | |