# Server Configuration

The following environment variables configure the server. All are optional and fall back to the defaults shown.
| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `LLM_MODEL` | No | The LLM model to use | `local-model` |
| `LLM_API_KEY` | No | API key for the LLM service (not required for local LLMs) | |
| `LLM_TIMEOUT` | No | Request timeout in seconds | `60` |
| `LLM_MAX_TOKENS` | No | Maximum tokens for the LLM response | `600` |
| `LLM_API_ENDPOINT` | No | The LLM API endpoint URL | `http://localhost:1234/v1/chat/completions` |
| `CONTENT_MAX_RETRIES` | No | Maximum number of content retries | `2` |
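As a hypothetical sketch (not the server's actual implementation), the variables above could be read with their documented defaults like this:

```python
import os

# Sketch only: load each documented variable, falling back to its
# documented default when the variable is unset.
config = {
    "model": os.getenv("LLM_MODEL", "local-model"),
    "api_key": os.getenv("LLM_API_KEY", ""),  # may be empty for local LLMs
    "timeout": int(os.getenv("LLM_TIMEOUT", "60")),
    "max_tokens": int(os.getenv("LLM_MAX_TOKENS", "600")),
    "endpoint": os.getenv(
        "LLM_API_ENDPOINT", "http://localhost:1234/v1/chat/completions"
    ),
    "content_max_retries": int(os.getenv("CONTENT_MAX_RETRIES", "2")),
}
```

Numeric values arrive as strings from the environment, so they are converted with `int()` before use.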
## Schema
### Prompts

Interactive templates invoked by user choice.

| Name | Description |
|------|-------------|
| No prompts | |
### Resources

Contextual data attached and managed by the client.

| Name | Description |
|------|-------------|
| No resources | |
### Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|------|-------------|
| No tools | |