## Server Configuration
The server is configured through the following environment variables. None are required; any variable left unset falls back to the default shown.
| Name | Required | Description | Default |
|---|---|---|---|
| `MODEL_PROVIDER` | No | AI model provider to use | `ollama` |
| `LOCAL_MODEL` | No | Local model identifier | `llama3.2:latest` |
| `OLLAMA_BASE_URL` | No | Ollama API endpoint | `http://localhost:11434` |
| `LMSTUDIO_BASE_URL` | No | LM Studio API endpoint | `http://localhost:1234` |
| `CHUNK_SIZE` | No | Characters per chunk for AI processing | `2000` |
| `CHUNK_OVERLAP` | No | Characters of overlap between consecutive chunks, preserving context across chunk boundaries | `200` |
| `MAX_CHUNKS` | No | Maximum number of chunks to process (`0` = unlimited) | `0` |
| `HF_TOKEN` | No | Hugging Face API token (legacy, unused) | |
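As a concrete illustration of how these variables fit together, here is a minimal Python sketch that reads them with their documented defaults and splits text into overlapping chunks. The `chunk_text` helper is hypothetical, written only to show how `CHUNK_SIZE`, `CHUNK_OVERLAP`, and `MAX_CHUNKS` typically interact; the server's actual splitting logic may differ.

```python
import os

# Provider settings, read with the defaults documented above.
MODEL_PROVIDER = os.getenv("MODEL_PROVIDER", "ollama")
LOCAL_MODEL = os.getenv("LOCAL_MODEL", "llama3.2:latest")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
LMSTUDIO_BASE_URL = os.getenv("LMSTUDIO_BASE_URL", "http://localhost:1234")

# Chunking parameters.
CHUNK_SIZE = int(os.getenv("CHUNK_SIZE", "2000"))
CHUNK_OVERLAP = int(os.getenv("CHUNK_OVERLAP", "200"))
MAX_CHUNKS = int(os.getenv("MAX_CHUNKS", "0"))  # 0 = unlimited


def chunk_text(text: str) -> list[str]:
    """Hypothetical helper: split text into CHUNK_SIZE-character windows
    that overlap by CHUNK_OVERLAP characters, stopping after MAX_CHUNKS
    chunks when a limit is set."""
    # Each new chunk starts CHUNK_SIZE - CHUNK_OVERLAP characters after
    # the previous one, so consecutive chunks share CHUNK_OVERLAP characters.
    step = max(CHUNK_SIZE - CHUNK_OVERLAP, 1)
    chunks: list[str] = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + CHUNK_SIZE])
        if start + CHUNK_SIZE >= len(text):
            break  # the last chunk already reached the end of the text
        if MAX_CHUNKS and len(chunks) >= MAX_CHUNKS:
            break
    return chunks
```

With the defaults (`CHUNK_SIZE=2000`, `CHUNK_OVERLAP=200`), a 5,000-character document would be split into three chunks starting at offsets 0, 1,800, and 3,600, each sharing 200 characters with its neighbor.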
## Schema
### Prompts

Interactive templates invoked by user choice.

This server defines no prompts.
### Resources

Contextual data attached and managed by the client.

This server defines no resources.
### Tools

Functions exposed to the LLM to take actions.

This server defines no tools.