# Server Configuration
This section describes the environment variables used to configure the server. All of them are optional; a sketch of reading them at startup follows the table.
Name | Required | Description | Default |
---|---|---|---|
BATCH_SIZE | No | Batch size for indexing | |
CHROMA_HOST | No | The host for ChromaDB | |
CHROMA_PORT | No | The port for ChromaDB | |
OLLAMA_HOST | No | The host URL for Ollama service | |
COMPANY_NAME | No | Your company name | |
OLLAMA_MODEL | No | The Ollama model to use for embeddings | |
OPENAI_MODEL | No | The OpenAI model to use for embeddings | |
MAX_FILE_SIZE | No | Maximum file size in KB | |
MAX_CHUNK_SIZE | No | Maximum chunk size in characters | |
OPENAI_API_KEY | No | Your OpenAI API key | |
CHROMA_SERVER_HOST | No | ChromaDB server host for restricting access | |
EMBEDDING_PROVIDER | No | The embedding provider to use (ollama or openai) | |
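
To make the table concrete, here is a minimal sketch of how a subset of these variables might be read at startup, written in Python. The `load_config` helper and every fallback value are illustrative assumptions; the table above does not document defaults.

```python
import os


def load_config() -> dict:
    """Read server settings from the environment.

    All variables are optional; the fallbacks below are illustrative
    assumptions, not documented defaults.
    """
    provider = os.environ.get("EMBEDDING_PROVIDER", "ollama").lower()
    if provider not in ("ollama", "openai"):
        raise ValueError(
            f"EMBEDDING_PROVIDER must be 'ollama' or 'openai', got {provider!r}"
        )

    config = {
        "batch_size": int(os.environ.get("BATCH_SIZE", "100")),
        "chroma_host": os.environ.get("CHROMA_HOST", "localhost"),
        "chroma_port": int(os.environ.get("CHROMA_PORT", "8000")),
        "max_file_size_kb": int(os.environ.get("MAX_FILE_SIZE", "1024")),
        "max_chunk_size": int(os.environ.get("MAX_CHUNK_SIZE", "1500")),
        "embedding_provider": provider,
    }

    if provider == "openai":
        # OPENAI_API_KEY has no sensible default; fail early if the OpenAI
        # provider is selected without it.
        config["openai_api_key"] = os.environ["OPENAI_API_KEY"]
        config["openai_model"] = os.environ.get("OPENAI_MODEL", "text-embedding-3-small")
    else:
        config["ollama_host"] = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
        config["ollama_model"] = os.environ.get("OLLAMA_MODEL", "nomic-embed-text")

    return config
```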
# Schema

## Prompts

Interactive templates invoked by user choice.
No prompts are defined by this server.
## Resources

Contextual data attached and managed by the client.
No resources are defined by this server.
## Tools

Functions exposed to the LLM to take actions; a client-side usage sketch follows the table.
Name | Description |
---|---|
index_local_project | Index a local project directory into the vector database |
search_codebase | Search the indexed codebase using semantic search |
list_indexed_projects | List all projects currently indexed |
get_embedding_provider_info | Get information about the current embedding provider |
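
These tools can be exercised from any MCP client. The sketch below uses the official mcp Python SDK to connect over stdio and call two of the tools. The server launch command (python server.py) and the argument names (path, query) are assumptions for illustration, since the table does not document parameter schemas.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the server; substitute the real one.
    params = StdioServerParameters(command="python", args=["server.py"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Tool names come from the table above; argument names are assumed.
            await session.call_tool("index_local_project", {"path": "/path/to/project"})
            result = await session.call_tool(
                "search_codebase", {"query": "where are embeddings computed?"}
            )
            for item in result.content:
                print(item)


asyncio.run(main())
```

A client would typically call index_local_project once per project and then issue search_codebase queries against the resulting index; list_indexed_projects and get_embedding_provider_info are useful for inspecting server state before querying.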