## Server Configuration

Describes the environment variables used to configure the server.
| Name | Required | Description | Default |
|---|---|---|---|
| OLLAMA_BASE_URL | No | The Ollama endpoint URL | http://localhost:11434 |
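How the server consumes this variable is not documented here; the sketch below shows one plausible way it could be resolved, assuming the server talks to the standard Ollama HTTP API. The helper name and the connectivity check are illustrative, not part of the server's documented behavior.

```python
import os

import requests


def resolve_ollama_base_url() -> str:
    """Resolve the Ollama endpoint from OLLAMA_BASE_URL, falling back to the documented default."""
    base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434").rstrip("/")
    # Ollama's /api/tags endpoint lists the installed models; a quick GET confirms
    # the endpoint is reachable before any tool calls are attempted.
    requests.get(f"{base_url}/api/tags", timeout=5).raise_for_status()
    return base_url
```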
## Capabilities

The capabilities exposed by this server are listed below.
### Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| consult_ollama | Send a prompt to an Ollama model and return its response, useful for reasoning from another viewpoint. If the requested model is unavailable locally, it automatically falls back to a cloud model (deepseek-v3.1:671b-cloud, kimi-k2-thinking:cloud) or a local alternative (mistral, llama2), so the call never fails on model availability (see the fallback sketch after this table). |
| list_ollama_models | List all available Ollama models on the local instance. |
| compare_ollama_models | Run the same prompt against multiple Ollama models and return their outputs side by side for comparison. Requested models that are unavailable fall back to cloud models or local alternatives, so a missing model never breaks the comparison. |
| remember_consult | Store the result of a consult in a local memory store (or a configured memory service). |
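The fallback behavior described for consult_ollama and compare_ollama_models can be pictured as an availability check against the Ollama HTTP API followed by a generation request. The sketch below is an illustration of that logic, not the server's actual implementation: the function names and the model-matching rules are assumptions, and only the standard Ollama endpoints (`/api/tags`, `/api/generate`) are relied on.

```python
import requests

OLLAMA_BASE_URL = "http://localhost:11434"  # or the value of OLLAMA_BASE_URL
FALLBACK_MODELS = [
    "deepseek-v3.1:671b-cloud",  # cloud fallbacks named in the tool description
    "kimi-k2-thinking:cloud",
    "mistral",                   # local fallbacks named in the tool description
    "llama2",
]


def installed_models() -> list[str]:
    """Names of models on the local Ollama instance, via GET /api/tags."""
    resp = requests.get(f"{OLLAMA_BASE_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]


def is_available(model: str, installed: list[str]) -> bool:
    """Match an exact name or a name with an implicit tag (e.g. 'mistral:latest')."""
    return any(name == model or name.startswith(model + ":") for name in installed)


def consult(model: str, prompt: str) -> str:
    """Send `prompt` to `model`, falling back to the first available alternative."""
    installed = installed_models()
    candidates = [model] + [m for m in FALLBACK_MODELS if m != model]
    chosen = next((m for m in candidates if is_available(m, installed)), candidates[-1])
    resp = requests.post(
        f"{OLLAMA_BASE_URL}/api/generate",
        json={"model": chosen, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

Under the same assumptions, compare_ollama_models can be thought of as calling a function like `consult` once per requested model and collecting the results side by side.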
### Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
No prompts | |
### Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
No resources | |