## Server Configuration

The environment variables used to configure the server. All are optional and fall back to the defaults shown.
| Name | Required | Description | Default |
|---|---|---|---|
| OLLAMA_URL | No | The URL for the Ollama backend (e.g., http://localhost:11434). Can be skipped if not using this backend. | http://localhost:11434 |
| CLIPROXYAPI_KEY | No | The local API key/passphrase for CLIProxyAPI, which must match the key defined in its config.yaml. | sk-my-local-key |
| CLIPROXYAPI_URL | No | The URL for the CLIProxyAPI backend (e.g., http://localhost:8317). Can be skipped if not using this backend. | http://localhost:8317 |
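As a minimal sketch, the variables above could be set in a shell before launching the server; the values here are simply the defaults from the table and are purely illustrative:

```shell
# Illustrative values only; adjust to your setup.
# OLLAMA_URL: only needed when querying models through Ollama.
export OLLAMA_URL="http://localhost:11434"
# CLIPROXYAPI_URL / CLIPROXYAPI_KEY: only needed for the CLIProxyAPI backend;
# the key must match the one defined in CLIProxyAPI's config.yaml.
export CLIPROXYAPI_URL="http://localhost:8317"
export CLIPROXYAPI_KEY="sk-my-local-key"
```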
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{ "listChanged": true }` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| list_models | List all available models across all providers. Run this first to see what you can query. |
| ask_model | Query any AI model with a prompt. Returns the model's response with metadata. |
| compare_models | Query 2-5 models in parallel with the same prompt. Returns side-by-side comparison with latency and token metrics. |
| consensus | Query 3-7 models and aggregate responses using voting strategy (majority/supermajority/unanimous). Returns consensus answer with confidence score. |
| synthesize | Query 2-5 models in parallel, then combine their best ideas into one answer. Returns a single response that synthesizes the strongest elements of each model's answer. |
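Assuming the server speaks standard MCP JSON-RPC, a client would invoke one of the tools above with a `tools/call` request. The sketch below calls `ask_model`; the argument names (`model`, `prompt`) and values are assumptions for illustration, since the actual input schema is what the server reports via `tools/list`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_model",
    "arguments": {
      "model": "llama3",
      "prompt": "Summarize the CAP theorem in one sentence."
    }
  }
}
```

Running `list_models` first, as the table suggests, shows which values are valid for the model argument.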
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |