Multi-Model Advisor
by YuChenSSR
Server Configuration
Describes the environment variables used to configure the server. All are optional; unset variables fall back to the defaults shown.
| Name | Required | Description | Default |
| --- | --- | --- | --- |
| DEBUG | No | Enable debug mode | true |
| SERVER_NAME | No | The name of the server | multi-model-advisor |
| DEFAULT_MODELS | No | Comma-separated list of default Ollama models to use | gemma3:1b,llama3.2:1b,deepseek-r1:1.5b |
| OLLAMA_API_URL | No | URL of the Ollama API endpoint | http://localhost:11434 |
| SERVER_VERSION | No | The version of the server | 1.0.0 |
| GEMMA_SYSTEM_PROMPT | No | System prompt for the Gemma model | You are a supportive and empathetic AI assistant focused on human well-being. Provide considerate and balanced advice. |
| LLAMA_SYSTEM_PROMPT | No | System prompt for the Llama model | You are a logical and analytical AI assistant. Think step-by-step and explain your reasoning clearly. |
| DEEPSEEK_SYSTEM_PROMPT | No | System prompt for the Deepseek model | You are a creative and innovative AI assistant. Think outside the box and offer novel perspectives. |
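As an illustration, these variables could be passed to the server through an MCP client configuration. The sketch below follows the common `mcpServers` layout used by MCP clients such as Claude Desktop; the `command`, `args` path, and the decision to override `DEBUG` are placeholders, not instructions from this server's documentation:

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/path/to/multi-model-advisor/build/index.js"],
      "env": {
        "OLLAMA_API_URL": "http://localhost:11434",
        "DEFAULT_MODELS": "gemma3:1b,llama3.2:1b,deepseek-r1:1.5b",
        "DEBUG": "false"
      }
    }
  }
}
```

Any variable omitted from `env` falls back to the default listed in the table above.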
Schema
Prompts
Interactive templates invoked by user choice
No prompts.
Resources
Contextual data attached and managed by the client
No resources.
Tools
Functions exposed to the LLM to take actions
| Name | Description |
| --- | --- |
| list-available-models | List all available models in Ollama that can be used with query-models |
| query-models | Query multiple AI models via Ollama and get their responses to compare perspectives |
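To make the two tools concrete, here is a rough sketch of what they do against a local Ollama instance. This is not the server's actual implementation: it assumes only the standard Ollama REST endpoints (`/api/tags` for listing models, `/api/generate` for completions) and reuses the default models and system prompts from the configuration table above.

```python
import json
import urllib.request

# Assumed Ollama endpoint; matches the OLLAMA_API_URL default above.
OLLAMA_API_URL = "http://localhost:11434"

# Default models and their per-model system prompts, copied from the
# configuration table above.
SYSTEM_PROMPTS = {
    "gemma3:1b": (
        "You are a supportive and empathetic AI assistant focused on "
        "human well-being. Provide considerate and balanced advice."
    ),
    "llama3.2:1b": (
        "You are a logical and analytical AI assistant. Think "
        "step-by-step and explain your reasoning clearly."
    ),
    "deepseek-r1:1.5b": (
        "You are a creative and innovative AI assistant. Think outside "
        "the box and offer novel perspectives."
    ),
}


def list_available_models() -> list:
    """Roughly what list-available-models does: fetch the names of
    locally installed models from Ollama's /api/tags endpoint."""
    with urllib.request.urlopen(f"{OLLAMA_API_URL}/api/tags") as resp:
        return [m["name"] for m in json.loads(resp.read())["models"]]


def build_request(model: str, question: str) -> dict:
    """Assemble the JSON body for one non-streaming /api/generate call,
    attaching the model's configured system prompt."""
    return {
        "model": model,
        "system": SYSTEM_PROMPTS.get(model, ""),
        "prompt": question,
        "stream": False,
    }


def query_models(question: str) -> dict:
    """Roughly what query-models does: send the same question to every
    default model and collect the answers for side-by-side comparison."""
    answers = {}
    for model in SYSTEM_PROMPTS:
        body = json.dumps(build_request(model, question)).encode()
        req = urllib.request.Request(
            f"{OLLAMA_API_URL}/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            answers[model] = json.loads(resp.read())["response"]
    return answers
```

Because each model answers under a different system prompt (empathetic, analytical, creative), the collected responses give the calling LLM three distinct perspectives on the same question.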