Multi-Model Advisor

by YuChenSSR

Server Configuration

Describes the environment variables used to configure the server. All are optional; unset variables fall back to the defaults shown.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| DEBUG | No | Enable debug mode | true |
| SERVER_NAME | No | The name of the server | multi-model-advisor |
| DEFAULT_MODELS | No | Comma-separated list of default Ollama models to use | gemma3:1b,llama3.2:1b,deepseek-r1:1.5b |
| OLLAMA_API_URL | No | URL of the Ollama API endpoint | http://localhost:11434 |
| SERVER_VERSION | No | The version of the server | 1.0.0 |
| GEMMA_SYSTEM_PROMPT | No | System prompt for the Gemma model | You are a supportive and empathetic AI assistant focused on human well-being. Provide considerate and balanced advice. |
| LLAMA_SYSTEM_PROMPT | No | System prompt for the Llama model | You are a logical and analytical AI assistant. Think step-by-step and explain your reasoning clearly. |
| DEEPSEEK_SYSTEM_PROMPT | No | System prompt for the Deepseek model | You are a creative and innovative AI assistant. Think outside the box and offer novel perspectives. |

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

| Name | Description |
|------|-------------|
| list-available-models | List all available models in Ollama that can be used with query-models |
| query-models | Query multiple AI models via Ollama and get their responses to compare perspectives |
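
To make the two tools concrete, here is a minimal Python sketch of the Ollama calls they correspond to: listing installed models via `GET /api/tags`, and building one `POST /api/chat` request per model (query-models would send one such request for each model in DEFAULT_MODELS, pairing it with that model's system prompt). This is not the server's actual implementation; the function names are illustrative.

```python
import json
import urllib.request

# Default endpoint from the configuration table above.
OLLAMA_API_URL = "http://localhost:11434"

def list_available_models(base_url: str = OLLAMA_API_URL) -> list:
    """Roughly what list-available-models does: ask Ollama which
    models are installed, via its /api/tags endpoint."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def build_query(model: str, question: str, system_prompt: str) -> bytes:
    """Build the JSON body for one /api/chat request. query-models
    sends one of these per model, each with its own system prompt."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }).encode()
```

Comparing perspectives then amounts to collecting the `message.content` field from each model's response side by side.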