MCP Ollama Consult Server
by Atomic-Germ

Server Configuration

Describes the environment variables required to run the server.

Name             Required  Description              Default
OLLAMA_BASE_URL  No        The Ollama endpoint URL  http://localhost:11434
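
This variable is the only configuration knob the listing documents. As a rough sketch of what it controls, the snippet below resolves the endpoint from OLLAMA_BASE_URL, falling back to the documented default, and verifies it against Ollama's real /api/tags model-listing route. It is illustrative, not the server's actual source; only the variable name, the default URL, and the /api/tags endpoint come from this page or Ollama's public API.

```typescript
// Resolve the Ollama endpoint as the table above describes:
// use OLLAMA_BASE_URL when set, otherwise the documented default.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";

// Sanity-check the endpoint by listing models via Ollama's /api/tags route.
async function listModels(): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama unreachable at ${baseUrl} (${res.status})`);
  }
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}

listModels().then((names) => console.log("available models:", names));
```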

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM so it can take actions

consult_ollama

Consult an Ollama model with a prompt and get its response, useful for reasoning from another viewpoint. If the requested model is unavailable locally, the tool automatically falls back to cloud models (deepseek-v3.1:671b-cloud, kimi-k2-thinking:cloud) or local alternatives (mistral, llama2), so it never fails on model availability. See the client sketch after this list.

list_ollama_models

List all available Ollama models on the local instance.

compare_ollama_models

Run the same prompt against multiple Ollama models and return their outputs side by side for comparison. Requested models that are unavailable fall back to cloud models or local alternatives, so a missing model never breaks the comparison.

remember_consult

Store the result of a consult into a local memory store (or configured memory service).
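
For orientation, here is a minimal sketch of driving these tools from an MCP client using the official TypeScript SDK (this is the client sketch referenced above). The tool names come from this list; the launch command ("npx mcp-consult") and the argument names (model, models, prompt) are assumptions for illustration and may not match the server's actual schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import {
  StdioClientTransport,
  getDefaultEnvironment,
} from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio. The command is an assumption; substitute
// however you normally start this server. OLLAMA_BASE_URL matches the
// configuration table above.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["mcp-consult"],
  env: { ...getDefaultEnvironment(), OLLAMA_BASE_URL: "http://localhost:11434" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Ask a single model for a second opinion; per the description above, the
// server falls back to cloud or local alternatives if the model is missing.
const consult = await client.callTool({
  name: "consult_ollama",
  arguments: {
    model: "mistral",
    prompt: "Critique this plan: cache every API response forever.",
  },
});
console.log(consult.content);

// Run the same prompt across several models, side by side.
const comparison = await client.callTool({
  name: "compare_ollama_models",
  arguments: {
    models: ["mistral", "llama2"],
    prompt: "Critique this plan: cache every API response forever.",
  },
});
console.log(comparison.content);
```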

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Atomic-Germ/mcp-consult'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.