
MCP Ollama Consult Server

compare_ollama_models

Compare outputs from multiple Ollama models by running the same prompt against each one and returning the responses side by side, so you can evaluate differences in quality and capability.

Instructions

Run the same prompt against multiple Ollama models and return their outputs side by side for comparison. Requested models that are unavailable automatically fall back to cloud models or local alternatives, so a missing model does not break the comparison.
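
As an illustration, a tool call might pass arguments shaped like the sketch below. The field names follow the input schema in the next section; the model names and prompt text are placeholders, not values taken from this server's documentation.

    // Illustrative arguments for a compare_ollama_models call (placeholder model names and prompt).
    const exampleArguments = {
      models: ["llama3.2", "mistral"], // optional; omit to compare the top two locally available models
      prompt: "Explain the difference between TCP and UDP in two sentences.",
      system_prompt: "Answer as a concise networking tutor.", // optional
    };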

Input Schema

Name           Required  Description                                    Default
models         No        Array of Ollama model names to compare         Top two locally available models
prompt         Yes       Prompt sent to every model                     (none)
system_prompt  No        System prompt applied to every model           (none)
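
Expressed as a TypeScript type, a sketch of the argument shape implied by this schema (not taken from the server source) is:

    // Argument shape implied by the input schema above; sketch only, not part of the server code.
    interface CompareOllamaModelsArgs {
      models?: string[];       // optional list of model names; defaults to the top two locally available models
      prompt: string;          // required prompt sent to every model
      system_prompt?: string;  // optional system prompt applied to every model
    }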

Implementation Reference

  • Main handler logic for 'compare_ollama_models': validates the input, resolves the requested models with fallbacks (or picks the top two locally available models when none are requested), generates a response from each model via the Ollama API, and collects the side-by-side results or per-model errors. The module-level constants it references, such as OLLAMA_BASE_URL, are sketched after this list.
    case "compare_ollama_models": {
      let modelsArg = args?.models as string[] | undefined;
      const prompt = args?.prompt as string;
      const system_prompt = args?.system_prompt as string | undefined;

      // Validate the only required argument
      if (!prompt) {
        return {
          content: [{ type: "text", text: "Missing required argument: prompt" }],
          isError: true,
        };
      }

      // Work out which models to compare
      let modelsToUse: string[] = [];
      if (Array.isArray(modelsArg) && modelsArg.length > 0) {
        // Resolve each requested model with fallback
        const resolved = await Promise.all(modelsArg.map((m) => resolveModel(m)));
        modelsToUse = resolved.map((r) => r.model);
      } else {
        // No models requested: default to the top two locally available models,
        // or llama2 if Ollama cannot be reached
        try {
          const resp = await axios.get(`${OLLAMA_BASE_URL}/api/tags`);
          modelsToUse = (resp.data.models || []).map((m: any) => m.name).slice(0, 2);
        } catch (err) {
          modelsToUse = ["llama2"];
        }
      }

      // Generate a response from each model; record failures instead of aborting the comparison
      const contents: any[] = [];
      for (const m of modelsToUse) {
        try {
          const gen = await axios.post(`${OLLAMA_BASE_URL}/api/generate`, {
            model: m,
            prompt,
            system: system_prompt,
            stream: false,
          });
          contents.push({ type: "text", text: `Model ${m}:\n${gen.data.response}` });
        } catch (e) {
          const message = e instanceof Error ? e.message : String(e);
          contents.push({ type: "text", text: `Model ${m} failed: ${message}` });
        }
      }

      return { content: contents };
    }
  • Tool registration in listTools(): defines name, description, and input schema for compare_ollama_models.
    { name: "compare_ollama_models", description: "Run the same prompt against multiple Ollama models and return their outputs side-by-side for comparison. Requested models that are unavailable automatically fall back to cloud models or local alternatives. Handles unavailable models gracefully without breaking the comparison.", inputSchema: { type: "object", properties: { models: { type: "array", items: { type: "string" } }, prompt: { type: "string" }, system_prompt: { type: "string" }, }, required: ["prompt"], }, },
  • Input schema defining parameters: models (array of strings, optional), prompt (string, required), system_prompt (string, optional).
    inputSchema: {
      type: "object",
      properties: {
        models: { type: "array", items: { type: "string" } },
        prompt: { type: "string" },
        system_prompt: { type: "string" },
      },
      required: ["prompt"],
    },
  • Helper function resolveModel, used by the handler to automatically fall back from unavailable models to cloud models (e.g., deepseek-v3.1:671b-cloud) or local alternatives such as mistral.
    async function resolveModel(requestedModel: string): Promise<{
      model: string;
      isCloud: boolean;
      fallback: boolean;
    }> {
      // If it's explicitly a cloud model, use it
      if (requestedModel.includes(":cloud")) {
        return { model: requestedModel, isCloud: true, fallback: false };
      }

      // Check if it's available locally
      const available = await getAvailableLocalModels();
      if (available.includes(requestedModel)) {
        return { model: requestedModel, isCloud: false, fallback: false };
      }

      // Not available locally - try cloud models
      for (const cloudModel of CLOUD_MODELS) {
        // In real implementation, you might verify cloud models are actually available
        // For now, we assume they are
        return { model: cloudModel, isCloud: true, fallback: true };
      }

      // If cloud fails, fall back to local models
      for (const localFallback of LOCAL_FALLBACKS) {
        if (available.includes(localFallback)) {
          return { model: localFallback, isCloud: false, fallback: true };
        }
      }

      // Last resort: use whatever is available, or the original request
      if (available.length > 0) {
        return { model: available[0], isCloud: false, fallback: true };
      }

      // If nothing is available, return the requested model anyway
      // (will fail gracefully in the actual API call)
      return { model: requestedModel, isCloud: false, fallback: true };
    }
  • Cached helper that fetches the list of locally available Ollama models from the /api/tags endpoint.
    async function getAvailableLocalModels(): Promise<string[]> {
      const now = Date.now();
      // Serve from cache while it is still fresh
      if (availableModelsCache && now - cacheTimestamp < CACHE_TTL) {
        return availableModelsCache;
      }
      try {
        const response = await axios.get(`${OLLAMA_BASE_URL}/api/tags`);
        const models = (response.data.models || []).map((m: any) => m.name);
        availableModelsCache = models;
        cacheTimestamp = now;
        return models;
      } catch {
        // If we can't reach Ollama, return empty (we'll fall back to cloud)
        return [];
      }
    }
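
The snippets above reference module-level configuration and cache state that this page does not show: OLLAMA_BASE_URL, CLOUD_MODELS, LOCAL_FALLBACKS, availableModelsCache, cacheTimestamp, and CACHE_TTL. The following is a minimal sketch of plausible declarations, assuming the standard local Ollama endpoint and a short cache window; the identifiers come from the snippets, but the concrete values are assumptions.

    // Hedged sketch of the module-level state used by the snippets above.
    // The identifiers appear in the snippets; the values here are assumptions, not the server's actual configuration.
    import axios from "axios"; // HTTP client used for the /api/tags and /api/generate calls above

    const OLLAMA_BASE_URL = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434"; // assumed default Ollama endpoint
    const CLOUD_MODELS = ["deepseek-v3.1:671b-cloud"];  // cloud fallback named in the notes above
    const LOCAL_FALLBACKS = ["mistral", "llama2"];      // local alternatives; llama2 matches the handler's last resort
    const CACHE_TTL = 60_000;                           // assumed 60-second TTL (ms) for the /api/tags cache

    let availableModelsCache: string[] | null = null;   // model list cached by getAvailableLocalModels()
    let cacheTimestamp = 0;                             // time of the last successful /api/tags fetch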

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Atomic-Germ/mcp-consult'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.