Ollama MCP Server

by hyzhak

run

Execute local AI models with text prompts. Vision/multimodal models are supported through image inputs, and a temperature parameter controls response variation.

Instructions

Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.

Input Schema

Name         Required  Description                                        Default
name         Yes       Name of the model to run                           -
prompt       Yes       Text prompt to send to the model                   -
images       No        Array of image file paths for vision models        -
temperature  No        Sampling temperature, between 0 and 2              -
think        No        Whether the model should produce thinking output   -
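
For concreteness, here is a minimal sketch of invoking the run tool from a TypeScript MCP client over stdio. The server launch command and the model name "llama3.2" are assumptions; substitute whatever matches your installation, and note that the model must already be pulled into the local Ollama instance.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Assumed launch command for the server; adjust to your build.
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/index.js"],
    });

    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Invoke the 'run' tool with the required model name and prompt,
    // plus an optional temperature; images and think are omitted here.
    const result = await client.callTool({
      name: "run",
      arguments: {
        name: "llama3.2", // placeholder model name
        prompt: "Why is the sky blue?",
        temperature: 0.2,
      },
    });

    console.log(result.content);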

Implementation Reference

  • Handler function for the 'run' tool. Executes ollama.generate with the model name, prompt, and optional images, temperature, and think parameters; formats the response, including thinking output if present, and reports failures via isError.

    async ({ name, prompt, images, temperature, think }) => {
      try {
        const result = await ollama.generate({
          model: name,
          prompt,
          options: temperature !== undefined ? { temperature } : {},
          ...(images ? { images } : {}),
          ...(think !== undefined ? { think } : {}),
        });
        const content: Array<ContentBlock> = [];
        if (result?.thinking) {
          content.push({ type: "text", text: `<think>${result.thinking}</think>` });
        }
        content.push({ type: "text", text: result.response ?? "" });
        return { content };
      } catch (error) {
        return {
          content: [{ type: "text", text: `Error: ${formatError(error)}` }],
          isError: true,
        };
      }
    }
  • Input schema definition for the 'run' tool using Zod: required name and prompt, plus an optional images array, temperature (0-2), and think boolean.

    {
      title: "Run model",
      description:
        "Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.",
      inputSchema: {
        name: z.string(),
        prompt: z.string(),
        images: z.array(z.string()).optional(), // Array of image paths
        temperature: z.number().min(0).max(2).optional(),
        think: z.boolean().optional(),
      },
    }
  • src/index.ts:155-188 (registration)
    Registration of the 'run' tool via server.registerTool, passing the tool name, the schema object above, and the inline handler above.

    server.registerTool(
      "run",
      { /* input schema shown above */ },
      async ({ /* ... */ }) => { /* handler shown above */ }
    );
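
The handler calls a formatError helper that is defined elsewhere in src/index.ts and not shown on this page. A minimal sketch, assuming its only job is to normalize an unknown thrown value into a message string (the actual implementation may differ):

    // Hypothetical sketch; the real formatError in src/index.ts may differ.
    function formatError(error: unknown): string {
      return error instanceof Error ? error.message : String(error);
    }

Note that when a thinking-capable model is run with think enabled, the tool returns two text blocks: the thinking wrapped in <think> tags, followed by the response text.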

MCP directory API

All information about MCP servers in the directory is available via our MCP directory API. For example:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hyzhak/ollama-mcp-server'
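
The same request in TypeScript, assuming only that the endpoint returns JSON (the response fields are not documented on this page):

    const res = await fetch(
      "https://glama.ai/api/mcp/v1/servers/hyzhak/ollama-mcp-server",
    );
    if (!res.ok) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    // Response shape is not documented here, so treat it as generic JSON.
    const server: unknown = await res.json();
    console.log(server);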

If you have feedback or need assistance with the MCP directory API, please join our Discord server.