ollama-mcp-server by hyzhak

run

Execute AI models locally with a prompt, optionally supplying image inputs or adjusting the sampling temperature, enabling controlled, private AI interactions via the MCP server.

Instructions

Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.

Input Schema

| Name        | Required | Description                                              | Default |
|-------------|----------|----------------------------------------------------------|---------|
| images      | No       | Array of image file paths, for vision/multimodal models  | —       |
| name        | Yes      | Name of the model to run                                 | —       |
| prompt      | Yes      | Prompt text to send to the model                         | —       |
| temperature | No       | Sampling temperature, between 0 and 2                    | —       |
| think       | No       | Whether to include the model's thinking step             | —       |
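
Concretely, an MCP client invokes this tool with a tools/call request whose arguments follow the table above. A sketch of the payload; the model name and image path are placeholders, not values from this repository:

```typescript
// Sketch of a tools/call request body for the 'run' tool.
// Model name and image path are placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "run",
    arguments: {
      name: "llava",                    // required: Ollama model to run
      prompt: "What is in this image?", // required: prompt text
      images: ["/tmp/photo.png"],       // optional: image file paths
      temperature: 0.2,                 // optional: 0–2
    },
  },
};
```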

Implementation Reference

  • The handler function for the 'run' tool. It calls ollama.generate with the model, prompt, optional images, temperature, and think flag; formats the response, prepending the thinking block when present; and returns errors as isError results. (A sketch of the formatError helper it references follows this list.)

    ```typescript
    async ({ name, prompt, images, temperature, think }) => {
      try {
        const result = await ollama.generate({
          model: name,
          prompt,
          options: temperature !== undefined ? { temperature } : {},
          ...(images ? { images } : {}),
          ...(think !== undefined ? { think } : {}),
        });
        const content: Array<ContentBlock> = [];
        if (result?.thinking) {
          content.push({ type: "text", text: `<think>${result.thinking}</think>` });
        }
        content.push({ type: "text", text: result.response ?? "" });
        return { content };
      } catch (error) {
        return {
          content: [{ type: "text", text: `Error: ${formatError(error)}` }],
          isError: true,
        };
      }
    }
    ```
  • The schema definition for the 'run' tool: title, description, and an inputSchema with Zod validators for name, prompt, images, temperature, and think. (A standalone validation sketch follows this list.)

    ```typescript
    {
      title: "Run model",
      description:
        "Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.",
      inputSchema: {
        name: z.string(),
        prompt: z.string(),
        images: z.array(z.string()).optional(), // Array of image paths
        temperature: z.number().min(0).max(2).optional(),
        think: z.boolean().optional(),
      },
    },
    ```
  • src/index.ts:155-188 (registration)
    The registration call for the 'run' tool via server.registerTool("run", schema, handler), passing the name together with the schema and handler shown above verbatim.
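
The handler above relies on a formatError helper that this page does not show. A minimal sketch of what such a helper might look like, assuming it only normalizes thrown values to strings (the actual implementation in src/index.ts may differ):

```typescript
// Hypothetical sketch of the formatError helper referenced by the handler.
// Assumption: it normalizes Error instances and arbitrary thrown values to a string.
function formatError(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}
```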
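For validation outside the MCP SDK, the same Zod validators can be wrapped in z.object to check arguments before they reach the handler. A sketch for illustration, not code from the repository:

```typescript
import { z } from "zod";

// The 'run' tool's input schema, rebuilt as a standalone object.
const runInput = z.object({
  name: z.string(),
  prompt: z.string(),
  images: z.array(z.string()).optional(), // image file paths
  temperature: z.number().min(0).max(2).optional(),
  think: z.boolean().optional(),
});

// Throws a ZodError if, for example, temperature falls outside [0, 2].
const parsed = runInput.parse({
  name: "llama3.2",
  prompt: "Why is the sky blue?",
  temperature: 0.7,
});
```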
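Finally, a hedged sketch of calling the registered tool from an MCP client over stdio, using the official TypeScript SDK; the command used to launch the server is a placeholder, not taken from this repository:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command; substitute however you start this server locally.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the 'run' tool with the arguments described in the Input Schema table.
const result = await client.callTool({
  name: "run",
  arguments: { name: "llama3.2", prompt: "Why is the sky blue?" },
});
console.log(result.content);

await client.close();
```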

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hyzhak/ollama-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.