run
Execute local AI models with text prompts, supporting vision models through image inputs and temperature control for response variation.
Instructions
Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name of the model to run | |
| prompt | Yes | Text prompt to send to the model | |
| images | No | Array of image file paths, for vision/multimodal models | |
| temperature | No | Sampling temperature, between 0 and 2 | |
| think | No | Whether to request the model's thinking output | |
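A call to this tool might pass arguments shaped like the following sketch. The model name and image path here are illustrative placeholders, not values from the source:

```typescript
// Example arguments for the "run" tool. The model name and image path
// are hypothetical placeholders for illustration only.
const args = {
  name: "llava",                  // required: model to run
  prompt: "Describe this image.", // required: text prompt
  images: ["/tmp/photo.png"],     // optional: image paths for vision models
  temperature: 0.7,               // optional: must fall within 0-2
  think: false,                   // optional: request thinking output
};

console.log(JSON.stringify(args));
```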
Implementation Reference
- **src/index.ts:168-187 (handler)**: Handler function for the `run` tool. Executes `ollama.generate` with the model name, prompt, and optional images, temperature, and think parameters; formats the response, including thinking output if present, and handles errors.

```typescript
async ({ name, prompt, images, temperature, think }) => {
  try {
    const result = await ollama.generate({
      model: name,
      prompt,
      options: temperature !== undefined ? { temperature } : {},
      ...(images ? { images } : {}),
      ...(think !== undefined ? { think } : {}),
    });
    const content: Array<ContentBlock> = [];
    if (result?.thinking) {
      content.push({ type: "text", text: `<think>${result.thinking}</think>` });
    }
    content.push({ type: "text", text: result.response ?? "" });
    return { content };
  } catch (error) {
    return {
      content: [{ type: "text", text: `Error: ${formatError(error)}` }],
      isError: true,
    };
  }
}
```
- **src/index.ts:157-167 (schema)**: Input schema for the `run` tool, defined with Zod: required `name` and `prompt`, an optional `images` array, `temperature` constrained to 0-2, and a `think` boolean.

```typescript
{
  title: "Run model",
  description: "Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.",
  inputSchema: {
    name: z.string(),
    prompt: z.string(),
    images: z.array(z.string()).optional(), // Array of image paths
    temperature: z.number().min(0).max(2).optional(),
    think: z.boolean().optional(),
  },
},
```
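The handler's conditional spreads mean omitted optional fields never appear as keys on the request at all. This can be sketched as a small standalone helper; `buildGenerateRequest` is an illustrative name, not part of the source:

```typescript
// Sketch of the request-assembly pattern: optional fields are spread in
// only when defined, so absent parameters produce no keys on the request.
// buildGenerateRequest is a hypothetical helper, not from the source.
interface GenerateRequest {
  model: string;
  prompt: string;
  options: { temperature?: number };
  images?: string[];
  think?: boolean;
}

function buildGenerateRequest(
  name: string,
  prompt: string,
  images?: string[],
  temperature?: number,
  think?: boolean,
): GenerateRequest {
  return {
    model: name,
    prompt,
    options: temperature !== undefined ? { temperature } : {},
    ...(images ? { images } : {}),      // spreading {} adds no keys
    ...(think !== undefined ? { think } : {}),
  };
}

const req = buildGenerateRequest("m", "hi", undefined, 0.5);
```

Keeping undefined options out of the object entirely avoids sending explicit `undefined` values to the Ollama client.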
- **src/index.ts:155-188 (registration)**: Registration of the `run` tool via `server.registerTool`, combining the name, the schema above, and the inline handler.

```typescript
server.registerTool(
  "run",
  {
    title: "Run model",
    description: "Run a model with a prompt. Optionally accepts an image file path for vision/multimodal models and a temperature parameter.",
    inputSchema: {
      name: z.string(),
      prompt: z.string(),
      images: z.array(z.string()).optional(), // Array of image paths
      temperature: z.number().min(0).max(2).optional(),
      think: z.boolean().optional(),
    },
  },
  async ({ name, prompt, images, temperature, think }) => {
    try {
      const result = await ollama.generate({
        model: name,
        prompt,
        options: temperature !== undefined ? { temperature } : {},
        ...(images ? { images } : {}),
        ...(think !== undefined ? { think } : {}),
      });
      const content: Array<ContentBlock> = [];
      if (result?.thinking) {
        content.push({ type: "text", text: `<think>${result.thinking}</think>` });
      }
      content.push({ type: "text", text: result.response ?? "" });
      return { content };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Error: ${formatError(error)}` }],
        isError: true,
      };
    }
  }
);
```
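The response-shaping step of the handler can be exercised in isolation. This sketch reproduces the `<think>` wrapping and the empty-string fallback; `formatContent` is a hypothetical helper and the `ContentBlock` type is simplified relative to the source:

```typescript
// Sketch of the handler's response formatting: thinking output, when
// present, is wrapped in <think> tags ahead of the response text, and a
// missing response falls back to "". formatContent is a hypothetical
// helper; ContentBlock is simplified for this example.
type ContentBlock = { type: "text"; text: string };

function formatContent(result: { response?: string; thinking?: string }): ContentBlock[] {
  const content: ContentBlock[] = [];
  if (result.thinking) {
    content.push({ type: "text", text: `<think>${result.thinking}</think>` });
  }
  content.push({ type: "text", text: result.response ?? "" });
  return content;
}

const blocks = formatContent({ response: "Hello", thinking: "reasoning" });
```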