chat_completion
Generate text responses using an OpenAI-compatible chat completion API. Supports multimodal inputs, including images, for advanced AI interactions on the Ollama MCP Server.
Instructions
OpenAI-compatible chat completion API. Supports optional images per message for vision/multimodal models.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| messages | Yes | Array of chat messages, each with a `role` (`system`, `user`, or `assistant`), `content` string, and optional `images` (array of image paths, for vision/multimodal models). | |
| model | Yes | Name of the Ollama model to run the completion with. | |
| temperature | No | Sampling temperature, between 0 and 2. | |
| think | No | Whether to enable the model's thinking mode (boolean). | |
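As a sketch of what a call to this tool looks like, the arguments object below matches the input schema. The model name and image path are placeholders for illustration, not values the server defines.

```typescript
// Hypothetical arguments for a chat_completion tool call.
type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
  images?: string[]; // optional image paths for vision/multimodal models
};

const args = {
  model: "llama3.2", // placeholder model name
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Describe this picture.", images: ["./photo.png"] },
  ] as ChatMessage[],
  temperature: 0.7, // must be between 0 and 2
  think: false,
};

console.log(JSON.stringify(args, null, 2));
```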
Implementation Reference
- `src/index.ts:207-238` (handler) — the handler function for the `chat_completion` tool. It invokes Ollama's chat API with the input parameters and formats the response as an OpenAI-compatible chat completion object in JSON.

  ```typescript
  async ({ model, messages, temperature, think }) => {
    try {
      const response = await ollama.chat({
        model,
        messages,
        options: { temperature },
        ...(think !== undefined ? { think } : {}),
      });
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                id: "chatcmpl-" + Date.now(),
                object: "chat.completion",
                created: Math.floor(Date.now() / 1000),
                model,
                choices: [
                  {
                    index: 0,
                    message: response.message,
                    finish_reason: "stop",
                  },
                ],
              },
              null,
              2
            ),
          },
        ],
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Error: ${formatError(error)}` }],
        isError: true,
      };
    }
  }
  ```
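The response-shaping step in the handler can be isolated as a small runnable sketch. The `toChatCompletion` helper below is not a function the server exports; it only mirrors how the handler builds the OpenAI-compatible `chat.completion` object from Ollama's reply.

```typescript
// Minimal sketch of the handler's response shaping (hypothetical helper,
// not part of src/index.ts).
type OllamaMessage = { role: string; content: string };

function toChatCompletion(model: string, message: OllamaMessage) {
  return {
    id: "chatcmpl-" + Date.now(),           // pseudo-ID derived from the clock
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000), // Unix timestamp in seconds
    model,
    choices: [{ index: 0, message, finish_reason: "stop" }],
  };
}

const completion = toChatCompletion("llama3.2", {
  role: "assistant",
  content: "Hello!",
});
console.log(JSON.stringify(completion, null, 2));
```

Note that `finish_reason` is always reported as `"stop"`, and the `id` is a timestamp rather than a server-generated identifier.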
- `src/index.ts:193-206` (schema) — the schema definition for the `chat_completion` tool, including title, description, and Zod input schema validating `model`, `messages` (with roles, content, optional images), `temperature`, and `think` parameters.

  ```typescript
  {
    title: "Chat completion",
    description:
      "OpenAI-compatible chat completion API. Supports optional images per message for vision/multimodal models.",
    inputSchema: {
      model: z.string(),
      messages: z.array(
        z.object({
          role: z.enum(["system", "user", "assistant"]),
          content: z.string(),
          images: z.array(z.string()).optional(), // Array of image paths
        })
      ),
      temperature: z.number().min(0).max(2).optional(),
      think: z.boolean().optional(),
    },
  }
  ```
- `src/index.ts:192-239` (registration) — registers the `chat_completion` tool via `server.registerTool`, passing the tool name, the schema, and the handler (both reproduced in full in the entries above).

  ```typescript
  server.registerTool(
    "chat_completion",
    { /* schema — src/index.ts:193-206, shown above */ },
    async ({ model, messages, temperature, think }) => {
      /* handler — src/index.ts:207-238, shown above */
    }
  );
  ```