by hyzhak

chat_completion

Generate text responses using OpenAI-compatible chat completion API. Supports multimodal inputs, including images, for advanced AI interactions on the Ollama MCP Server.

Instructions

OpenAI-compatible chat completion API. Supports optional images per message for vision/multimodal models.

Input Schema

Name         Required  Description                                             Default
messages     Yes       Array of messages ({ role, content, images? })          —
model        Yes       Name of the Ollama model to use (string)                —
temperature  No        Sampling temperature, between 0 and 2                   —
think        No        Enable the model's thinking mode (boolean)              —
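
For illustration, here is a sketch of an arguments object that satisfies this schema. The model name "llava" and the image path are assumptions for the example, not part of the tool's documentation; any vision-capable Ollama model would work for image inputs.

```typescript
// Hypothetical arguments for a chat_completion call.
// "llava" and "./photo.png" are example values, not defaults.
const args = {
  model: "llava",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    {
      role: "user",
      content: "Describe this image in one sentence.",
      images: ["./photo.png"], // optional per-message image paths
    },
  ],
  temperature: 0.2, // must be within 0–2 per the schema
  think: false,     // optional; omitted fields are simply not sent
};
```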

Implementation Reference

  • The handler function for the 'chat_completion' tool. It invokes Ollama's chat API with the input parameters and formats the response as an OpenAI-compatible chat completion object in JSON.
    async ({ model, messages, temperature, think }) => {
      try {
        const response = await ollama.chat({
          model,
          messages,
          options: { temperature },
          ...(think !== undefined ? { think } : {}),
        });
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(
                {
                  id: "chatcmpl-" + Date.now(),
                  object: "chat.completion",
                  created: Math.floor(Date.now() / 1000),
                  model,
                  choices: [
                    {
                      index: 0,
                      message: response.message,
                      finish_reason: "stop",
                    },
                  ],
                },
                null,
                2
              ),
            },
          ],
        };
      } catch (error) {
        return {
          content: [{ type: "text", text: `Error: ${formatError(error)}` }],
          isError: true,
        };
      }
    }
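  • The envelope construction in the handler can be sketched in isolation. A minimal example, assuming a stubbed response message (the model name here is arbitrary):

```typescript
// Stand-in for response.message as returned by ollama.chat.
const message = { role: "assistant", content: "Hello!" };

// Shape the result the way the handler does: an
// OpenAI-compatible chat.completion object.
const envelope = {
  id: "chatcmpl-" + Date.now(),
  object: "chat.completion",
  created: Math.floor(Date.now() / 1000), // Unix seconds
  model: "llama3", // example model name
  choices: [{ index: 0, message, finish_reason: "stop" }],
};

console.log(JSON.stringify(envelope, null, 2));
```

Note that `finish_reason` is always "stop" here; the handler does not distinguish truncation or tool-call stops the way the OpenAI API does.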
  • The schema definition for the 'chat_completion' tool, including title, description, and Zod input schema validating model, messages (with roles, content, optional images), temperature, and think parameters.
    {
      title: "Chat completion",
      description:
        "OpenAI-compatible chat completion API. Supports optional images per message for vision/multimodal models.",
      inputSchema: {
        model: z.string(),
        messages: z.array(
          z.object({
            role: z.enum(["system", "user", "assistant"]),
            content: z.string(),
            images: z.array(z.string()).optional(), // Array of image paths
          })
        ),
        temperature: z.number().min(0).max(2).optional(),
        think: z.boolean().optional(),
      },
    }
  • src/index.ts:192-239 (registration)
    The registration of the 'chat_completion' tool using server.registerTool, including the tool name, schema, and handler function.
    "chat_completion",
    {
      title: "Chat completion",
      description:
        "OpenAI-compatible chat completion API. Supports optional images per message for vision/multimodal models.",
      inputSchema: {
        model: z.string(),
        messages: z.array(
          z.object({
            role: z.enum(["system", "user", "assistant"]),
            content: z.string(),
            images: z.array(z.string()).optional(), // Array of image paths
          })
        ),
        temperature: z.number().min(0).max(2).optional(),
        think: z.boolean().optional(),
      },
    },
    async ({ model, messages, temperature, think }) => {
      try {
        const response = await ollama.chat({
          model,
          messages,
          options: { temperature },
          ...(think !== undefined ? { think } : {}),
        });
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(
                {
                  id: "chatcmpl-" + Date.now(),
                  object: "chat.completion",
                  created: Math.floor(Date.now() / 1000),
                  model,
                  choices: [
                    {
                      index: 0,
                      message: response.message,
                      finish_reason: "stop",
                    },
                  ],
                },
                null,
                2
              ),
            },
          ],
        };
      } catch (error) {
        return {
          content: [{ type: "text", text: `Error: ${formatError(error)}` }],
          isError: true,
        };
      }
    }
    );

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hyzhak/ollama-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.