
ollama_generate

Generate AI text responses using local Ollama models for tasks requiring natural language processing on macOS.

Instructions

Generates a response using a local Ollama model. Use for tasks requiring AI text processing.

Input Schema

Name   | Required | Description                                                                      | Default
model  | No       | Ollama model name (e.g., 'llama3.2', 'deepseek-r1:8b'). Defaults to 'llama3.2'. | llama3.2
prompt | Yes      | Prompt for the model                                                             | (none)
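
As a rough illustration of the schema above, here is a hypothetical JSON-RPC tools/call request an MCP client might send for this tool; the envelope follows the standard MCP tool-call shape, only prompt is required, and model falls back to 'llama3.2' when omitted.

    import json

    # Hypothetical client-side request for illustration only; the arguments
    # match the input schema above (prompt required, model optional).
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "ollama_generate",
            "arguments": {
                "prompt": "Summarize the benefits of running models locally in two sentences.",
                "model": "llama3.2",  # optional; defaults to "llama3.2"
            },
        },
    }

    print(json.dumps(request, indent=2))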

Implementation Reference

  • Python handler function for the ollama_generate tool that sends a POST request to the Ollama API to generate text (a standalone usage sketch follows this list).
    def ollama_generate(prompt: str, model: str = "llama3.2") -> str:
        """Generates response via Ollama API"""
        try:
            response = requests.post(
                f"{OLLAMA_API_URL}/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=30,
            )
            response.raise_for_status()
            data = response.json()
            return data.get("response", "No response from model")
        except requests.exceptions.ConnectionError:
            raise Exception(
                f"Failed to connect to Ollama server ({OLLAMA_API_URL}). "
                "Make sure Ollama is running: ollama serve"
            )
        except Exception as e:
            raise Exception(f"Ollama error: {str(e)}")
  • TypeScript handler method for the ollama_generate tool that uses fetch to call the Ollama API.
    private async ollamaGenerate(prompt: string, model: string = "llama3.2") {
      try {
        const response = await fetch(`${OLLAMA_API_URL}/api/generate`, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            model,
            prompt,
            stream: false,
          }),
        });
        if (!response.ok) {
          const errorText = await response.text();
          throw new Error(
            `Ollama API error: ${response.status} ${errorText}`
          );
        }
        const data = (await response.json()) as { response?: string };
        return {
          content: [
            {
              type: "text",
              text: data.response || "No response from model",
            },
          ],
        };
      } catch (error) {
        // Check whether the Ollama server is reachable
        if (error instanceof TypeError && error.message.includes("fetch")) {
          throw new Error(
            `Failed to connect to Ollama server (${OLLAMA_API_URL}). Make sure Ollama is running: ollama serve`
          );
        }
        throw new Error(
          `Ollama error: ${error instanceof Error ? error.message : String(error)}`
        );
      }
    }
  • Input schema for the ollama_generate tool in the Python MCP server.
    {
        "name": "ollama_generate",
        "description": "Generates response using local Ollama model. Use for tasks requiring AI text processing",
        "inputSchema": {
            "type": "object",
            "properties": {
                "model": {
                    "type": "string",
                    "description": "Ollama model name (e.g., 'llama3.2', 'deepseek-r1:8b'). Default 'llama3.2'",
                    "default": "llama3.2",
                },
                "prompt": {
                    "type": "string",
                    "description": "Prompt for the model",
                },
            },
            "required": ["prompt"],
        },
    },
  • Input schema for the ollama_generate tool in the TypeScript MCP server.
    {
      name: "ollama_generate",
      description:
        "Generates a response using a local Ollama model. Use for tasks requiring AI text processing",
      inputSchema: {
        type: "object",
        properties: {
          model: {
            type: "string",
            description:
              "Ollama model name (e.g., 'llama3.2', 'deepseek-r1:8b'). Defaults to 'llama3.2'",
            default: "llama3.2",
          },
          prompt: {
            type: "string",
            description: "Prompt for the model",
          },
        },
        required: ["prompt"],
      },
    },
  • src/server.py:611-614 (registration)
    Dispatch/registration point in the Python handle_request where ollama_generate is called.
    elif tool_name == "ollama_generate":
        result_text = ollama_generate(
            arguments.get("prompt"), arguments.get("model", "llama3.2")
        )
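
For orientation, here is a minimal standalone sketch of the same flow that can be run outside the MCP server, assuming Ollama is serving locally. The OLLAMA_API_URL value and the helper name generate are assumptions for illustration (http://localhost:11434 is Ollama's default endpoint); the request body and the MCP text-content wrapper mirror the handlers shown above.

    import requests

    # Assumption for illustration: Ollama's default local endpoint.
    OLLAMA_API_URL = "http://localhost:11434"

    def generate(prompt: str, model: str = "llama3.2") -> str:
        """Mirrors the Python handler above: one non-streaming /api/generate call."""
        response = requests.post(
            f"{OLLAMA_API_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("response", "No response from model")

    if __name__ == "__main__":
        text = generate("Explain what an MCP server is in one sentence.")
        # The servers above return tool output to the client as MCP text content in this shape.
        print({"content": [{"type": "text", "text": text}]})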
