Ollama MCP Server

by NightTrek

🚀 A powerful bridge between Ollama and the Model Context Protocol (MCP), enabling seamless integration of Ollama's local LLM capabilities into your MCP-powered applications.

🌟 Features

Complete Ollama Integration

  • Full API Coverage: Access all essential Ollama functionality through a clean MCP interface
  • OpenAI-Compatible Chat: Drop-in replacement for OpenAI's chat completion API
  • Local LLM Power: Run AI models locally with full control and privacy

Core Capabilities

  • 🔄 Model Management
    • Pull models from registries
    • Push models to registries
    • List available models (see the sketch after this list)
    • Create custom models from Modelfiles
    • Copy and remove models
  • 🤖 Model Execution
    • Run models with customizable prompts
    • Chat completion API with system/user/assistant roles
    • Configurable parameters (temperature, timeout)
    • Raw mode support for direct responses
  • 🛠 Server Control
    • Start and manage Ollama server
    • View detailed model information
    • Error handling and timeout management
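
As a quick illustration of these capabilities, here is a minimal sketch of listing the locally installed models. It reuses the `mcp.use_mcp_tool` call style from the usage examples below; the tool name `list` mirrors the "List available models" capability but is an assumption, so check the tool list the server actually advertises.

```javascript
// Hypothetical sketch: assumes the model-listing capability is exposed
// as a tool named "list" that takes no arguments.
const models = await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "list",
  arguments: {}
});
console.log(models); // expected: names, tags, and sizes of local models
```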

🚀 Getting Started

Prerequisites

  • Ollama installed on your system
  • Node.js and npm/pnpm
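
You can confirm both prerequisites from a terminal before installing:

```
ollama --version
node --version
pnpm --version
```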

Installation

  1. Install dependencies:

     ```
     pnpm install
     ```

  2. Build the server:

     ```
     pnpm run build
     ```

Configuration

Add the server to your MCP configuration:

For Claude Desktop:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json

{ "mcpServers": { "ollama": { "command": "node", "args": ["/path/to/ollama-server/build/index.js"], "env": { "OLLAMA_HOST": "http://127.0.0.1:11434" // Optional: customize Ollama API endpoint } } } }

🛠 Usage Examples

Pull and Run a Model

```javascript
// Pull a model
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "pull",
  arguments: { name: "llama2" }
});

// Run the model
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "run",
  arguments: {
    name: "llama2",
    prompt: "Explain quantum computing in simple terms"
  }
});
```

Chat Completion (OpenAI-compatible)

```javascript
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "chat_completion",
  arguments: {
    model: "llama2",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is the meaning of life?" }
    ],
    temperature: 0.7
  }
});
```

Create Custom Model

```javascript
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "create",
  arguments: {
    name: "custom-model",
    modelfile: "./path/to/Modelfile"
  }
});
```
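
The Modelfile referenced above uses Ollama's standard Modelfile syntax. A minimal example (the base model and settings here are illustrative, not prescribed by this server):

```
FROM llama2
PARAMETER temperature 0.8
SYSTEM "You are a concise assistant that answers in plain language."
```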

🔧 Advanced Configuration

  • OLLAMA_HOST: configure a custom Ollama API endpoint (default: http://127.0.0.1:11434)
  • Timeout for model execution (default: 60 seconds), settable per request as shown in the sketch below
  • Temperature control for response randomness (range 0-2)
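
A hedged sketch of setting these per request: `temperature` appears in the chat completion example above, while the `timeout` argument name and its unit are assumptions to verify against the tool schema the server exposes.

```javascript
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "run",
  arguments: {
    name: "llama2",
    prompt: "Summarize the plot of Hamlet in two sentences.",
    temperature: 0.2, // low randomness for a factual task (0-2 range)
    timeout: 120000   // assumed milliseconds; default execution timeout is 60 seconds
  }
});
```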

🤝 Contributing

Contributions are welcome! Feel free to:

  • Report bugs
  • Suggest new features
  • Submit pull requests

📝 License

MIT License - feel free to use in your own projects!


Built with ❤️ for the MCP ecosystem
