MCP Servers for Ollama

Ollama is an open-source project for running large language models (LLMs) locally on your own hardware, letting you use AI capabilities privately without sending data to external services. The MCP servers below all integrate with Ollama in some way, from chatting with local models to generating embeddings for semantic search.
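
Every server in this list ultimately talks to a local Ollama instance, which by default serves a REST API on port 11434. As a hedged sketch of that common foundation (the endpoint path, default port, and response field follow Ollama's API documentation; the helper names and the llama3.2 model are illustrative, and a model must already be pulled for the call to succeed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON reply instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST a prompt to a local Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server running locally, `generate("llama3.2", "Why run models locally?")` returns the model's reply as a string; without one, the request simply fails to connect, which is why most MCP servers below list a running Ollama instance as a prerequisite.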

  • An MCP server that provides LLMs access to other LLMs. Ollama relevance: exposes LLMs hosted through Ollama via the LLM_MODEL_PROVIDER environment variable. (JavaScript; MIT License)
  • Facilitates browser automation with custom capabilities and agent-based interactions through the browser-use library. Ollama relevance: local language-model inference powers the browser automation. (Python; MIT License; macOS)
  • MCP Ollama server integrates Ollama models with MCP clients, letting users list models, get detailed model information, and ask questions of Ollama models through the MCP interface. (Python; MIT License; macOS)
  • A Model Context Protocol server providing read-only access to Bybit's cryptocurrency exchange API, so users can query real-time cryptocurrency data in natural language. Ollama relevance: interactive chat with Ollama models while using the Bybit tools. (TypeScript; MIT License)
  • Enables AI agents to interact with web browsers using natural language, featuring automated browsing, form filling, vision-based element detection, and structured JSON responses for systematic browser control. Ollama relevance: local LLM execution as an alternative to cloud-based AI providers. (Python; macOS, Linux)
  • Provides reasoning content to MCP-enabled AI clients by interfacing with Deepseek's API or a local Ollama server, enabling focused reasoning and thought-process visualization. (JavaScript)
  • Facilitates Ollama queries from Claude with performance tracking, model selection, and query context, and manages a simple note-storage system with tools to add, summarize, and access notes via custom URIs. (Python; macOS)
  • Enables browser automation using Python scripts, offering operations like taking webpage screenshots, retrieving HTML content, and executing JavaScript. Ollama relevance: supported as an LLM provider through API-key integration. (Python; Linux)
  • A bridge that integrates Ollama's local LLM capabilities into MCP-powered applications with full API coverage: pull, push, list, create, copy, and run local models; execute models with customizable prompts; and an OpenAI-compatible chat-completion API. (JavaScript; macOS)
  • A Model Context Protocol server that enables AI agents to query Erick Wendel's talks, blog posts, and videos across different platforms using natural language. Ollama relevance: works with Ollama through MCPHost as a free alternative to Claude. (TypeScript; MIT License)
  • An MCP server that retrieves and processes documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context. Uses Ollama (local embeddings via the nomic-embed-text model) or OpenAI to generate embeddings; Docker files included. (TypeScript; MIT License; macOS, Linux)
  • A Model Context Protocol adaptation of LangChain Ollama Deep Researcher, exposing its deep-research capabilities as MCP tools so AI assistants can perform in-depth research locally with any Ollama-hosted model, such as deepseek-r1 or llama3.2. (Python; MIT License; macOS, Linux)
  • The Comfy MCP Server uses the FastMCP framework to generate images from prompts by interacting with a remote Comfy server, with automated image creation driven by workflow configurations. Ollama relevance: optionally connects to an Ollama server for LLM-based prompt generation. (Python; MIT License)
  • A Model Context Protocol server for semantic search and retrieval of documentation using the Qdrant vector database: add documentation from URLs or local files, then search it with natural-language queries. Uses Ollama's embedding models (particularly nomic-embed-text) to create the vectors. (JavaScript; Apache 2.0; macOS)
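
The embedding-backed servers in this list (Qdrant, LanceDB, and the other RAG entries) share one pattern: ask Ollama for a vector via /api/embeddings, then rank documents by cosine similarity. A hedged sketch of that pattern (the endpoint shape and the nomic-embed-text model name follow Ollama's documentation; the helper names and ranking logic are illustrative, not any particular server's implementation):

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434"


def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Request an embedding vector for `text` from a local Ollama server."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def rank(query_vec: list[float], doc_vecs: dict[str, list[float]]) -> list[str]:
    """Return document ids sorted by similarity to the query, best first."""
    return sorted(
        doc_vecs,
        key=lambda d: cosine_similarity(query_vec, doc_vecs[d]),
        reverse=True,
    )
```

In a real server the document vectors live in Qdrant, LanceDB, or another vector store rather than an in-memory dict, and the store performs the similarity ranking itself; the flow of embed-then-rank is the same.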
  • Enables seamless integration between Ollama's local LLM models and MCP-compatible applications, supporting model management (listing and pulling models) and chat interactions. (TypeScript; MIT License)
  • A Model Context Protocol server that enables LLMs to interact directly with the documents they have on disk through agentic RAG and hybrid search in LanceDB; ask questions about the dataset as a whole or about specific documents. Uses Ollama for the default summarization and embedding models (snowflake-arctic-embed2 and llama3.1:8b). (TypeScript; MIT License; macOS)
  • Provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support. Ollama relevance: free local embeddings for vector representation of documents. (TypeScript; Apache 2.0)
  • Processes emails from Outlook with date filtering, storing them in SQLite databases while generating vector embeddings (with models like nomic-embed-text) for semantic search in MongoDB. (Python; macOS, Linux)
  • Provides an API to query large language models using context from local files, supporting various models and file types for context-aware responses. Ollama relevance: send prompts to Ollama models with context drawn from local files. (TypeScript)
  • Provides a project memory bank and RAG context provider for enhanced code understanding and management through vector embeddings, integrated with RooCode and Cline. Requires Ollama to be installed and running for embedding generation and vector operations. (Python; macOS)
  • Facilitates enhanced interaction with large language models through intelligent context management, tool integration, and multi-provider AI model coordination for efficient AI-driven workflows. Ollama relevance: integrates Ollama for local model usage and processing. (Python)
  • Facilitates executing system commands and retrieving web data via the Brave Search API by interpreting user intents with a large language model. Ollama relevance: used as the LLM provider for intent detection and request routing. (Python)
  • Enables AI assistants to enhance their responses with relevant documentation through semantic vector search, offering tools for managing and processing documentation efficiently. Uses Ollama as the default provider for local embedding generation and vector storage. (TypeScript)
  • Enables seamless AI integration via Ollama's Deepseek model, providing MCP protocol compliance and automatic configuration for clean AI-driven interactions. (Python)
  • An interactive chat interface that combines Ollama's LLM capabilities with PostgreSQL database access through the Model Context Protocol: ask questions about your data in natural language and get AI-powered responses backed by real SQL queries generated by the model. (TypeScript)
  • A TypeScript-based server that provides a memory system for large language models, allowing users to interact with multiple LLM providers (including local Ollama models such as llama2 and codellama) while maintaining persistent conversation history, with tools for managing providers and model configurations. (JavaScript; macOS)