MCP Servers for Ollama
Ollama is an open-source project that allows you to run large language models (LLMs) locally on your own hardware, providing a way to use AI capabilities privately without sending data to external services.
Why this server?
Allows access to LLMs hosted through Ollama via the LLM_MODEL_PROVIDER environment variable
JavaScript · MIT License

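Provider selection in a server like this is driven entirely by environment variables. A minimal configuration sketch, assuming the server accepts an `ollama` value for `LLM_MODEL_PROVIDER` (the accepted values are defined by that server's own docs, not by Ollama; `OLLAMA_HOST` is Ollama's standard address variable):

```shell
# Hypothetical values — check the server's own documentation for the accepted names.
export LLM_MODEL_PROVIDER=ollama            # route requests to a local Ollama instance
export OLLAMA_HOST=http://localhost:11434   # Ollama's default listen address
```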
Integration with Ollama for local language model inference to power browser automation
Saik0s (verified): Facilitates browser automation with custom capabilities and agent-based interactions, integrated through the browser-use library. [security A · license A · quality A] Python · MIT License

Allows integration with Ollama, enabling use of Ollama models through the MCP interface. Provides capabilities to list models, get model details, and ask questions to Ollama models.
emgeee (verified): An MCP Ollama server that integrates Ollama models with MCP clients, allowing users to list models, get detailed information, and interact with them through questions. [security A · license A · quality A] Python · MIT License

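Under the hood, servers like this talk to Ollama's local HTTP API. A minimal sketch of the two operations the entry above describes, listing models and asking a question, using only the standard library (the paths `/api/tags` and `/api/generate` are Ollama's documented API; the helper names and structure here are illustrative):

```python
import json
from urllib import request

OLLAMA = "http://localhost:11434"  # Ollama's default address

def build_generate_payload(model: str, prompt: str) -> dict:
    # stream=False makes Ollama return a single JSON object
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def parse_generate_response(body: str) -> str:
    # The non-streaming /api/generate reply carries the text
    # in its "response" field.
    return json.loads(body)["response"]

def list_models(base: str = OLLAMA) -> list[str]:
    # GET /api/tags enumerates the locally installed models.
    with request.urlopen(f"{base}/api/tags") as r:
        return [m["name"] for m in json.load(r)["models"]]

def ask(model: str, prompt: str, base: str = OLLAMA) -> str:
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(f"{base}/api/generate", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return parse_generate_response(r.read().decode())
```

With Ollama running locally, `ask("llama3.2", "Why is the sky blue?")` returns the model's answer as plain text.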
Provides integration with Ollama's LLM server, allowing interactive chat with Ollama models while using the Bybit tools to access cryptocurrency data.
A Model Context Protocol server that provides read-only access to Bybit's cryptocurrency exchange API, allowing users to query real-time cryptocurrency data using natural language. [security A · license A · quality A] TypeScript · MIT License

Supports integration with Ollama for local execution of Large Language Models, providing an alternative to cloud-based AI providers.
Enables AI agents to interact with web browsers using natural language, featuring automated browsing, form filling, vision-based element detection, and structured JSON responses for systematic browser control. [security A · license F · quality A] Python

Provides access to Deepseek reasoning content through a local Ollama server
Provides reasoning content to MCP-enabled AI clients by interfacing with Deepseek's API or a local Ollama server, enabling focused reasoning and thought-process visualization. [security A · license F · quality A] JavaScript

Allows querying Ollama models directly from Claude with performance tracking, supporting selection of different models and providing context for queries.
Facilitates initiating Ollama queries via Claude and manages a simple note storage system with capabilities to add, summarize, and access notes using custom URIs. [security A · license F · quality A] Python

Supports Ollama as an LLM provider through API key integration
Enables browser automation using Python scripts, offering operations like taking webpage screenshots, retrieving HTML content, and executing JavaScript. [security A · license F · quality A] Python

Provides complete integration with Ollama, allowing users to pull, push, list, create, copy, and run local LLM models. Includes model management, execution of models with customizable prompts, and an OpenAI-compatible chat completion API.
A bridge that enables seamless integration of Ollama's local LLM capabilities into MCP-powered applications, allowing users to manage and run AI models locally with full API coverage. [security A · license F · quality A] JavaScript

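The "OpenAI-compatible chat completion API" mentioned above refers to Ollama's `/v1/chat/completions` endpoint, which mirrors OpenAI's request and response shape and is what makes bridges like this possible. A hedged sketch of that shape (the model name would be whatever is installed locally; the helper functions are illustrative):

```python
import json
from urllib import request

def build_chat_request(model: str, messages: list[dict]) -> dict:
    # Same body shape as OpenAI's chat completion request.
    return {"model": model, "messages": messages}

def extract_reply(body: str) -> str:
    # OpenAI-compatible responses put the assistant text at
    # choices[0].message.content.
    return json.loads(body)["choices"][0]["message"]["content"]

def chat(model: str, messages: list[dict],
         base: str = "http://localhost:11434") -> str:
    data = json.dumps(build_chat_request(model, messages)).encode()
    req = request.Request(f"{base}/v1/chat/completions", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return extract_reply(r.read().decode())
```

Because the shape matches OpenAI's, existing OpenAI client libraries can usually be pointed at a local Ollama instance just by changing the base URL.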
Supports integration with Ollama through MCPHost as a free alternative to Claude, enabling LLMs to interact with the MCP server
A Model Context Protocol server that enables AI agents to query Erick Wendel's talks, blog posts, and videos across different platforms using natural language. [license A] TypeScript · MIT License

Provides local embeddings generation using Ollama's nomic-embed-text model as an alternative to cloud-based embedding services.
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context. Uses Ollama or OpenAI to generate embeddings; Docker files included. [license A] TypeScript · MIT License

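Local embedding generation like this usually reduces to one API call per text plus a similarity measure over the resulting vectors. A sketch assuming Ollama's `/api/embeddings` endpoint and the `nomic-embed-text` model named above; the ranking logic is illustrative, not this server's actual code:

```python
import json
import math
from urllib import request

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def embed(text: str, model: str = "nomic-embed-text",
          base: str = "http://localhost:11434") -> list[float]:
    # POST /api/embeddings returns {"embedding": [...]}.
    data = json.dumps({"model": model, "prompt": text}).encode()
    req = request.Request(f"{base}/api/embeddings", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return json.load(r)["embedding"]

def rank(query_vec: list[float], docs: dict[str, list[float]]) -> list[str]:
    # Return document ids ordered by similarity to the query.
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
```

In a real server the document vectors would live in a vector database rather than an in-memory dict, but the retrieval step is the same ranking by similarity.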
Enables research capabilities using any local LLM hosted by Ollama, supporting models like deepseek-r1 and llama3.2
A Model Context Protocol (MCP) server adaptation of LangChain Ollama Deep Researcher. It exposes the deep-research capabilities as MCP tools, allowing AI assistants to perform in-depth research on topics locally via Ollama. [license A] Python · MIT License

Optionally connects to an Ollama server so that prompt generation can use LLMs hosted on Ollama
The Comfy MCP Server uses the FastMCP framework to generate images from prompts by interacting with a remote Comfy server, allowing automated image creation based on workflow configurations. [license A] Python · MIT License

Uses Ollama's embedding models (particularly nomic-embed-text) for creating vector embeddings for documentation search
A Model Context Protocol (MCP) server that enables semantic search and retrieval of documentation using a vector database (Qdrant). It lets you add documentation from URLs or local files and then search it using natural-language queries. [license A] JavaScript · Apache 2.0 License

Provides access to Ollama's local LLM models through a Model Context Protocol server, allowing listing, pulling, and chatting with Ollama models
Enables seamless integration between Ollama's local LLM models and MCP-compatible applications, supporting model management and chat interactions. [license A] TypeScript · MIT License

Used for the default summarization and embedding models required by the server, specifically the snowflake-arctic-embed2 and llama3.1:8b models.
A Model Context Protocol (MCP) server that enables LLMs to interact directly with on-disk documents through agentic RAG and hybrid search in LanceDB. Ask LLMs questions about the dataset as a whole or about specific documents. [license A] TypeScript · MIT License

Provides free embeddings for vector representation of documents
Provides RAG capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documentation with metadata support. [license A] TypeScript · Apache 2.0 License

Generates vector embeddings for emails using models like nomic-embed-text for enhanced semantic search capabilities
Processes emails from Outlook with date filtering, storing them in SQLite databases while generating vector embeddings for semantic search capabilities in MongoDB. [license F] Python

Integrates with Ollama as a local LLM provider for context-aware querying. Allows users to send prompts to Ollama models with context from local files.
Provides an API to query large language models using context from local files, supporting various models and file types for context-aware responses. [license F] TypeScript

Uses Ollama for efficient embedding generation, requiring it to be installed and running for vector operations
Provides a project memory bank and RAG context provider for enhanced code understanding and management through vector embeddings, integrated with RooCode and Cline. [license F] Python

Provides integration with Ollama for local AI model usage and processing
Facilitates enhanced interaction with large language models (LLMs) by providing intelligent context management, tool integration, and multi-provider AI model coordination for efficient AI-driven workflows. [license F] Python

Uses Ollama as a Large Language Model provider to determine user intent and route requests
Facilitates executing system commands and retrieving web data via the Brave Search API by interpreting user intents with a large language model (LLM). [license F] Python

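Intent routing of this kind typically means asking the model for a single label and then dispatching on it. A minimal sketch; the label set, prompt wording, and handler names are invented for illustration, and the real server's intents will differ:

```python
from typing import Callable

INTENTS = ("run_command", "web_search")  # hypothetical label set

def build_intent_prompt(user_input: str) -> str:
    # Constrain the model to answer with exactly one known label.
    return (f"Classify the user request as one of: {', '.join(INTENTS)}. "
            f"Reply with the label only.\n\nRequest: {user_input}")

def route(label: str, handlers: dict[str, Callable[[], str]]) -> str:
    # Normalise the model's reply and fall back on unknown labels.
    label = label.strip().lower()
    handler = handlers.get(label)
    return handler() if handler else "unrecognised intent"
```

In the real flow, the text from `build_intent_prompt` is sent to the Ollama model and the model's one-word reply is passed to `route`, which keeps the LLM out of the actual command execution path.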
Uses Ollama as the default embedding provider for local embeddings generation, supporting semantic documentation search and vector storage.
Enables AI assistants to enhance their responses with relevant documentation through semantic vector search, offering tools for managing and processing documentation efficiently. [license F] TypeScript

Integrates with Ollama to use the Deepseek model for AI capabilities through the MCP protocol
Enables seamless AI integration via Ollama's Deepseek model, providing protocol compliance and automatic configuration for clean AI-driven interactions. [license F] Python

Leverages Ollama's LLM capabilities to interpret natural language questions, generate SQL queries, and provide AI-powered responses based on database results.
An interactive chat interface that combines Ollama's LLM capabilities with PostgreSQL database access through the Model Context Protocol (MCP). Ask questions about your data in natural language and get AI-powered responses backed by real SQL queries. [license F] TypeScript

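A natural-language-to-SQL loop like the one above hinges on reliably pulling a SQL statement out of the model's reply, which models often wrap in a markdown fence. A small sketch of that extraction step (illustrative, not this server's code):

```python
import re

def extract_sql(reply: str) -> str:
    # Prefer a fenced ```sql block if the model produced one;
    # otherwise treat the whole reply as the statement.
    m = re.search(r"```(?:sql)?\s*(.*?)```", reply, re.DOTALL | re.IGNORECASE)
    sql = m.group(1) if m else reply
    # Normalise to a single trailing semicolon.
    return sql.strip().rstrip(";") + ";"
```

The cleaned statement can then be run against the database and the rows fed back to the model for a natural-language summary.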
Allows communication with locally available Ollama models (like llama2, codellama) while maintaining persistent conversation history.
A TypeScript-based server that provides a memory system for large language models (LLMs), allowing users to interact with multiple LLM providers while maintaining conversation history and offering tools for managing providers and model configurations. [license F] JavaScript