# Ollama MCP Server
A Model Context Protocol (MCP) server that provides tools for interacting with Ollama models. This server enables AI assistants to list, chat with, generate responses from, and manage Ollama models through a standardized protocol.
## Features
- **Model Management**: List, pull, and delete Ollama models
- **Chat Interface**: Multi-turn conversations with models
- **Text Generation**: Single-prompt text generation
- **Dual Transport**: Stdio (local) and HTTP (remote) support
- **Railway Ready**: Pre-configured for Railway deployment
- **Type Safe**: Full TypeScript implementation with strict typing
## Prerequisites
- Node.js 18+
- Ollama installed and running locally
- For Railway deployment: Railway CLI
## Installation

### Local Development
1. Clone and install dependencies:

   ```bash
   git clone <repository-url>
   cd ollama-mcp
   npm install
   ```

2. Build the project:

   ```bash
   npm run build
   ```

3. Start the server:

   ```bash
   npm start
   ```
### Using with Cursor

Add this to your Cursor MCP configuration (`~/.cursor/mcp/config.json`):
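A minimal sketch of such an entry (the server name and the built entry-point path `dist/index.js` are assumptions, not taken from this project):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/index.js"]
    }
  }
}
```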
## Architecture

The project is structured for readability and maintainability.

See [ARCHITECTURE.md](ARCHITECTURE.md) for detailed architecture documentation.
## Configuration

### Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| … | Transport type (`stdio` or `http`) | … |
| … | Ollama API base URL | … |
| … | HTTP server host (HTTP mode) | … |
| … | HTTP server port (HTTP mode) | … |
| … | CORS allowed origins (HTTP mode) | None |
### Transport Modes

#### Stdio Transport (Default)

Perfect for local development and direct integration.

#### HTTP Transport

Ideal for remote deployment and web-based clients.
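A sketch of switching between the two modes (the environment variable names `TRANSPORT`, `HOST`, and `PORT` are assumptions; use whichever names the table above defines):

```bash
# Stdio transport (default): no extra configuration needed
npm start

# HTTP transport: variable names and port here are assumptions
TRANSPORT=http HOST=0.0.0.0 PORT=3000 npm start
```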
## Deployment

### Railway Deployment
1. Install the Railway CLI and log in:

   ```bash
   npm install -g @railway/cli
   railway login
   ```

2. Deploy:

   ```bash
   railway up
   ```

3. Add models (optional):

   ```bash
   railway shell
   # Follow instructions in docs/RAILWAY_MODELS_SETUP.md
   ```

The Railway deployment automatically uses HTTP transport and exposes:

- MCP Endpoint: `https://your-app.railway.app/mcp`
- Health Check: `https://your-app.railway.app/healthz`
### Docker Deployment
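A minimal Dockerfile sketch for a TypeScript Node project like this one (the file layout, build output, and port are assumptions):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci

# Copy sources and compile TypeScript
COPY . .
RUN npm run build

# Port is an assumption; match your HTTP-mode configuration
EXPOSE 3000
CMD ["npm", "start"]
```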
## Available Tools

The server provides five MCP tools for Ollama interaction:
- `ollama_list_models` - List available models
- `ollama_chat` - Multi-turn conversations
- `ollama_generate` - Single-prompt generation
- `ollama_pull_model` - Download models
- `ollama_delete_model` - Remove models
See API.md for detailed API documentation.
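As an illustration, an MCP `tools/call` request for `ollama_chat` might look like the following; the envelope is standard JSON-RPC 2.0 as used by MCP, but the exact argument schema (`model`, `messages`) is an assumption here — check API.md for the real one.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ollama_chat",
    "arguments": {
      "model": "llama3.2",
      "messages": [{ "role": "user", "content": "Hello!" }]
    }
  }
}
```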
## Testing

### Local Testing

### Model Testing
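In HTTP mode, a quick smoke test might look like this (the server port is an assumption; `/healthz` matches the health endpoint exposed above, and 11434 is Ollama's default port):

```bash
# Check the MCP server's health endpoint (HTTP mode; port is an assumption)
curl http://localhost:3000/healthz

# List models through Ollama directly to confirm it is serving
curl http://localhost:11434/api/tags
```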
## Documentation

- [Architecture](ARCHITECTURE.md) - Detailed system architecture
- [API Reference](API.md) - Complete API documentation
- [Railway Setup](docs/RAILWAY_MODELS_SETUP.md) - Model deployment guide
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License

MIT License - see [LICENSE](LICENSE) for details.
## Troubleshooting

### Common Issues

**"Cannot find module" errors:**

**Ollama connection issues:**

**Railway deployment issues:**
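A few quick checks that usually resolve the first two issues (assuming a default Ollama install on port 11434):

```bash
# "Cannot find module": make sure dependencies are installed and the project is built
npm install
npm run build

# Ollama connection issues: confirm the daemon is running and reachable
ollama list
curl http://localhost:11434/api/version
```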
### Getting Help

- Check the documentation
- Review the troubleshooting guide
- Open an issue on GitHub
Built with ❤️ for the AI community