Provides tools for interacting with locally running Ollama models, including listing available models, chatting with conversation history, generating responses from prompts, pulling new models from the registry, and deleting models from the local installation.

1. Click "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@Ollama MCP Server chat with llama2 about quantum computing`.

That's it! The server will respond to your query, and you can continue using it as needed.
# Ollama MCP Server
A Model Context Protocol (MCP) server that provides tools for interacting with Ollama models. This server enables AI assistants to list, chat with, generate responses from, and manage Ollama models through a standardized protocol.
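Concretely, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests. The sketch below is an illustration of that envelope, not this server's actual client code; the `makeToolCall` helper and the example arguments are hypothetical.

```typescript
// Hypothetical sketch of the JSON-RPC 2.0 envelope an MCP client sends to
// invoke one of this server's tools. The tool name and arguments are
// examples; the envelope shape follows the MCP specification.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = makeToolCall(1, "ollama_generate", { model: "llama2", prompt: "Hello!" });
console.log(JSON.stringify(req));
```

The server answers with a matching JSON-RPC response whose `result` carries the tool output.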
## Features

- **Model Management**: List, pull, and delete Ollama models
- **Chat Interface**: Multi-turn conversations with models
- **Text Generation**: Single-prompt text generation
- **Dual Transport**: Stdio (local) and HTTP (remote) support
- **Railway Ready**: Pre-configured for Railway deployment
- **Type Safe**: Full TypeScript implementation with strict typing
## Prerequisites

- Node.js 18+
- Ollama installed and running locally
- For Railway deployment: Railway CLI
## Installation

### Local Development

1. Clone and install dependencies:

   ```bash
   git clone <repository-url>
   cd ollama-mcp
   npm install
   ```

2. Build the project:

   ```bash
   npm run build
   ```

3. Start the server:

   ```bash
   npm start
   ```
### Using with Cursor

Add this to your Cursor MCP configuration (`~/.cursor/mcp/config.json`):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/main.js"]
    }
  }
}
```

Quick setup:

```bash
curl -sSL https://raw.githubusercontent.com/your-repo/ollama-mcp/main/config/mcp.config.json -o ~/.cursor/mcp/config.json
```

## Architecture
The project is structured for maximum readability and maintainability:
```
src/
├── main.ts            # Main entry point
├── config/            # Configuration management
├── server/            # Core MCP server
├── tools/             # MCP tool implementations
├── transports/        # Communication transports
└── ollama-client.ts   # Ollama API client
docs/                  # Comprehensive documentation
config/                # Configuration files
scripts/               # Deployment scripts
```

See ARCHITECTURE.md for detailed architecture documentation.
## Configuration

### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `MCP_TRANSPORT` | Transport type (`stdio` or `http`) | `stdio` |
| | Ollama API base URL | |
| | HTTP server host (HTTP mode) | |
| | HTTP server port (HTTP mode) | `8080` |
| | CORS allowed origins (HTTP mode) | None |
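As a sketch of how these variables might be consumed at startup: the snippet below is illustrative only. `MCP_TRANSPORT` is documented in this README; the `PORT` name and the `8080` default are assumptions, not necessarily the project's actual identifiers.

```typescript
type Transport = "stdio" | "http";

interface ServerConfig {
  transport: Transport;
  port: number;
}

// MCP_TRANSPORT is documented above; the PORT variable name and its 8080
// default are assumptions made for this sketch.
function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  const transport: Transport = env.MCP_TRANSPORT === "http" ? "http" : "stdio";
  const port = Number.parseInt(env.PORT ?? "8080", 10);
  return { transport, port };
}

console.log(loadConfig({ MCP_TRANSPORT: "http" }));
```

Unset variables fall back to defaults, so a bare `npm start` runs in stdio mode.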
### Transport Modes

#### Stdio Transport (Default)

Perfect for local development and direct integration:

```bash
npm start
```

#### HTTP Transport

Ideal for remote deployment and web-based clients:

```bash
MCP_TRANSPORT=http npm start
```

## Deployment
### Railway Deployment

1. Install the Railway CLI and log in:

   ```bash
   npm install -g @railway/cli
   railway login
   ```

2. Deploy:

   ```bash
   railway up
   ```

3. Add models (optional):

   ```bash
   railway shell
   # Follow instructions in docs/RAILWAY_MODELS_SETUP.md
   ```
The Railway deployment automatically uses HTTP transport and exposes:

- MCP Endpoint: `https://your-app.railway.app/mcp`
- Health Check: `https://your-app.railway.app/healthz`
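For a quick liveness probe from a script, a sketch like the following can build the health-check URL (the `healthUrl` helper is ours, and the base URL is the placeholder from above; substitute your actual Railway app URL):

```typescript
// Build the health-check URL for a deployed instance. The helper name is
// illustrative; replace the base URL with your real Railway app URL.
function healthUrl(base: string): string {
  return new URL("/healthz", base).toString();
}

console.log(healthUrl("https://your-app.railway.app"));
// => https://your-app.railway.app/healthz
```

Fetching that URL (e.g. with `curl` or `fetch`) should return a success status once the deployment is up.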
### Docker Deployment

```bash
# Build the image
npm run docker:build

# Run locally
npm run docker:run

# Deploy to Railway
railway up
```

## Available Tools
The server provides 5 MCP tools for Ollama interaction:
- `ollama_list_models` - List available models
- `ollama_chat` - Multi-turn conversations
- `ollama_generate` - Single-prompt generation
- `ollama_pull_model` - Download models
- `ollama_delete_model` - Remove models
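As an illustration, an `ollama_chat` call would plausibly take arguments shaped like Ollama's own `/api/chat` payload. This shape is an assumption for the example; consult API.md for the authoritative schema.

```typescript
// Assumed argument shape for the ollama_chat tool, mirroring Ollama's
// /api/chat message format; check API.md for the real schema.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const chatArgs: { model: string; messages: ChatMessage[] } = {
  model: "llama2",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing briefly." },
  ],
};

console.log(JSON.stringify(chatArgs, null, 2));
```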
See API.md for detailed API documentation.
## Testing

### Local Testing
```bash
# Test stdio transport
npm start

# Test HTTP transport
MCP_TRANSPORT=http npm start

# Test health check (HTTP mode)
curl http://localhost:8080/healthz
```

### Model Testing
```bash
# List available models
ollama list

# Test a model
ollama run llama2 "Hello, how are you?"
```

## Documentation
- [Architecture](ARCHITECTURE.md) - Detailed system architecture
- [API Reference](API.md) - Complete API documentation
- [Railway Setup](docs/RAILWAY_MODELS_SETUP.md) - Model deployment guide
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License

MIT License - see LICENSE for details.
## Troubleshooting

### Common Issues

**"Cannot find module" errors:**

```bash
npm install
npm run build
```

**Ollama connection issues:**
```bash
# Check if Ollama is running
ollama list

# Check Ollama service
ollama serve
```

**Railway deployment issues:**
```bash
# Check Railway logs
railway logs

# Verify environment variables
railway variables
```

### Getting Help
- Check the documentation
- Review the troubleshooting guide
- Open an issue on GitHub
Built with ❤️ for the AI community