Ollama MCP Server
An MCP server that lets Claude Code use local Ollama models.
Features
ollama_generate: Single-turn text generation (supports vision models with image input)
ollama_chat: Multi-turn chat conversations (supports vision models with image input)
ollama_list: List available models
ollama_show: Show model details
ollama_pull: Download models
ollama_embeddings: Generate text embeddings
Supported Vision Models
- `llava` - General-purpose vision model
- `llama3.2-vision` - Meta's multimodal model
- `deepseek-ocr` - OCR-specialized vision model
Prerequisites
Ollama installed and running
```bash
# Install Ollama (macOS)
brew install ollama

# Start Ollama server
ollama serve
```

At least one model downloaded:

```bash
ollama pull llama3.2
```
Installation
Claude Code Configuration
Method 1: Using CLI (Recommended)
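A typical registration uses `claude mcp add` with the command that starts this server after `--`. The package name below is a placeholder; substitute however this server is actually launched (e.g. `node path/to/build/index.js`):

```bash
# Register the server with Claude Code under the name "ollama"
# (ollama-mcp-server is a placeholder for this project's start command)
claude mcp add ollama -- npx ollama-mcp-server
```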
To add environment variables:
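Environment variables can be passed with the repeatable `-e` flag; again, the start command shown is a placeholder:

```bash
# Point the server at a non-default Ollama host
claude mcp add ollama -e OLLAMA_HOST=http://localhost:11434 -- npx ollama-mcp-server
```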
Method 2: Manual Configuration
Project scope (.mcp.json in project root):
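A minimal sketch of the project-scoped config; the `command`/`args` values are placeholders for however this server is launched:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["ollama-mcp-server"],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
```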
User scope (~/.claude.json):
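The user-scoped file uses the same `mcpServers` shape (start command below is a placeholder):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["ollama-mcp-server"]
    }
  }
}
```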
Verify Installation
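To confirm the server is registered and connected:

```bash
# List configured MCP servers and their status
claude mcp list
```

You can also run `/mcp` inside a Claude Code session to inspect connected servers.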
Auto-approve Tool Calls (Optional)
By default, Claude Code asks for confirmation each time an Ollama tool is called. To skip confirmations, add the following to ~/.claude/settings.json:
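A minimal sketch, assuming the server was registered under the name `ollama` (permission rules take the form `mcp__<server>__<tool>`; a bare `mcp__<server>` entry covers all of that server's tools):

```json
{
  "permissions": {
    "allow": ["mcp__ollama"]
  }
}
```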
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
Usage Examples
From Claude Code:
List Models
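An example prompt (wording is illustrative; Claude will call `ollama_list`):

```
List my available local Ollama models
```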
Text Generation
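An example prompt (the model name is illustrative; Claude will call `ollama_generate`):

```
Use ollama_generate with llama3.2 to write a haiku about the sea
```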
Chat
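An example prompt for a multi-turn conversation via `ollama_chat` (model name illustrative):

```
Start a chat with llama3.2 and ask it to explain MCP in two sentences
```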
Vision / Image Analysis
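An example prompt using a vision model (the model and image path are illustrative):

```
Use ollama_chat with llava to describe the image at ./photo.png
```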
Troubleshooting
Cannot connect to Ollama
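First confirm Ollama is running and listening on the expected port (11434 by default), and that `OLLAMA_HOST` matches if you changed it:

```bash
# Check that the Ollama API responds
curl http://localhost:11434/api/version

# If it doesn't, start the server
ollama serve
```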
No models available
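Check what is installed and pull a model if the list is empty:

```bash
# Show installed models
ollama list

# Download one if needed
ollama pull llama3.2
```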
MCP server not showing up
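Check that the server is actually registered and connecting, then restart Claude Code; if you configured it manually, also verify the JSON config is valid:

```bash
# Shows each configured server and whether it connected
claude mcp list
```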
License
MIT