MCP Ollama server integrates Ollama models with MCP clients, allowing users to list models, retrieve detailed model information, and ask the models questions directly.
A server that enables seamless integration between local Ollama LLM instances and MCP-compatible applications, providing advanced task decomposition, evaluation, and workflow management capabilities.
Enables interaction with locally running Ollama models through chat, generation, and model management operations. Supports listing, downloading, and deleting models while maintaining conversation history for interactive sessions.
Enables complete local Ollama management, including listing models, chatting with local LLMs, starting and stopping the server, and getting intelligent model recommendations for specific tasks through natural language commands.
Exposes local Ollama instances as tools for Claude Code, allowing users to offload code generation, text drafting, and embedding tasks to local GPUs. It supports multi-turn conversations and model management through the Model Context Protocol.
A bridge that enables Claude Code to interact with local Ollama instances for text generation, multi-turn chat, and vision-based analysis. It supports model management tasks such as listing, pulling, and showing details, alongside generating text embeddings.
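All of the servers above ultimately wrap Ollama's local REST API, which by default listens on port 11434 and exposes endpoints such as /api/generate for one-shot text generation. As a rough sketch of the kind of call such a server forwards on behalf of an MCP client, the snippet below builds and sends a non-streaming generation request; the function names and structure here are illustrative and do not come from any of the listed projects.

```python
import json
import urllib.request

# Default base URL of a locally running Ollama instance.
OLLAMA_URL = "http://localhost:11434"


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks Ollama to return a single complete JSON object
    instead of a stream of partial responses, which is simpler to
    relay back to an MCP client as one tool result.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a one-shot generation request to a local Ollama instance
    and return the model's reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full reply in "response".
        return json.loads(resp.read())["response"]
```

With an Ollama instance running and a model pulled, a call like `generate("llama3", "Why is the sky blue?")` returns the model's full reply as a string; the model name is just an example.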