Ollama-Omega
Ollama-Omega is a hardened MCP server that bridges the full Ollama ecosystem, letting you interact with local and cloud-hosted AI models from any MCP-compatible IDE through six validated tools:
Check server health (ollama_health): Verify connectivity to the Ollama daemon and see which models are currently loaded in memory.
List available models (ollama_list_models): Retrieve all models with details like size, loaded status, and modification date.
Chat with a model (ollama_chat): Send multi-turn chat completion requests with message history, optional system prompts, and configurable parameters like temperature and max tokens.
Generate text (ollama_generate): Generate a response from a single prompt (no chat history), with optional system prompt and sampling controls.
Inspect a model (ollama_show_model): View detailed information about a specific model, including its license, parameters, and configuration.
Download a model (ollama_pull_model): Pull any model from the Ollama library directly through the MCP interface, with support for large cloud models via extended timeouts.
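For example, a call to ollama_chat carries the model name, the running message history, and optional sampling parameters. The exact argument keys below are illustrative assumptions rather than the server's published schema:

```python
# Illustrative arguments for an ollama_chat tool call.
# Key names ("model", "messages", "system", "temperature", "max_tokens") are
# assumptions for demonstration; check the tool's input schema in your IDE.
chat_arguments = {
    "model": "llama3.2:3b",
    "messages": [
        {"role": "user", "content": "Summarize the MCP stdio transport in one sentence."}
    ],
    "system": "You are a concise technical assistant.",  # optional system prompt
    "temperature": 0.2,                                  # optional sampling control
    "max_tokens": 256,                                   # optional response length cap
}
```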
All operations are secured with SSRF protection, input validation, error sanitization, and structured logging.
OLLAMA-OMEGA
MCP server — Ollama bridge for any IDE. Sovereign compute. No cloud dependency.
Ecosystem Canon
Ollama-Omega is the compute interface layer of the VERITAS & Sovereign Ecosystem (Omega Universe). It surfaces every locally installed Ollama model — and any cloud-hosted model accessible through an Ollama daemon — as a structured MCP tool set inside any MCP-compatible IDE or agent runtime.
Within the Omega Universe, governance flows downward from omega-brain-mcp (the VERITAS gate and approval pipeline) to Ollama-Omega (the inference transport). Ollama-Omega is the final execution node: it issues the prompt, receives model output, and returns a validated, schema-typed response. No inference executes before the upstream gate approves the request.
Ollama-Omega does not perform memory, authentication, persistence, or policy enforcement. Those responsibilities belong to the operators above it in the stack. This node does one thing: connect IDE to Ollama, reliably and without information loss.
Overview
What it is:
A single-file MCP server (ollama_mcp_server.py) that bridges Ollama into any MCP-compatible client
Six validated tools covering health, model listing, chat, generation, model inspection, and model pull
Compatible with Claude Desktop, VS Code + Continue, Cursor, Antigravity IDE, and any other client that speaks MCP over stdio
What it is not:
A full AI platform, memory layer, or policy engine
A replacement for the Ollama daemon — it wraps the daemon's HTTP API over MCP stdio transport
A cloud service — all inference is local or routed through your own Ollama daemon
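Because the server is a thin wrapper, each MCP tool maps naturally onto one endpoint of the standard Ollama HTTP API. The mapping below is an assumption based on Ollama's documented API, not a description of the server's internal routing:

```python
# Assumed tool-to-endpoint mapping, based on the standard Ollama HTTP API.
# ollama_mcp_server.py may route these differently.
TOOL_TO_ENDPOINT = {
    "ollama_health":      ("GET",  "/"),             # daemon liveness ("Ollama is running")
    "ollama_list_models": ("GET",  "/api/tags"),     # installed models and their metadata
    "ollama_chat":        ("POST", "/api/chat"),     # multi-turn chat completion
    "ollama_generate":    ("POST", "/api/generate"), # single-prompt generation
    "ollama_show_model":  ("POST", "/api/show"),     # model license, parameters, config
    "ollama_pull_model":  ("POST", "/api/pull"),     # download a model from the library
}
# Loaded-in-memory models (part of the health report) are listed by GET /api/ps.
```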
Features
| Feature | Detail |
| --- | --- |
| 6 MCP tools | Health check, list models, chat, generate, show model info, pull model |
| Stdio transport | JSON-RPC 2.0 over stdin/stdout — no network ports opened by this server |
| Typed output schemas | Every tool carries a full output schema |
| SSRF mitigation | Requests are sent only to the configured OLLAMA_HOST daemon URL |
| Input validation | Required-argument validation before any outbound HTTP call |
| Safe JSON handling | Defensive parsing of upstream JSON responses |
| Error sanitization | Internal errors are never forwarded to the MCP client |
| Cloud model support | Any model accessible on your Ollama daemon is available — no config change required |
| Docker-ready | Dockerfile included; container process runs as a non-root user |
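To illustrate what a typed output schema buys you, the sketch below shows the shape such a schema could take for ollama_health. The field names are hypothetical; the authoritative schema is the one the server advertises to your MCP client:

```python
# Hypothetical output schema for ollama_health (field names are assumptions).
# An MCP client can validate the tool's structured result against a schema like this.
OLLAMA_HEALTH_OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "reachable": {"type": "boolean", "description": "Daemon answered on OLLAMA_HOST"},
        "host": {"type": "string", "description": "Base URL that was probed"},
        "loaded_models": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Models currently loaded in memory",
        },
    },
    "required": ["reachable", "host"],
}
```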
Architecture
IDE / MCP Client
(Claude Desktop, VS Code + Continue, Cursor, Antigravity, ...)
|
| stdio JSON-RPC 2.0
v
+-----------------------------+
| ollama_mcp_server.py |
| Validator | Dispatch |
| Singleton httpx AsyncClient|
+-----------------------------+
|
| HTTP (default: http://localhost:11434)
v
+-----------------------------+
| Ollama Daemon |
| Local models (GPU / CPU) |
| Cloud proxy models |
+-----------------------------+
|
v
Local model store
(~/.ollama/models)

The server process lives for the lifetime of the IDE session. One httpx AsyncClient handles all upstream Ollama HTTP traffic. The MCP client never communicates with Ollama directly.
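A minimal sketch of that transport pattern, assuming the standard Ollama /api/chat endpoint; this is illustrative, not the actual ollama_mcp_server.py implementation:

```python
import os
import httpx

# One shared AsyncClient for all upstream Ollama traffic (illustrative sketch).
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
TIMEOUT = float(os.environ.get("OLLAMA_TIMEOUT", "300"))

client = httpx.AsyncClient(base_url=OLLAMA_HOST, timeout=TIMEOUT)

async def chat(model: str, messages: list[dict]) -> dict:
    # POST /api/chat is the Ollama daemon's chat completion endpoint.
    resp = await client.post(
        "/api/chat",
        json={"model": model, "messages": messages, "stream": False},
    )
    resp.raise_for_status()
    return resp.json()
```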
Quickstart
Prerequisites
Python 3.11 or later
Ollama daemon installed and running
Install Ollama
| Platform | Method |
| --- | --- |
| Windows | Download the installer from ollama.com/download/windows and run it. Ollama starts automatically as a system tray service. |
| macOS | Download from ollama.com/download/mac, or via Homebrew: brew install ollama |
| Linux | curl -fsSL https://ollama.com/install.sh \| sh |
Verify the daemon is reachable before proceeding:
curl http://localhost:11434
# Expected response: Ollama is running

Install Ollama-Omega
Option A — pip (simplest):
pip install mcp httpx

Then download the server file:
# macOS / Linux
curl -O https://raw.githubusercontent.com/VrtxOmega/Ollama-Omega/master/ollama_mcp_server.py
# Windows (PowerShell)
Invoke-WebRequest -Uri https://raw.githubusercontent.com/VrtxOmega/Ollama-Omega/master/ollama_mcp_server.py -OutFile ollama_mcp_server.py

Option B — clone the repository (recommended for local development):
git clone https://github.com/VrtxOmega/Ollama-Omega.git
cd Ollama-Omega
pip install mcp httpx

Option C — uv (virtual-env isolation, recommended for production):
git clone https://github.com/VrtxOmega/Ollama-Omega.git
cd Ollama-Omega
uv sync

Option D — Docker:
git clone https://github.com/VrtxOmega/Ollama-Omega.git
cd Ollama-Omega
docker build -t ollama-omega .
# Run with stdio transport for IDE integration:
docker run -i --rm -e OLLAMA_HOST=http://host.docker.internal:11434 ollama-omega

Pull a model
ollama pull llama3.2:3b

Configure your MCP client
Edit the configuration file for your IDE and add the ollama server block. Replace /path/to/Ollama-Omega with the actual path to your clone (or the directory containing ollama_mcp_server.py).
Claude Desktop
Config file locations:
| Platform | Path |
| --- | --- |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| macOS / Linux | ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or ~/.config/Claude/claude_desktop_config.json (Linux) |
{
"mcpServers": {
"ollama": {
"command": "python",
"args": ["/path/to/Ollama-Omega/ollama_mcp_server.py"],
"env": {
"PYTHONUTF8": "1",
"OLLAMA_HOST": "http://localhost:11434",
"OLLAMA_TIMEOUT": "300"
}
}
}
}

With uv (virtual-env isolation):
{
"mcpServers": {
"ollama": {
"command": "uv",
"args": [
"--directory",
"/path/to/Ollama-Omega",
"run",
"python",
"ollama_mcp_server.py"
],
"env": {
"PYTHONUTF8": "1",
"OLLAMA_HOST": "http://localhost:11434",
"OLLAMA_TIMEOUT": "300"
}
}
}
}

VS Code + Continue / Cursor
Most MCP-compatible VS Code extensions follow the same JSON structure under their own config key. Substitute the command and args block from the Claude Desktop example above. Consult your extension's documentation for the exact config file path.
Antigravity IDE
Config file: ~/.gemini/antigravity/mcp_config.json
{
"mcpServers": {
"ollama": {
"command": "uv",
"args": [
"--directory",
"/path/to/Ollama-Omega",
"run",
"python",
"ollama_mcp_server.py"
],
"env": {
"PYTHONUTF8": "1",
"OLLAMA_HOST": "http://localhost:11434"
}
}
}
}

Restart your IDE after saving the configuration file. Verify connectivity by calling the ollama_health tool from your IDE.
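To verify connectivity outside the IDE, a short script using the MCP Python SDK can spawn the server over stdio and call ollama_health directly. The path and environment below mirror the config examples above; adjust them for your machine:

```python
import asyncio
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Spawn the server the same way the IDE would (adjust the path for your clone).
params = StdioServerParameters(
    command="python",
    args=["/path/to/Ollama-Omega/ollama_mcp_server.py"],
    env={**os.environ, "PYTHONUTF8": "1", "OLLAMA_HOST": "http://localhost:11434"},
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            health = await session.call_tool("ollama_health", arguments={})
            print("Health:", health.content)

asyncio.run(main())
```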
Configuration
| Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_HOST | http://localhost:11434 | Base URL of the Ollama daemon. Override to point at a remote or containerized daemon. |
| OLLAMA_TIMEOUT | 300 | HTTP request timeout in seconds. Increase for large model pulls or slow cloud inference. |
| PYTHONUTF8 | (unset) | Set to 1 to force UTF-8 stdio and avoid UnicodeEncodeError on Windows. |
Cloud-hosted models exposed by your Ollama daemon (e.g., qwen3.5:397b-cloud via API proxy) are accessible through the same 6 tools with no configuration change. Authenticate first with ollama login.
Troubleshooting
Ollama daemon not running
# Start the daemon
ollama serve
# Verify
curl http://localhost:11434

If OLLAMA_HOST is set to a non-default value, confirm the URL and port match the daemon's bind address.
Port conflict — daemon fails to start
Ollama binds to port 11434 by default. If that port is occupied:
# macOS / Linux — find the occupying process
lsof -i :11434
# Windows (PowerShell)
netstat -ano | findstr :11434

Set OLLAMA_HOST to the alternate port once you have reconfigured the daemon.
Model not found / HTTP 404
The referenced model has not been pulled. Pull it first:
ollama pull <model-name>
# Cloud-hosted models require authentication:
ollama login
ollama pull qwen3.5:397b-cloud

Alternatively, call ollama_pull_model from your IDE once the server is connected.
Tools do not appear in the IDE
Confirm the command path resolves to a working Python 3.11+ interpreter.
Confirm mcp and httpx are installed in that interpreter's environment.
Restart the IDE — MCP servers are discovered at startup, not while running.
Check IDE logs for JSON-RPC handshake errors.
Windows: UnicodeEncodeError or garbled output
Set PYTHONUTF8=1 in the server's env block. This is already shown in the configuration examples above.
Docker: cannot reach localhost:11434
Docker containers run in an isolated network namespace. Replace localhost with host.docker.internal:
docker run -i --rm -e OLLAMA_HOST=http://host.docker.internal:11434 ollama-omega

On Linux hosts, --network=host may be required instead.
Request timed out after 300s
Cold inference on large models (70B+) or cloud-proxied models can exceed the default timeout. Increase it in your MCP client config:
"env": { "OLLAMA_TIMEOUT": "600" }Security and Sovereignty
Ollama-Omega runs exclusively on localhost by default, communicating with the Ollama daemon over the loopback interface. No data leaves the machine unless your Ollama daemon is configured to proxy to a cloud endpoint.
Hardening applied to this server:
| Control | Implementation |
| --- | --- |
| SSRF prevention | Outbound requests go only to the configured OLLAMA_HOST base URL |
| Input sanitization | Required-argument validation before any outbound HTTP call |
| Error sanitization | Internal errors are never forwarded to the MCP client |
| Non-root Docker | Container process runs as a dedicated non-root user |
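The SSRF guard can be pictured as a base-URL check applied before the HTTP client is created. This sketch illustrates the kind of validation involved; it is not the actual code in ollama_mcp_server.py:

```python
from urllib.parse import urlparse

def validate_ollama_host(url: str) -> str:
    # Illustrative SSRF guard: accept only a bare http(s) base URL for the daemon,
    # with no embedded credentials, path, query string, or fragment.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Unsupported scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("OLLAMA_HOST must include a hostname")
    if parsed.username or parsed.password:
        raise ValueError("Credentials are not allowed in OLLAMA_HOST")
    if parsed.path not in ("", "/") or parsed.query or parsed.fragment:
        raise ValueError("OLLAMA_HOST must be a bare base URL")
    return f"{parsed.scheme}://{parsed.netloc}"

# validate_ollama_host("http://localhost:11434")  ->  "http://localhost:11434"
```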
Limitations and out-of-scope items:
Authentication between the MCP client and this server is not implemented. MCP stdio transport is inherently scoped to the local process boundary.
This server does not validate the content of prompts or model outputs. Content policy enforcement is the responsibility of the upstream operator (see omega-brain-mcp).
Network isolation, host-level security, and key management are outside the scope of this component.
Omega Universe
Ollama-Omega is one node in the VERITAS & Sovereign Ecosystem. Cross-references:
| Repository | Role in the stack |
| --- | --- |
| omega-brain-mcp | VERITAS gate + cryptographic audit ledger + Cortex approval pipeline. The governance layer above Ollama-Omega. |
| | Long-term retention substrate. Stores artifacts, attestations, and approved outputs. |
| | Security enforcement layer. Threat surface scanning and sovereign boundary enforcement. |
| | Drift detection and configuration integrity monitoring across Omega operators. |
| | Media processing and content pipeline within the Omega framework. |
| | Operator sandbox and demonstration environment for Omega Universe components. |
🌐 VERITAS Omega Ecosystem
This project is part of the VERITAS Omega Universe — a sovereign AI infrastructure stack.
VERITAS-Omega-CODE — Deterministic verification spec (10-gate pipeline)
omega-brain-mcp — Governance MCP server (Triple-A rated on Glama)
Gravity-Omega — Desktop AI operator platform
Ollama-Omega — Ollama MCP bridge for any IDE
OmegaWallet — Desktop Ethereum wallet (renderer-cannot-sign)
veritas-vault — Local-first AI knowledge engine
sovereign-arcade — 8-game arcade with VERITAS design system
SSWP — Deterministic build attestation protocol
License
MIT — see LICENSE.