Zabbix MCP Server
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@Zabbix MCP Serverlist all active high-severity problems"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
What is this?
MCP (Model Context Protocol) is an open standard that lets AI assistants (ChatGPT, Claude, VS Code Copilot, JetBrains AI, Codex, and others) use external tools. This server exposes the entire Zabbix API as MCP tools — allowing any compatible AI assistant to query hosts, check problems, manage templates, acknowledge events, and perform any other Zabbix operation.
The server runs as a standalone HTTP service. AI clients connect to it over the network.
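Concretely, an MCP client opens a session by POSTing a JSON-RPC `initialize` message to the server's `/mcp` endpoint. A minimal sketch of that payload (field values such as the protocol version and `clientInfo` are illustrative, following the MCP specification rather than anything specific to this server):

```python
import json

# Sketch of the first JSON-RPC message an MCP client sends over
# Streamable HTTP. The protocolVersion and clientInfo values below
# are illustrative placeholders.
def initialize_request(client_name: str) -> str:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "1.0"},
        },
    }
    return json.dumps(payload)

body = initialize_request("my-editor")
print(body)
```

The server answers with its own capabilities and the list of available tools; everything after that is ordinary JSON-RPC over HTTP.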
Features
- Complete API coverage - All 57 Zabbix API groups (219 tools): hosts, problems, triggers, templates, users, dashboards, and more
- Multi-server support - Connect to multiple Zabbix instances (production, staging, ...) with separate tokens
- Single config file - One TOML file, no scattered environment variables
- Read-only mode - Per-server write protection to prevent accidental changes
- Auto-reconnect - Transparent re-authentication on session expiry
- Production-ready - systemd service, logrotate, security hardening
- Generic fallback - `zabbix_raw_api_call` tool for any API method not explicitly defined
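A fallback call like this ultimately becomes a plain Zabbix JSON-RPC request. A sketch of the request body such a call might produce (the method and params here are just an example; only the tool name `zabbix_raw_api_call` comes from the feature list above):

```python
import json

# Illustrative: what a zabbix_raw_api_call for an arbitrary method
# could translate to on the wire. Zabbix speaks JSON-RPC 2.0; newer
# Zabbix versions accept the API token as an Authorization: Bearer
# header rather than a request field.
def raw_api_request(method: str, params: dict, request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

body = raw_api_request("hostinterface.get", {"output": "extend"})
print(body)
```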
Quick Start
```
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
sudo nano /etc/zabbix-mcp/config.toml   # fill in your Zabbix URL + API token
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
```

Done. The server is running on http://127.0.0.1:8080/mcp.
Installation
Requirements
- Linux server with Python 3.10+
- Network access to your Zabbix server(s)
- Zabbix API token (User settings > API tokens)
Install
```
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
```

The install script will:

- Create a dedicated system user `zabbix-mcp` (no login shell)
- Create a Python virtual environment in `/opt/zabbix-mcp/venv`
- Install the server and all dependencies
- Copy the example config to `/etc/zabbix-mcp/config.toml`
- Install a systemd service unit (`zabbix-mcp-server`)
- Set up logrotate for `/var/log/zabbix-mcp/*.log` (daily, 30-day retention)
Upgrade
```
cd zabbix-mcp-server
git pull
sudo ./deploy/install.sh update
```

The `update` command upgrades the package to the latest version, refreshes the systemd unit and logrotate config, and restarts the service if it is running.
Configure
Edit the config file with your Zabbix server details:
```
sudo nano /etc/zabbix-mcp/config.toml
```

Minimal configuration - just fill in your Zabbix URL and API token:

```toml
[server]
transport = "http"
host = "127.0.0.1"
port = 8080

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "your-api-token"
read_only = true
verify_ssl = true
```

All available options with detailed descriptions are documented in config.example.toml.
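If you want to sanity-check a config file before starting the service, the TOML parses with the standard library. A standalone sketch (not part of the server; `tomllib` requires Python 3.11+, with the third-party `tomli` package as a drop-in for 3.10):

```python
try:
    import tomllib  # Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib  # third-party fallback for Python 3.10

CONFIG = """
[server]
transport = "http"
host = "127.0.0.1"
port = 8080

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "your-api-token"
read_only = true
verify_ssl = true
"""

cfg = tomllib.loads(CONFIG)

# Basic sanity checks mirroring the minimal config above.
assert cfg["server"]["port"] == 8080
servers = cfg.get("zabbix", {})
assert servers, "at least one [zabbix.<name>] section is required"
for name, s in servers.items():
    print(name, s["url"], "read-only" if s.get("read_only") else "read-write")
```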
Authentication
The HTTP endpoint can be protected with a bearer token. There are two ways to configure it:
Option 1 - token directly in config:

```toml
[server]
auth_token = "your-secret-token-here"
```

Option 2 - token from environment variable (recommended for production):

```toml
[server]
auth_token = "${MCP_AUTH_TOKEN}"
```

When auth_token is set, all clients must include it in the Authorization header:

```
Authorization: Bearer your-secret-token-here
```

When auth_token is not set, the server accepts unauthenticated connections. This is only safe when the server is bound to 127.0.0.1 (default).
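The `${MCP_AUTH_TOKEN}` form is a simple environment-variable substitution. A sketch of how such expansion can work (a hypothetical helper for illustration, not the server's actual code):

```python
import os
import re

# Hypothetical helper: replace ${VAR} references in a config value
# with the corresponding environment variable, failing loudly if the
# variable is unset so the server never starts with an empty token.
def expand_env(value: str) -> str:
    def repl(match: re.Match) -> str:
        var = match.group(1)
        if var not in os.environ:
            raise KeyError(f"environment variable {var} is not set")
        return os.environ[var]
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, value)

os.environ["MCP_AUTH_TOKEN"] = "s3cret"   # demo only
print(expand_env("${MCP_AUTH_TOKEN}"))    # -> s3cret
```

Failing on an unset variable (rather than substituting an empty string) is the safer default here, since an empty `auth_token` would silently disable authentication.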
Multiple Zabbix servers
You can connect to multiple Zabbix instances. Each tool has a server parameter to select which one to use (defaults to the first defined):
```toml
[zabbix.production]
url = "https://zabbix.example.com"
api_token = "prod-token"
read_only = true

[zabbix.staging]
url = "https://zabbix-staging.example.com"
api_token = "staging-token"
read_only = false
```

Start
```
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
```

Verify the server is running:

```
sudo systemctl status zabbix-mcp-server
```

Logs
```
# Live log stream
tail -f /var/log/zabbix-mcp/server.log

# Via journalctl
sudo journalctl -u zabbix-mcp-server -f
```

Docker
```
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
cp config.example.toml config.toml
nano config.toml   # fill in your Zabbix details
docker compose up -d
```

The config file is mounted read-only into the container. Logs are stored in a Docker volume.

Upgrade:

```
git pull
docker compose up -d --build
```

Logs:

```
docker compose logs -f
```

Manual Installation (pip)
If you prefer to install manually without the deploy script:
```
python3 -m venv /opt/zabbix-mcp/venv
/opt/zabbix-mcp/venv/bin/pip install /path/to/zabbix-mcp-server
/opt/zabbix-mcp/venv/bin/zabbix-mcp-server --config /path/to/config.toml
```

Connecting AI Clients
The server uses the Streamable HTTP transport and listens on http://127.0.0.1:8080/mcp by default.
MCP (Model Context Protocol) is an open standard that lets AI assistants use external tools. Any MCP-compatible client can connect to this server - ChatGPT, VS Code, Claude, Codex, JetBrains, and others.
The MCP client configuration is the same for all clients:
```json
{
  "mcpServers": {
    "zabbix": {
      "url": "http://your-server:8080/mcp"
    }
  }
}
```

Where to put this config depends on the client:
| Client | Config location |
| --- | --- |
| ChatGPT (initMAX widget) | MCP server settings in the widget configuration |
| VS Code (Copilot / Continue / Cline) | |
| Claude Desktop | |
| Claude Code | |
| OpenAI Codex | MCP server settings in the Codex configuration |
| JetBrains IDEs | MCP server settings in the IDE |
When auth_token is configured on the server, clients must include the bearer token in requests:

```
Authorization: Bearer your-secret-token-here
```

Example Prompts
Once connected, you can ask your AI assistant things like:
| Prompt | What it does |
| --- | --- |
| "Show me all current problems" | Calls |
| "Which hosts are down?" | Calls |
| "Acknowledge event 12345 with message 'investigating'" | Calls |
| "What triggers fired in the last hour?" | Calls |
| "List all hosts in group 'Linux servers'" | Calls |
| "Show me CPU usage history for host 'web-01'" | Calls |
| "Put host 'db-01' into maintenance for 2 hours" | Calls |
| "Export the template 'Template OS Linux'" | Calls |
| "How many items does host 'app-01' have?" | Calls |
| "Check the health of the MCP server" | Calls |
The AI chains multiple tools automatically when needed.
Available Tools
All tools accept an optional server parameter to target a specific Zabbix instance (defaults to the first configured server).
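At the protocol level, selecting an instance is just an argument on the MCP `tools/call` request. A sketch of such a payload (the tool name `host_get` and the `filter` argument are hypothetical; only the `server` parameter is described by this document):

```python
import json

# Illustrative tools/call payload targeting the "staging" instance
# from the multi-server example. "host_get" is a hypothetical tool
# name; "server" is the per-tool instance selector described above.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "host_get",
        "arguments": {"server": "staging", "filter": {"status": 0}},
    },
}
print(json.dumps(call))
```

Omitting `"server"` would fall back to the first configured instance, per the default described above.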
Common Parameters (get methods)
Configuration Reference
All available options with detailed descriptions are in config.example.toml. Quick overview:
Zabbix Compatibility
The server uses the standard Zabbix JSON-RPC API. Methods not available in your Zabbix version will return an error from the Zabbix server — the MCP server itself does not enforce version checks.
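In practice that means a call to a method your Zabbix version lacks comes back as a normal JSON-RPC error object, which callers should check for. A sketch of that handling (the payload follows the JSON-RPC 2.0 error shape Zabbix uses; the exact message and data text vary by version):

```python
# Zabbix returns errors as a JSON-RPC "error" object instead of a
# "result". A simulated response for an unsupported method:
response = {
    "jsonrpc": "2.0",
    "error": {
        "code": -32601,
        "message": "Method not found.",
        "data": "Incorrect API method.",
    },
    "id": 1,
}

def unwrap(resp: dict):
    """Return resp["result"], or raise with the Zabbix error details."""
    if "error" in resp:
        err = resp["error"]
        raise RuntimeError(f'{err["message"]} ({err.get("data", "")})')
    return resp["result"]

try:
    unwrap(response)
except RuntimeError as exc:
    print("Zabbix API error:", exc)
```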
Development
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
python3 -m venv .venv
source .venv/bin/activate
pip install -e .Test with MCP Inspector:
npx @modelcontextprotocol/inspector zabbix-mcp-server --config config.tomlLicense
AGPL-3.0 - see LICENSE.