
Zabbix MCP Server

by initMAX


What is this?

MCP (Model Context Protocol) is an open standard that lets AI assistants (ChatGPT, Claude, VS Code Copilot, JetBrains AI, Codex, and others) use external tools. This server exposes the entire Zabbix API as MCP tools — allowing any compatible AI assistant to query hosts, check problems, manage templates, acknowledge events, and perform any other Zabbix operation.

The server runs as a standalone HTTP service. AI clients connect to it over the network.

Features

  • Complete API coverage - All 57 Zabbix API groups (219 tools): hosts, problems, triggers, templates, users, dashboards, and more

  • Multi-server support - Connect to multiple Zabbix instances (production, staging, ...) with separate tokens

  • Single config file - One TOML file, no scattered environment variables

  • Read-only mode - Per-server write protection to prevent accidental changes

  • Auto-reconnect - Transparent re-authentication on session expiry

  • Production-ready - systemd service, logrotate, security hardening

  • Generic fallback - zabbix_raw_api_call tool for any API method not explicitly defined
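The generic fallback rides on the standard MCP `tools/call` request. As a sketch, the JSON-RPC message a client would send to invoke `zabbix_raw_api_call` might look like this — the argument names (`server`, `method`, `params`) are assumptions for illustration; check the real schema with a `tools/list` request:

```python
import json

# Hypothetical MCP tools/call message invoking the generic fallback tool.
# The "arguments" keys are assumed, not taken from the server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "zabbix_raw_api_call",
        "arguments": {
            "server": "production",        # which configured Zabbix instance
            "method": "apiinfo.version",   # any Zabbix API method name
            "params": {},                  # parameters for that method
        },
    },
}

print(json.dumps(request, indent=2))
```

Because the fallback passes the method name straight through, it covers new API methods added in future Zabbix releases without waiting for a dedicated tool.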

Quick Start

git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
sudo nano /etc/zabbix-mcp/config.toml   # fill in your Zabbix URL + API token
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server

Done. The server is running on http://127.0.0.1:8080/mcp.

Installation

Requirements

Install

git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh

The install script will:

  1. Create a dedicated system user zabbix-mcp (no login shell)

  2. Create a Python virtual environment in /opt/zabbix-mcp/venv

  3. Install the server and all dependencies

  4. Copy the example config to /etc/zabbix-mcp/config.toml

  5. Install a systemd service unit (zabbix-mcp-server)

  6. Set up logrotate for /var/log/zabbix-mcp/*.log (daily, 30 days retention)

Upgrade

cd zabbix-mcp-server
git pull
sudo ./deploy/install.sh update

The update command will upgrade the package to the latest version, refresh the systemd unit and logrotate config, and restart the service if it is running.

Configure

Edit the config file with your Zabbix server details:

sudo nano /etc/zabbix-mcp/config.toml

Minimal configuration - just fill in your Zabbix URL and API token:

[server]
transport = "http"
host = "127.0.0.1"
port = 8080

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "your-api-token"
read_only = true
verify_ssl = true

All available options with detailed descriptions are documented in config.example.toml.

Authentication

The HTTP endpoint can be protected with a bearer token. There are two ways to configure it:

Option 1 - token directly in config:

[server]
auth_token = "your-secret-token-here"

Option 2 - token from environment variable (recommended for production):

[server]
auth_token = "${MCP_AUTH_TOKEN}"
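One way to supply the environment variable to the service — a sketch assuming the `zabbix-mcp-server` systemd unit installed above — is a systemd drop-in (an `EnvironmentFile` kept at mode 0600 works equally well):

```ini
# /etc/systemd/system/zabbix-mcp-server.service.d/auth.conf
[Service]
Environment=MCP_AUTH_TOKEN=your-secret-token-here
```

Run `sudo systemctl daemon-reload` and restart the service after adding the drop-in.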

When auth_token is set, all clients must include it in the Authorization header:

Authorization: Bearer your-secret-token-here

When auth_token is not set, the server accepts unauthenticated connections. This is only safe when the server is bound to 127.0.0.1 (default).

Multiple Zabbix servers

You can connect to multiple Zabbix instances. Each tool has a server parameter to select which one to use (defaults to the first defined):

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "prod-token"
read_only = true

[zabbix.staging]
url = "https://zabbix-staging.example.com"
api_token = "staging-token"
read_only = false
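On the wire, the `server` parameter is just another tool argument. A hypothetical `tools/call` targeting the staging instance (the tool name `host_get` is taken from the examples below; the exact argument schema is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "host_get",
    "arguments": {
      "server": "staging"
    }
  }
}
```

Omitting `"server"` falls back to the first configured instance — here, `production`.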

Start

sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server

Verify the server is running:

sudo systemctl status zabbix-mcp-server

Logs

# Live log stream
tail -f /var/log/zabbix-mcp/server.log

# Via journalctl
sudo journalctl -u zabbix-mcp-server -f

Docker

git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
cp config.example.toml config.toml
nano config.toml                        # fill in your Zabbix details
docker compose up -d

The config file is mounted read-only into the container. Logs are stored in a Docker volume.

Upgrade:

git pull
docker compose up -d --build

Logs:

docker compose logs -f

Manual Installation (pip)

If you prefer to install manually without the deploy script:

python3 -m venv /opt/zabbix-mcp/venv
/opt/zabbix-mcp/venv/bin/pip install /path/to/zabbix-mcp-server
/opt/zabbix-mcp/venv/bin/zabbix-mcp-server --config /path/to/config.toml

Connecting AI Clients

The server uses the Streamable HTTP transport and listens on http://127.0.0.1:8080/mcp by default.

Any MCP-compatible client can connect to this server — ChatGPT, VS Code, Claude, Codex, JetBrains, and others.

The MCP client configuration is the same for all clients:

{
  "mcpServers": {
    "zabbix": {
      "url": "http://your-server:8080/mcp"
    }
  }
}

Where to put this config depends on the client:

| Client | Config location |
| --- | --- |
| ChatGPT (initMAX widget) | MCP server settings in the widget configuration |
| VS Code (Copilot / Continue / Cline) | .vscode/mcp.json or extension settings |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows) |
| Claude Code | .mcp.json in project root or ~/.claude/settings.json for global |
| OpenAI Codex | MCP server settings in the Codex configuration |
| JetBrains IDEs | MCP server settings in the IDE |
When auth_token is configured on the server, clients must include the bearer token in requests:

Authorization: Bearer your-secret-token-here
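Some clients can attach this header for you in the MCP config itself via a `headers` field next to `url` — support and exact key names vary by client, so treat this as a sketch and check your client's documentation:

```json
{
  "mcpServers": {
    "zabbix": {
      "url": "http://your-server:8080/mcp",
      "headers": {
        "Authorization": "Bearer your-secret-token-here"
      }
    }
  }
}
```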

Example Prompts

Once connected, you can ask your AI assistant things like:

| Prompt | What it does |
| --- | --- |
| "Show me all current problems" | Calls problem_get to list active alerts |
| "Which hosts are down?" | Calls host_get with status filter |
| "Acknowledge event 12345 with message 'investigating'" | Calls event_acknowledge |
| "What triggers fired in the last hour?" | Calls trigger_get with time filter and only_true |
| "List all hosts in group 'Linux servers'" | Calls hostgroup_get then host_get with group filter |
| "Show me CPU usage history for host 'web-01'" | Calls host_get, item_get, then history_get |
| "Put host 'db-01' into maintenance for 2 hours" | Calls maintenance_create |
| "Export the template 'Template OS Linux'" | Calls configuration_export |
| "How many items does host 'app-01' have?" | Calls item_get with countOutput |
| "Check the health of the MCP server" | Calls health_check |

The AI chains multiple tools automatically when needed.

Available Tools

All tools accept an optional server parameter to target a specific Zabbix instance (defaults to the first configured server).

Common Parameters (get methods)

Configuration Reference

All available options with detailed descriptions are in config.example.toml.

Zabbix Compatibility

The server uses the standard Zabbix JSON-RPC API. Methods not available in your Zabbix version will return an error from the Zabbix server — the MCP server itself does not enforce version checks.
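To see which API version you are talking to, the standard Zabbix endpoint answers `apiinfo.version` without authentication. A minimal sketch of the request — build-only here, since sending it requires a reachable Zabbix frontend; the URL is a placeholder:

```python
import json
import urllib.request

# apiinfo.version is a standard Zabbix JSON-RPC method that needs no auth,
# so it is a handy first check of URL reachability and API version.
url = "https://zabbix.example.com/api_jsonrpc.php"  # your Zabbix frontend URL
payload = {"jsonrpc": "2.0", "method": "apiinfo.version", "params": {}, "id": 1}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment against a real server
print(req.get_full_url())
```

If the method you need returns "Method not found", the fix is on the Zabbix side (version or permissions), not in this MCP server.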

Development

git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
python3 -m venv .venv
source .venv/bin/activate
pip install -e .

Test with MCP Inspector:

npx @modelcontextprotocol/inspector zabbix-mcp-server --config config.toml

License

AGPL-3.0 - see LICENSE.


