mshegolev/prometheus-mcp

Server Configuration

The environment variables used to configure the server (only PROMETHEUS_URL is required).

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| PROMETHEUS_URL | Yes | Prometheus server URL, e.g. https://prometheus.example.com (no trailing slash) | |
| PROMETHEUS_TOKEN | No | Bearer token for authentication (takes precedence over Basic auth) | |
| PROMETHEUS_USERNAME | No | HTTP Basic auth username | |
| PROMETHEUS_PASSWORD | No | HTTP Basic auth password | |
| PROMETHEUS_SSL_VERIFY | No | Set 'false' for self-signed certificates | true |
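The documented precedence (Bearer token wins over Basic credentials) can be sketched as follows. This is not the server's actual implementation, just a minimal illustration of how the variables above might be combined into an Authorization header:

```python
import base64

def build_auth_headers(env):
    """Pick an auth scheme from the configuration variables.

    PROMETHEUS_TOKEN (Bearer) takes precedence over Basic auth;
    Basic auth needs both username and password."""
    token = env.get("PROMETHEUS_TOKEN")
    if token:
        return {"Authorization": f"Bearer {token}"}
    user = env.get("PROMETHEUS_USERNAME")
    password = env.get("PROMETHEUS_PASSWORD")
    if user and password:
        creds = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    return {}  # unauthenticated
```

With only PROMETHEUS_USERNAME/PROMETHEUS_PASSWORD set this yields a Basic header; once PROMETHEUS_TOKEN is also set, the Bearer header is used instead.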

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

prometheus_list_metrics

List all metric names known to Prometheus, with optional substring filter.

Wraps GET /api/v1/label/__name__/values. Prometheus returns all metric names at once — no pagination. Output is capped at 500 metrics after filtering, with a truncation hint when more exist.

Use this first to discover valid metric names before writing PromQL expressions for prometheus_query or prometheus_query_range.

Examples:
- Use when: "What metrics does Prometheus have about HTTP requests?" → pattern='http'; read the metrics list.
- Use when: "List all node_exporter metrics" → pattern='node_'.
- Use when: starting a monitoring investigation, list metrics first to discover what's instrumented, then query specific ones.
- Don't use when: you already know the exact metric name and want to query its value (call prometheus_query directly, one fewer round trip).
- Don't use when: you want to see current alert state (call prometheus_list_alerts).

Returns: dict with total_count / returned_count / truncated / pattern / metrics (sorted list).
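The filter-then-cap behavior described above can be sketched in a few lines. This is an assumed reconstruction of the tool's return shape, not its actual source:

```python
def filter_metrics(all_names, pattern=None, cap=500):
    """Apply an optional substring filter, sort, and cap the result at
    `cap` metrics, flagging truncation when more matches exist."""
    names = sorted(n for n in all_names if not pattern or pattern in n)
    return {
        "total_count": len(names),
        "returned_count": min(len(names), cap),
        "truncated": len(names) > cap,
        "pattern": pattern,
        "metrics": names[:cap],
    }
```

Note the filter runs before the cap, so `truncated` means "more than 500 metrics matched your pattern", not "Prometheus has more than 500 metrics".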

prometheus_query

Execute an instant PromQL query against Prometheus.

Wraps GET /api/v1/query. Returns the result type (vector, scalar, matrix, string) and a list of samples each carrying labels, timestamp, and value. For vector results each element is one time series at the evaluation instant.

Examples:
- Use when: "Is the payment service up right now?" → query='up{job="payment-service"}'.
- Use when: "What is the current HTTP request rate?" → query='sum(rate(http_requests_total[5m])) by (job)'.
- Use when: "Show me all metrics for a specific instance" → query='{instance="localhost:9090"}'.
- Don't use when: you want to see how a metric changed over time (call prometheus_query_range with start/end/step).
- Don't use when: you don't know the metric name yet (call prometheus_list_metrics first to discover names).

Returns: dict with query / time / result_type / result_count / data (list of samples with labels, timestamp, value).
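For reference, this is roughly how a vector response from /api/v1/query maps onto the flat sample list described above. The Prometheus JSON shape (`metric` labels plus a `[timestamp, value]` pair per series) is standard; the output dict is a hedged sketch of this tool's format:

```python
def parse_instant_result(api_json, query):
    """Flatten a Prometheus instant-query response into one sample
    per time series, each carrying labels, timestamp, and value."""
    data = api_json["data"]
    samples = []
    if data["resultType"] == "vector":
        for series in data["result"]:
            ts, value = series["value"]  # value is a string, e.g. "1"
            samples.append({"labels": series["metric"], "timestamp": ts, "value": value})
    return {
        "query": query,
        "result_type": data["resultType"],
        "result_count": len(samples),
        "data": samples,
    }
```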

prometheus_query_range

Execute a PromQL range query returning time-series data points.

Wraps GET /api/v1/query_range. Returns one series per matching time series, each with labels and a list of [timestamp, value] pairs. Total points across all series are capped at 5000 with a truncation hint.

Prometheus may reject the query with HTTP 422 (bad_data) if the step produces too many data points (> 11,000 per series). Increase the step or narrow the time range if this happens.

Note: The Prometheus API does not support filtering by branch or commit in this endpoint — filters are expressed purely in PromQL label matchers.

Examples:
- Use when: "Show me CPU usage over the last hour with 1-minute resolution" → query='rate(node_cpu_seconds_total[5m])', step='1m'.
- Use when: "Graph HTTP error rate for the last 24 hours" → query='rate(http_requests_total{status=~"5.."}[5m])', start='2024-01-15T00:00:00Z', end='2024-01-16T00:00:00Z', step='5m'.
- Use when: investigating a past incident, pick the time window of the incident and use a fine step.
- Don't use when: you only want the current value (call prometheus_query, faster and simpler).
- Don't use when: you want alert history (call prometheus_list_alerts).

Returns: dict with query / start / end / step / result_type / series_count / total_points / truncated / data (list of series with labels, point_count, values).
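The 11,000-point limit above is easy to check before querying: Prometheus evaluates roughly one point per step across the window. A small sketch (timestamps in seconds; the limit value is Prometheus's default, not specific to this server):

```python
def range_query_points(start_ts, end_ts, step_seconds, limit=11000):
    """Estimate per-series evaluation points for a range query:
    (end - start) / step + 1. Returns (points, exceeds_limit)."""
    points = int((end_ts - start_ts) // step_seconds) + 1
    return points, points > limit
```

So a 1-hour window at a 60 s step is cheap, while 30 days at a 15 s step blows past the limit and would draw the 422 bad_data response; increase the step or narrow the range.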

prometheus_list_alerts

List all active and pending alerts from Prometheus.

Wraps GET /api/v1/alerts. Returns every alert that Prometheus currently tracks, with labels (including alertname, severity), state (firing / pending), the time it became active, and its current value. Also returns a summary grouped by state and a count by severity label.

Examples:
- Use when: "Are there any firing alerts right now?" → check firing_count and alerts where state='firing'.
- Use when: "Show me all critical alerts" → filter alerts by labels.severity == 'critical'.
- Use when: checking system health during an incident, list alerts first to understand what's firing before querying metrics.
- Don't use when: you want historical alert data (Prometheus stores only current state; use Alertmanager or a recording rule for history).
- Don't use when: you want raw metric values (call prometheus_query or prometheus_query_range).

Returns: dict with total_count / firing_count / pending_count / state_summary / alerts (list with labels, annotations, state, active_at, value).
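The state and severity rollups described above can be sketched like this; the field names mirror the documented return shape, but the grouping logic is an assumption:

```python
def summarise_alerts(alerts):
    """Count alerts by state (firing/pending) and by severity label,
    treating a missing severity label as 'none'."""
    state_summary, severity_counts = {}, {}
    for alert in alerts:
        state = alert["state"]
        state_summary[state] = state_summary.get(state, 0) + 1
        sev = alert.get("labels", {}).get("severity", "none")
        severity_counts[sev] = severity_counts.get(sev, 0) + 1
    return {
        "total_count": len(alerts),
        "firing_count": state_summary.get("firing", 0),
        "pending_count": state_summary.get("pending", 0),
        "state_summary": state_summary,
        "severity_counts": severity_counts,
    }
```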

prometheus_list_targets

List Prometheus scrape targets, summarised by job and health.

Wraps GET /api/v1/targets. Returns scrape targets with job name, instance address, health status (up / down / unknown), last scrape duration in milliseconds, and any last error. Also returns a summary grouped by job and health state.

Examples:
- Use when: "Which targets are currently down?" → filter targets where health='down' and check last_error.
- Use when: "How many instances of the 'node-exporter' job are up?" → check job_summary for the 'node-exporter' entry.
- Use when: investigating a scrape failure, list targets for the affected job to see which instances have errors.
- Don't use when: you want metric values from a target (call prometheus_query with label matchers instead).
- Don't use when: you want alert status (call prometheus_list_alerts instead).

Returns: dict with state_filter / total_count / up_count / down_count / unknown_count / job_summary (per-job health counts) / targets (list with job, instance, health, last_scrape_duration_ms, last_error, labels).
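The per-job health rollup in job_summary can be sketched as below; again the shape follows the documented return fields while the implementation is hypothetical:

```python
def summarise_targets(targets):
    """Group scrape targets by job, counting up/down/unknown health
    per job, plus overall counts."""
    job_summary = {}
    for target in targets:
        counts = job_summary.setdefault(target["job"], {"up": 0, "down": 0, "unknown": 0})
        health = target.get("health", "unknown")
        counts[health] = counts.get(health, 0) + 1
    return {
        "total_count": len(targets),
        "up_count": sum(c["up"] for c in job_summary.values()),
        "down_count": sum(c["down"] for c in job_summary.values()),
        "unknown_count": sum(c["unknown"] for c in job_summary.values()),
        "job_summary": job_summary,
    }
```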

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mshegolev/prometheus-mcp'
