mshegolev/prometheus-mcp

prometheus_list_metrics

Read-only · Idempotent

List all Prometheus metric names, filtered by an optional substring, to discover valid metrics before running PromQL queries.

Instructions

List all metric names known to Prometheus, with optional substring filter.

Wraps GET /api/v1/label/__name__/values. Prometheus returns all metric names at once — no pagination. Output is capped at 500 metrics after filtering, with a truncation hint when more exist.

Use this first to discover valid metric names before writing PromQL expressions for prometheus_query or prometheus_query_range.

Examples:
- Use when: "What metrics does Prometheus have about HTTP requests?" → pattern='http'; read the metrics list.
- Use when: "List all node_exporter metrics" → pattern='node_'.
- Use when: Starting a monitoring investigation — list metrics first to discover what's instrumented, then query specific ones.
- Don't use when: You already know the exact metric name and want to query its value (call prometheus_query directly — one fewer round trip).
- Don't use when: You want to see current alert state (call prometheus_list_alerts).

Returns: dict with total_count / returned_count / truncated / pattern / metrics (sorted list).
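The filtering and capping behavior described above can be sketched in a few lines of Python. The endpoint path and the return fields come from the description; the base URL and function names are illustrative assumptions, not part of the actual server implementation.

```python
import json
import urllib.request

PROM_URL = "http://localhost:9090"  # assumption: a local Prometheus instance

def fetch_metric_names(base_url=PROM_URL):
    # Wraps GET /api/v1/label/__name__/values; Prometheus returns
    # every metric name in one response, with no pagination.
    url = f"{base_url}/api/v1/label/__name__/values"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]

def filter_metrics(names, pattern=None, cap=500):
    # Case-insensitive substring filter, then cap at 500 after
    # filtering, with a truncation flag when more metrics exist.
    if pattern:
        needle = pattern.lower()
        names = [n for n in names if needle in n.lower()]
    names = sorted(names)
    return {
        "total_count": len(names),
        "returned_count": min(len(names), cap),
        "truncated": len(names) > cap,
        "pattern": pattern,
        "metrics": names[:cap],
    }
```

Note that `truncated` reflects matches after filtering, so a broad pattern against a large instance can still trip the 500-metric cap.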

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| pattern | No | Optional substring filter applied case-insensitively to metric names. Example: 'http' returns all metrics containing 'http' in their name. Leave empty to list all metrics (capped at 500). | |

Output Schema

| Name | Required |
| --- | --- |
| total_count | Yes |
| returned_count | Yes |
| truncated | Yes |
| pattern | Yes |
| metrics | Yes |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark as read-only and idempotent; description adds critical behavioral details: no pagination, 500-metric cap after filtering, truncation hint, and exact return structure fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and front-loaded: single sentence for core action, then API detail, then usage guidelines with examples, then don't-use cases, then return format. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, behavior (no pagination, cap), parameter, output structure, and usage context. With output schema signaled, description is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is complete, with a clear description of the pattern parameter; however, the tool description adds no significant parameter semantics beyond what the schema provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists all metric names known to Prometheus with optional substring filter. Differentiates from siblings by explaining its role as a discovery tool before querying, with specific sibling mentions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use examples (discovering metric names before PromQL) and when-not-to-use examples (when exact metric known or alert state needed), with specific alternative tools named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mshegolev/prometheus-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.