OpenTelemetry MCP Server

list_labels

Discover available label names in Prometheus metrics to identify filtering options for data analysis and troubleshooting.

Instructions

Get all label names available in Prometheus. Use this to discover what labels you can filter by.

Input Schema

Name      Required   Description                               Default
metric    No         Optional metric name to get labels for    (none)
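
For orientation, this tool wraps Prometheus's label-names endpoint. Below is a minimal sketch of the equivalent raw HTTP call; the localhost:9090 address, the 'up' metric, and the use of the requests library are illustrative assumptions, not part of this server:

    import requests

    # GET /api/v1/labels returns every label name; an optional match[] series
    # selector narrows the result to labels present on matching series.
    resp = requests.get(
        "http://localhost:9090/api/v1/labels",  # assumed local Prometheus
        params={"match[]": "up"},               # optional: labels on the 'up' metric
        timeout=10,
    )
    body = resp.json()
    if body.get("status") == "success":
        print(body["data"])  # e.g. ['__name__', 'instance', 'job']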

Implementation Reference

  • The actual implementation of the 'list_labels' tool, which queries Prometheus for label names.
    from __future__ import annotations

    import logging
    from typing import Any, Dict, Optional

    logger = logging.getLogger(__name__)  # assumed; the server defines its own logger
    # PrometheusClient is this server's Prometheus client wrapper, imported elsewhere in its codebase.

    async def list_labels(
        client: PrometheusClient,
        metric: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Get all label names in Prometheus.
        
        Args:
            client: Prometheus client
            metric: Optional metric to get labels for
            
        Returns:
            List of label names
        """
        try:
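            # Scope the label query to the given metric by building a series selector.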
            match = [f"{{{metric}}}"] if metric else None
            result = await client.labels(match)
            
            if result.get("status") == "success":
                labels = result.get("data", [])
                return {
                    "success": True,
                    "count": len(labels),
                    "labels": labels
                }
            else:
                return {
                    "success": False,
                    "error": "Failed to fetch labels"
                }
        except Exception as e:
            logger.error(f"Error listing labels: {e}")
            return {
                "success": False,
                "error": str(e)
            }
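
For context, here is one way to exercise the function above with a stubbed client; the stub is hypothetical and merely mimics the labels() call the real PrometheusClient exposes:

    import asyncio

    class StubClient:
        # Hypothetical stand-in for the server's PrometheusClient.
        async def labels(self, match):
            return {"status": "success", "data": ["__name__", "instance", "job"]}

    result = asyncio.run(list_labels(StubClient(), metric="up"))
    print(result)
    # {'success': True, 'count': 3, 'labels': ['__name__', 'instance', 'job']}
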
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool retrieves label names but doesn't cover critical aspects such as whether the call is read-only, whether it requires authentication, or how it handles rate limits, pagination, and errors. For a tool with no annotations, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: two sentences that directly state the purpose and usage without any fluff. Each sentence earns its place by providing essential information, making it easy to parse and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and usage but lacks details on behavior, output format, or error cases. Without annotations or output schema, more context on what the tool returns would improve completeness for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage: its one optional parameter, 'metric', is documented as 'Optional metric name to get labels for.' The tool description adds no parameter semantics beyond this; it neither explains how the metric parameter affects results nor provides examples. A baseline score of 3 is appropriate since the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
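
To make the optional parameter's effect concrete, a short sketch reusing list_labels and a client from the implementation above (the 'up' metric is an illustrative assumption):

    async def compare(client):
        # Without metric: every label name known to this Prometheus instance.
        everything = await list_labels(client)
        # With metric: only labels present on series of the 'up' metric.
        scoped = await list_labels(client, metric="up")
        return everything["count"], scoped["count"]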

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get all label names available in Prometheus.' It specifies the verb ('Get') and resource ('label names'), and distinguishes it from siblings by focusing on label discovery rather than values, metrics, or queries. However, it doesn't explicitly differentiate from 'list_log_labels' (which handles logs vs. metrics), leaving minor ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: 'Use this to discover what labels you can filter by.' This implies it's for exploration before filtering, distinguishing it from tools like 'query_prometheus' that perform actual queries. It doesn't explicitly state when not to use it or name alternatives, but the context is sufficient for basic differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
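
A sketch of that explore-then-filter flow, reusing list_labels from the implementation above; the metric, the 'job' label, and the hand-off to a tool such as query_prometheus are illustrative assumptions:

    async def explore_then_query(client):
        # Step 1: discover which labels exist before trying to filter by them.
        found = await list_labels(client, metric="http_requests_total")
        if not found["success"]:
            return None
        # Step 2: only build a PromQL filter from a label we know exists.
        if "job" in found["labels"]:
            return 'http_requests_total{job="api"}'  # hand off to query_prometheus
        return "http_requests_total"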
