Prometheus MCP Server

list_metrics

Retrieve and display all metric names available in Prometheus, for use in monitoring and analyzing system performance data.

Instructions

List all available metrics in Prometheus

Input Schema

No arguments
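
Because the published schema takes no arguments, the tool can be invoked with an empty arguments object. Below is a minimal client-side sketch using the official MCP Python SDK over stdio; the launch command passed to StdioServerParameters is a placeholder assumption, not the documented way to start this server.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Placeholder launch command; substitute however the Prometheus MCP
    # server is actually started in your environment.
    server_params = StdioServerParameters(command="python", args=["-m", "prometheus_mcp_server"])

    async def main() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # list_metrics requires no arguments; pass an empty dict.
                result = await session.call_tool("list_metrics", arguments={})
                print(result.content)

    asyncio.run(main())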

Implementation Reference

  • The handler function for the 'list_metrics' tool. It fetches all available Prometheus metrics via the label/__name__/values API endpoint, applies optional filtering by pattern, supports pagination with limit/offset, and reports progress when a context is available. Returns a structured response with the metrics list and pagination metadata. (The make_prometheus_request helper it calls is sketched after this list.)
    async def list_metrics(
        limit: Optional[int] = None,
        offset: int = 0,
        filter_pattern: Optional[str] = None,
        ctx: Context | None = None
    ) -> Dict[str, Any]:
        """Retrieve a list of all metric names available in Prometheus.

        Args:
            limit: Maximum number of metrics to return (default: all metrics)
            offset: Number of metrics to skip for pagination (default: 0)
            filter_pattern: Optional substring to filter metric names (case-insensitive)

        Returns:
            Dictionary containing:
            - metrics: List of metric names
            - total_count: Total number of metrics (before pagination)
            - returned_count: Number of metrics returned
            - offset: Current offset
            - has_more: Whether more metrics are available
        """
        logger.info("Listing available metrics", limit=limit, offset=offset, filter_pattern=filter_pattern)

        # Report progress if context available
        if ctx:
            await ctx.report_progress(progress=0, total=100, message="Fetching metrics list...")

        data = make_prometheus_request("label/__name__/values")

        if ctx:
            await ctx.report_progress(progress=50, total=100, message=f"Processing {len(data)} metrics...")

        # Apply filter if provided
        if filter_pattern:
            filtered_data = [m for m in data if filter_pattern.lower() in m.lower()]
            logger.debug("Applied filter", original_count=len(data), filtered_count=len(filtered_data), pattern=filter_pattern)
            data = filtered_data

        total_count = len(data)

        # Apply pagination
        start_idx = offset
        end_idx = offset + limit if limit is not None else len(data)
        paginated_data = data[start_idx:end_idx]

        result = {
            "metrics": paginated_data,
            "total_count": total_count,
            "returned_count": len(paginated_data),
            "offset": offset,
            "has_more": end_idx < total_count
        }

        if ctx:
            await ctx.report_progress(progress=100, total=100, message=f"Retrieved {len(paginated_data)} of {total_count} metrics")

        logger.info("Metrics list retrieved", total_count=total_count, returned_count=len(paginated_data), offset=offset, has_more=result["has_more"])

        return result
  • The @mcp.tool decorator that registers the list_metrics function as an MCP tool, providing its description and annotations including title, icon, and operational hints.
    @mcp.tool(
        description="List all available metrics in Prometheus with optional pagination support",
        annotations={
            "title": "List Available Metrics",
            "icon": "📋",
            "readOnlyHint": True,
            "destructiveHint": False,
            "idempotentHint": True,
            "openWorldHint": True
        }
    )
  • Helper function to fetch and cache the list of available metrics from Prometheus. Prepared for future integration with FastMCP completion capabilities, though currently not used by the list_metrics handler.
    def get_cached_metrics() -> List[str]:
        """Get metrics list with caching to improve completion performance.

        This helper function is available for future completion support when
        FastMCP implements the completion capability. For now, it can be used
        internally to optimize repeated metric list requests.
        """
        current_time = time.time()

        # Check if cache is valid
        if _metrics_cache["data"] is not None and (current_time - _metrics_cache["timestamp"]) < _CACHE_TTL:
            logger.debug("Using cached metrics list", cache_age=current_time - _metrics_cache["timestamp"])
            return _metrics_cache["data"]

        # Fetch fresh metrics
        try:
            data = make_prometheus_request("label/__name__/values")
            _metrics_cache["data"] = data
            _metrics_cache["timestamp"] = current_time
            logger.debug("Refreshed metrics cache", metric_count=len(data))
            return data
        except Exception as e:
            logger.error("Failed to fetch metrics for cache", error=str(e))
            # Return cached data if available, even if expired
            return _metrics_cache["data"] if _metrics_cache["data"] is not None else []
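
Both snippets above call helpers that are defined elsewhere in the module and are not shown on this page: a make_prometheus_request wrapper around the Prometheus HTTP API, and the _metrics_cache / _CACHE_TTL state used by get_cached_metrics. The following is a minimal sketch of what those pieces might look like, with names and behaviour inferred from the calls above rather than taken from the actual source; the PROMETHEUS_URL value and the 60-second TTL are placeholder assumptions.

    from typing import Any, Dict, Optional

    import requests

    # Assumed cache state for get_cached_metrics (values are placeholders).
    _CACHE_TTL = 60  # seconds
    _metrics_cache: Dict[str, Any] = {"data": None, "timestamp": 0.0}

    # Assumed base URL of the Prometheus server (placeholder).
    PROMETHEUS_URL = "http://localhost:9090"

    def make_prometheus_request(endpoint: str, params: Optional[dict] = None) -> Any:
        """GET {PROMETHEUS_URL}/api/v1/<endpoint> and return the 'data' field.

        For 'label/__name__/values' the Prometheus API responds with
        {"status": "success", "data": ["up", "go_goroutines", ...]}, so the
        caller receives a flat list of metric names.
        """
        url = f"{PROMETHEUS_URL}/api/v1/{endpoint}"
        response = requests.get(url, params=params, timeout=10)
        response.raise_for_status()
        payload = response.json()
        if payload.get("status") != "success":
            raise RuntimeError(f"Prometheus API error: {payload.get('error', 'unknown')}")
        return payload["data"]

The real helper in the server may additionally handle authentication, custom headers, and structured error logging; the sketch only covers the behaviour the two snippets above depend on.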

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pab1it0/prometheus-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.