
databricks-mcp

by ChrisChoTW

get_cluster_metrics

Retrieve CPU, memory, network, and disk metrics for Databricks clusters to monitor performance and resource utilization over time.

Instructions

Get cluster CPU/Memory/Network/Disk metrics

Data source: system.compute.node_timeline (one record per minute)

Args:
    cluster_id: Cluster ID
    start_time: Start time (ISO format), defaults to last 1 hour
    end_time: End time (ISO format), defaults to now
    limit: Max number of records to return, default 60 (1 hour)

Returns: Metrics time series and summary statistics
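
For concreteness, a hypothetical call payload might look like the following; the cluster ID and timestamps are invented examples, not values taken from this server:

    # Hypothetical arguments for get_cluster_metrics; all values are invented.
    args = {
        "cluster_id": "0123-456789-abcde123",  # required
        "start_time": "2024-01-15T08:00:00",   # ISO format, optional
        "end_time": "2024-01-15T09:00:00",     # optional, defaults to now
        "limit": 60                            # one row per minute = 1 hour
    }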

Input Schema

Name        Required  Description                          Default
cluster_id  Yes       Cluster ID                           -
start_time  No        Start time (ISO format)              last 1 hour
end_time    No        End time (ISO format)                now
limit       No        Max number of records to return      60
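
Because the schema rows above carry no descriptions, here is a minimal input-schema sketch as a Python dict; the types are assumptions inferred from the docstring, not the server's published schema:

    # Assumed input schema; types inferred from the docstring, not authoritative.
    INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            "cluster_id": {"type": "string"},
            "start_time": {"type": "string"},  # ISO format: YYYY-MM-DDTHH:MM:SS
            "end_time": {"type": "string"},
            "limit": {"type": "integer", "default": 60}
        },
        "required": ["cluster_id"]
    }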

Output Schema

No output fields documented.

Implementation Reference

  • The function retrieves and processes cluster metrics: it validates inputs, runs a SQL query against the system.compute.node_timeline system table, and computes summary statistics over the returned rows.
    # Imports assumed from the server module; Context and ToolError come from
    # the FastMCP framework this server is built on.
    import re
    from typing import Any, Dict, Optional

    from fastmcp import Context
    from fastmcp.exceptions import ToolError

    # Illustrative validation patterns. The server defines its own; these are
    # assumptions shaped like the values they guard (cluster IDs look like
    # "0123-456789-abcde123", timestamps like "2024-01-15T08:00:00").
    CLUSTER_ID_PATTERN = re.compile(r"^[0-9]{4}-[0-9]{6}-[0-9a-z]+$")
    DATETIME_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}$")

    # execute_sql and utc_to_taipei are helpers defined elsewhere in the server:
    # the first runs a statement against a Databricks SQL warehouse, the second
    # converts a UTC timestamp to Asia/Taipei local time.

    def get_cluster_metrics(
        ctx: Context,
        cluster_id: str,
        start_time: Optional[str] = None,
        end_time: Optional[str] = None,
        limit: int = 60
    ) -> Dict[str, Any]:
        """
        Get cluster CPU/Memory/Network/Disk metrics

        Data source: system.compute.node_timeline (one record per minute)

        Args:
            cluster_id: Cluster ID
            start_time: Start time (ISO format), defaults to last 1 hour
            end_time: End time (ISO format), defaults to now
            limit: Max number of records to return, default 60 (1 hour)

        Returns:
            Metrics time series and summary statistics
        """
        # Validate inputs before interpolating them into SQL; these regex
        # checks are what keep the f-string query below injection-safe.
        if not CLUSTER_ID_PATTERN.match(cluster_id):
            raise ToolError("Invalid cluster_id format")

        time_condition = f"cluster_id = '{cluster_id}'"
        if start_time:
            if not DATETIME_PATTERN.match(start_time):
                raise ToolError("Invalid start_time format. Use ISO format: YYYY-MM-DDTHH:MM:SS")
            time_condition += f" AND start_time >= '{start_time}'"
        if end_time:
            if not DATETIME_PATTERN.match(end_time):
                raise ToolError("Invalid end_time format. Use ISO format: YYYY-MM-DDTHH:MM:SS")
            time_condition += f" AND end_time <= '{end_time}'"

        metrics_sql = f"""
        SELECT
            start_time,
            end_time,
            instance_id,
            driver,
            node_type,
            ROUND(cpu_user_percent, 2) as cpu_user_pct,
            ROUND(cpu_system_percent, 2) as cpu_system_pct,
            ROUND(cpu_wait_percent, 2) as cpu_wait_pct,
            ROUND(cpu_user_percent + cpu_system_percent, 2) as cpu_total_pct,
            ROUND(mem_used_percent, 2) as mem_used_pct,
            ROUND(mem_swap_percent, 2) as mem_swap_pct,
            network_sent_bytes,
            network_received_bytes,
            disk_free_bytes_per_mount_point
        FROM system.compute.node_timeline
        WHERE {time_condition}
        ORDER BY start_time DESC
        LIMIT {limit}
        """

        ctx.info(f"Querying cluster {cluster_id} metrics...")
        metrics = execute_sql(ctx, metrics_sql)

        if not metrics:
            return {
                "cluster_id": cluster_id,
                "error": "No metrics data found; the cluster may not be running, or the time range has no data",
                "metrics": [],
                "summary": {}
            }

        # Numeric series for the summary; NULLs are treated as 0
        cpu_totals = [float(m.get("cpu_total_pct", 0) or 0) for m in metrics]
        mem_used = [float(m.get("mem_used_pct", 0) or 0) for m in metrics]

        # Capture the local-time range endpoints before the loop below
        # overwrites the raw timestamps with plain strings. Rows are ordered
        # newest-first, so the last row is the earliest.
        range_start_local = utc_to_taipei(metrics[-1].get("start_time"))
        range_end_local = utc_to_taipei(metrics[0].get("end_time"))

        # Add a local-time column and stringify timestamps for JSON output
        for m in metrics:
            m["time_local"] = utc_to_taipei(m.get("start_time"))
            m["start_time"] = str(m.get("start_time"))
            m["end_time"] = str(m.get("end_time"))

        summary = {
            "data_points": len(metrics),
            "time_range_local": {
                "start": range_start_local,
                "end": range_end_local
            },
            "cpu": {
                "avg_pct": round(sum(cpu_totals) / len(cpu_totals), 2) if cpu_totals else 0,
                "max_pct": round(max(cpu_totals), 2) if cpu_totals else 0,
                "min_pct": round(min(cpu_totals), 2) if cpu_totals else 0
            },
            "memory": {
                "avg_pct": round(sum(mem_used) / len(mem_used), 2) if mem_used else 0,
                "max_pct": round(max(mem_used), 2) if mem_used else 0,
                "min_pct": round(min(mem_used), 2) if mem_used else 0
            },
            "network": {
                "total_sent_gb": round(sum(int(m.get("network_sent_bytes", 0) or 0) for m in metrics) / 1024**3, 3),
                "total_received_gb": round(sum(int(m.get("network_received_bytes", 0) or 0) for m in metrics) / 1024**3, 3)
            }
        }

        return {
            "cluster_id": cluster_id,
            "metrics": metrics,
            "summary": summary
        }
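
A minimal usage sketch, assuming the FastMCP runtime supplies the `ctx` argument; the cluster ID below is illustrative:

    # Hypothetical call; `ctx` is injected by the MCP framework, and the
    # cluster ID is a made-up example.
    result = get_cluster_metrics(ctx, cluster_id="0123-456789-abcde123", limit=30)
    if "error" not in result:
        summary = result["summary"]
        print(summary["cpu"]["avg_pct"], summary["memory"]["max_pct"])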
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the data source and that it returns time series and summary statistics, which adds useful behavioral context. However, it lacks details on permissions, rate limits, error handling, or whether it's read-only/destructive, which are important for a metrics retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, data source, args, returns) and uses bullet points for readability. It's appropriately sized without fluff, though the data source detail might be slightly verbose for a pure purpose statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no annotations, but with an output schema), the description is fairly complete. It covers purpose, parameters, and return values, and the output schema reduces the need to detail return formats. However, it could improve by addressing usage guidelines and more behavioral aspects like error cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It effectively explains all four parameters: 'cluster_id' is identified, 'start_time' and 'end_time' are given an ISO format and defaults, and 'limit' is given both a purpose and a default. This adds significant meaning beyond the bare schema, though it could elaborate on constraints such as valid time ranges.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resources ('cluster CPU/Memory/Network/Disk metrics'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'get_cluster_events' or 'get_run_task_metrics', which might also involve cluster-related data retrieval, leaving some ambiguity about when this specific tool is preferred.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the data source ('system.compute.node_timeline'), but doesn't specify use cases, prerequisites, or exclusions compared to siblings like 'get_cluster_events' or 'get_run_task_metrics', leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
