
Hatchet MCP Server

by GJakobi

get_queue_metrics

Monitor queue depth and job status counts for Hatchet workflows to track performance and identify bottlenecks in job processing.

Instructions

Get queue depth and job counts by status.

Args: workflow_name: Optional workflow name to filter metrics

Returns counts of jobs in each status (queued, running, completed, failed).

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| workflow_name | No | | |
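Based on the handler's return statement shown under Implementation Reference, a call and its response might look like the following sketch ("video-encode" is a hypothetical workflow name and the counts are illustrative, not real data):

```python
# Illustrative only: "video-encode" is a made-up workflow name.
arguments = {"workflow_name": "video-encode"}  # omit the key to aggregate all workflows

# Shape of a successful response (values are made up):
response = {
    "workflow_name": "video-encode",
    "time_range_hours": 24,
    "counts": {
        "queued": 3,
        "running": 1,
        "completed": 40,
        "failed": 2,
        "cancelled": 0,
        "total": 46,
    },
}

# Per-status numbers for recognized statuses sum to at most "total";
# runs with unrecognized statuses still increment "total".
assert sum(v for k, v in response["counts"].items() if k != "total") <= response["counts"]["total"]
```

On failure, the handler instead returns `{"error": "<message>"}`.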

Implementation Reference

  • The `get_queue_metrics` tool handler is defined using the `@mcp.tool()` decorator in `src/hatchet_mcp/server.py`. It fetches workflow runs from the last 24 hours, optionally filtered by workflow name, and calculates counts for each status.
    # Imports used by this handler (alongside the mcp/get_hatchet_client setup)
    from datetime import datetime, timedelta, timezone
    from typing import Any

    @mcp.tool()
    async def get_queue_metrics(workflow_name: str | None = None) -> dict:
        """
        Get queue depth and job counts by status.
    
        Args:
            workflow_name: Optional workflow name to filter metrics
    
        Returns counts of jobs in each status (queued, running, completed, failed).
        """
        try:
            hatchet = get_hatchet_client()
            # Get runs from the last 24 hours and count by status
            params: dict[str, Any] = {
                "since": datetime.now(tz=timezone.utc) - timedelta(hours=24),
                "limit": 1000,
            }
    
            if workflow_name:
                workflows = await hatchet.workflows.aio_list()
                workflow_ids = [
                    w.metadata.id for w in (workflows.rows or [])
                    if hasattr(w, "name") and w.name == workflow_name
                ]
                if workflow_ids:
                    params["workflow_ids"] = workflow_ids
    
            runs = await hatchet.runs.aio_list(**params)
    
            # Count by status
            counts = {
                "queued": 0,
                "running": 0,
                "completed": 0,
                "failed": 0,
                "cancelled": 0,
                "total": 0,
            }
    
            for run in (runs.rows or []):
                counts["total"] += 1
                if hasattr(run, "status"):
                    status_name = run.status.value.lower() if hasattr(run.status, "value") else str(run.status).lower()
                    if status_name in counts:
                        counts[status_name] += 1
    
            return {
                "workflow_name": workflow_name or "all",
                "time_range_hours": 24,
                "counts": counts,
            }
        except Exception as e:
            return {"error": str(e)}
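The status-counting loop at the end of the handler can be exercised in isolation. A minimal sketch, assuming status objects behave like a string-valued enum (the `RunStatus` enum here is a stand-in, not the actual Hatchet SDK type):

```python
from enum import Enum


class RunStatus(Enum):
    # Stand-in for the SDK's run-status enum; names are assumptions.
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    CANCELLED = "CANCELLED"


def count_by_status(statuses, tracked=("queued", "running", "completed", "failed", "cancelled")):
    """Count runs per lowercase status name, plus an overall total."""
    counts = {name: 0 for name in tracked}
    counts["total"] = 0
    for status in statuses:
        counts["total"] += 1
        # Mirror the handler: prefer the enum's .value, fall back to str().
        name = status.value.lower() if hasattr(status, "value") else str(status).lower()
        if name in counts:
            counts[name] += 1
    return counts


print(count_by_status([RunStatus.QUEUED, RunStatus.FAILED, RunStatus.QUEUED]))
# → {'queued': 2, 'running': 0, 'completed': 0, 'failed': 1, 'cancelled': 0, 'total': 3}
```

Note that unrecognized statuses are counted in `total` but in no per-status bucket, so the buckets may sum to less than `total`.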
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the return structure (counts by status: queued, running, completed, failed) and the filtering behavior. However, it lacks safety/performance notes (e.g., whether this is cached, rate limits, or real-time vs. eventually consistent data).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses an efficient docstring format (Args/Returns) that front-loads the purpose and provides structured supplemental detail. Every sentence earns its place; the description is appropriately compact for a single-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (1 optional parameter, no nested objects) and lack of output schema, the description adequately compensates by specifying the return format (counts by four status categories). It is complete enough for an agent to invoke successfully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (only types and title provided). The description compensates well by stating the parameter is 'Optional workflow name to filter metrics,' adding both semantic meaning (filtering) and cardinality (optional) that the JSON schema structure alone does not convey.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening line 'Get queue depth and job counts by status' provides a clear verb and specific resource. However, it does not explicitly differentiate from sibling tools like list_runs or get_run_status, which also deal with job status but at a per-run rather than aggregate queue level.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The Args section notes that workflow_name optionally filters results, implying the tool is meant for aggregate monitoring. However, it offers no explicit guidance on when to choose this over list_runs (aggregate counts vs. individual job listings) and does not mention prerequisites such as the named workflow existing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
