get_flow_runs_by_flow

Retrieve flow runs for a specific workflow in Prefect, with options to filter by state type, limit results, and paginate through large datasets.

Instructions

Get flow runs for a specific flow.

Args:
    flow_id: The flow UUID
    limit: Maximum number of flow runs to return
    offset: Number of flow runs to skip
    state_type: Filter by state type (e.g., "RUNNING", "COMPLETED", "FAILED")

Returns: A list of flow runs for the specified flow

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| flow_id | Yes | The flow UUID | |
| limit | No | Maximum number of flow runs to return | |
| offset | No | Number of flow runs to skip | |
| state_type | No | Filter by state type (e.g., "RUNNING", "COMPLETED", "FAILED") | |
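Since the schema rows above carry no descriptions or defaults, here is a hedged sketch of a well-formed arguments payload, with a tiny validator derived from the Required column. The UUID is a placeholder, not a real flow:

```python
# Key sets taken from the input schema table; flow_id is the only required key.
REQUIRED = {"flow_id"}
OPTIONAL = {"limit", "offset", "state_type"}

def validate_args(args: dict) -> bool:
    """True if all required keys are present and no unknown keys appear."""
    keys = set(args)
    return REQUIRED <= keys and keys <= REQUIRED | OPTIONAL

# Illustrative values only; the flow_id is a placeholder UUID.
example_args = {
    "flow_id": "00000000-0000-0000-0000-000000000000",
    "limit": 10,
    "offset": 0,
    "state_type": "FAILED",
}

assert validate_args(example_args)
assert not validate_args({"limit": 5})  # rejected: missing flow_id
```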

Implementation Reference

  • The main handler function for the 'get_flow_runs_by_flow' tool. It uses the Prefect client to query flow runs filtered by flow_id and optional state_type, adds UI links, and returns the result as text content.
    @mcp.tool
    async def get_flow_runs_by_flow(
        flow_id: str,
        limit: Optional[int] = None,
        offset: Optional[int] = None,
        state_type: Optional[str] = None,
    ) -> List[Union[types.TextContent, types.ImageContent, types.EmbeddedResource]]:
        """
        Get flow runs for a specific flow.
        
        Args:
            flow_id: The flow UUID
            limit: Maximum number of flow runs to return
            offset: Number of flow runs to skip
            state_type: Filter by state type (e.g., "RUNNING", "COMPLETED", "FAILED")
            
        Returns:
            A list of flow runs for the specified flow
        """
        async with get_client() as client:
            # read_flow_runs expects typed filter objects
            # (from prefect.client.schemas.filters), not raw dicts:
            # match the flow by UUID, optionally narrow by state type.
            flow_filter = FlowFilter(id=FlowFilterId(any_=[UUID(flow_id)]))
            flow_run_filter = None
            if state_type:
                flow_run_filter = FlowRunFilter(
                    state=FlowRunFilterState(
                        type=FlowRunFilterStateType(any_=[StateType(state_type.upper())])
                    )
                )
            
            flow_runs = await client.read_flow_runs(
                flow_filter=flow_filter,
                flow_run_filter=flow_run_filter,
                limit=limit,
                offset=offset,
            )
            
            # Add UI links to each flow run
            flow_runs_result = {
                "flow_runs": [
                    {
                        **flow_run.dict(),
                        "ui_url": get_flow_run_url(str(flow_run.id))
                    }
                    for flow_run in flow_runs
                ]
            }
            
            return [types.TextContent(type="text", text=str(flow_runs_result))]
  • Conditional import of the flow_run module in main.py, which triggers registration of all @mcp.tool functions in flow_run.py, including 'get_flow_runs_by_flow'.
    if APIType.FLOW_RUN.value in apis:
        info("Loading Flow Run API...")
        from . import flow_run
  • Helper function used by get_flow_runs_by_flow (and other flow run tools) to generate UI URLs for flow runs.
    def get_flow_run_url(flow_run_id: str) -> str:
        base_url = PREFECT_API_URL.replace("/api", "")
        return f"{base_url}/flow-runs/{flow_run_id}"
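The helper above and the handler's filtering intent can be exercised with a small dependency-free sketch. Here, build_filters is a toy stand-in for the Prefect client's real filter types, and the localhost api_url default is an assumed example, not this server's configuration:

```python
from typing import Optional

def build_filters(flow_id: str, state_type: Optional[str] = None) -> dict:
    # Sketch of the filtering intent: always match one flow,
    # optionally narrow to runs whose state type matches.
    filters = {"flow_id": {"eq_": flow_id}}
    if state_type:
        filters["state"] = {"type": {"any_": [state_type.upper()]}}
    return filters

def get_flow_run_url(flow_run_id: str,
                     api_url: str = "http://localhost:4200/api") -> str:
    # The Prefect UI lives at the API host with the trailing "/api" stripped.
    base_url = api_url.replace("/api", "")
    return f"{base_url}/flow-runs/{flow_run_id}"

assert build_filters("abc") == {"flow_id": {"eq_": "abc"}}
assert build_filters("abc", "failed")["state"] == {"type": {"any_": ["FAILED"]}}
assert get_flow_run_url("123") == "http://localhost:4200/flow-runs/123"
```

Note that state_type is upper-cased before filtering, so lowercase inputs like "failed" still match Prefect's canonical state names.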
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. The verb "Get" implies a read-only operation, and the description notes that a list is returned, but it doesn't disclose behavioral traits such as pagination behavior (limit/offset usage), rate limits, authentication needs, error handling, or whether the operation is safe to repeat. For a tool with four parameters and no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
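As a hedged sketch of what such disclosure could look like, the MCP tool specification defines behavioral hint fields on a tool's annotations. The values below illustrate what a read-only tool like this one could declare; they are not taken from this server:

```python
# Illustrative MCP tool annotations for a read-only query tool.
# Field names follow the MCP spec; the values are an example of what
# this server *could* declare, not what it actually does.
annotations = {
    "title": "Get flow runs by flow",
    "readOnlyHint": True,       # only reads Prefect state
    "destructiveHint": False,   # never deletes or mutates runs
    "idempotentHint": True,     # repeated calls with the same args are safe
    "openWorldHint": True,      # talks to an external Prefect API
}

assert annotations["readOnlyHint"]
```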

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose clearly, followed by structured sections for Args and Returns. Every sentence earns its place, with no redundant information. It could be slightly more concise by integrating the Args into the main text, but the structure is effective and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no annotations, no output schema), the description is partially complete. It covers the purpose and parameters well, but lacks details on behavioral aspects (e.g., pagination, errors) and output specifics beyond 'A list of flow runs.' For a read operation with filtering and pagination, more context on how results are structured or limited would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
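For example, the undocumented pagination behavior matters in practice: an agent fetching a long run history has to loop with limit/offset until a short page comes back. A sketch of that loop, where call_tool is a hypothetical stand-in for an MCP client invocation:

```python
def fetch_all_flow_runs(call_tool, flow_id: str, page_size: int = 50) -> list:
    """Page through get_flow_runs_by_flow via limit/offset until exhausted."""
    runs, offset = [], 0
    while True:
        page = call_tool(
            "get_flow_runs_by_flow",
            {"flow_id": flow_id, "limit": page_size, "offset": offset},
        )
        runs.extend(page)
        if len(page) < page_size:  # short page means no more results
            return runs
        offset += page_size

# A fake backend with 120 runs, to exercise the loop without a server:
fake_runs = [{"id": i} for i in range(120)]

def fake_call_tool(name, args):
    start = args["offset"]
    return fake_runs[start:start + args["limit"]]

assert len(fetch_all_flow_runs(fake_call_tool, "some-flow")) == 120
```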

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for all 4 parameters: flow_id is explained as 'The flow UUID,' limit as 'Maximum number of flow runs to return,' offset as 'Number of flow runs to skip,' and state_type as 'Filter by state type' with examples. This goes well beyond the bare schema, providing clear context for each parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get flow runs for a specific flow.' It specifies the verb ('Get') and resource ('flow runs'), and distinguishes it from the sibling tool 'get_flow_runs' (which presumably gets all flow runs, not filtered by a specific flow). However, it doesn't explicitly contrast with 'get_flow_run' (singular) or other filtering tools, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying it's for 'a specific flow,' suggesting it should be used when you have a flow_id and want its runs. However, it doesn't explicitly state when to use this versus alternatives like 'get_flow_runs' (which might get all runs) or 'get_flow_run' (singular), nor does it mention prerequisites or exclusions. This leaves some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
