
get_task_runs

Retrieve task runs from Prefect workflows with filtering options for task name, state, tags, and time ranges to monitor and analyze workflow execution.

Instructions

Get a list of task runs with optional filtering.

Args:
    limit: Maximum number of task runs to return
    offset: Number of task runs to skip
    task_name: Filter by task name
    state_type: Filter by state type (e.g., "RUNNING", "COMPLETED", "FAILED")
    state_name: Filter by state name
    tags: Filter by tags
    start_time_before: ISO formatted datetime string; only include runs started before this time
    start_time_after: ISO formatted datetime string; only include runs started after this time

Returns: A list of task runs with their details
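To make the parameter formats concrete, here is a hedged sketch of an argument payload; the field names match the schema above, while the values are illustrative only:

```python
from datetime import datetime

# Hypothetical argument payload for get_task_runs; values are examples, not
# defaults from the server.
args = {
    "limit": 20,
    "task_name": "extract",      # substring match: the handler wraps it as %extract%
    "state_type": "failed",      # upper-cased to FAILED by the handler
    "tags": ["prod", "etl"],     # a run must carry ALL listed tags
    "start_time_after": "2024-01-01T00:00:00+00:00",
    "start_time_before": "2024-01-02T00:00:00+00:00",
}

# The handler parses both time bounds with datetime.fromisoformat, so the
# strings must be valid ISO 8601:
after = datetime.fromisoformat(args["start_time_after"])
before = datetime.fromisoformat(args["start_time_before"])
assert after < before
```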

Input Schema

Name               Required  Description  Default
limit              No
offset             No
start_time_after   No
start_time_before  No
state_name         No
state_type         No
tags               No
task_name          No

Implementation Reference

  • The get_task_runs tool handler, decorated with @mcp.tool. It fetches and filters Prefect task runs via the Prefect client, adds a UI link to each run, and returns the result as TextContent.
    # Imports assumed for this excerpt (not shown on the page); module paths
    # follow Prefect 2.x and the MCP Python SDK:
    from datetime import datetime
    from typing import List, Optional, Union

    import mcp.types as types
    from prefect import get_client
    from prefect.client.schemas.filters import (
        TaskRunFilter,
        TaskRunFilterName,
        TaskRunFilterStartTime,
        TaskRunFilterState,
        TaskRunFilterStateName,
        TaskRunFilterStateType,
        TaskRunFilterTags,
    )

    @mcp.tool
    async def get_task_runs(
        limit: Optional[int] = None,
        offset: Optional[int] = None,
        task_name: Optional[str] = None,
        state_type: Optional[str] = None,
        state_name: Optional[str] = None,
        tags: Optional[List[str]] = None,
        start_time_before: Optional[str] = None,
        start_time_after: Optional[str] = None,
    ) -> List[Union[types.TextContent, types.ImageContent, types.EmbeddedResource]]:
        """
        Get a list of task runs with optional filtering.
        
        Args:
            limit: Maximum number of task runs to return
            offset: Number of task runs to skip
            task_name: Filter by task name
            state_type: Filter by state type (e.g., "RUNNING", "COMPLETED", "FAILED")
            state_name: Filter by state name
            tags: Filter by tags
            start_time_before: ISO formatted datetime string; only include runs started before this time
            start_time_after: ISO formatted datetime string; only include runs started after this time
            
        Returns:
            A list of task runs with their details
        """
        async with get_client() as client:
            # Build filter objects
            task_run_filter = None
            filter_components = []
            
            if task_name:
                filter_components.append(
                    TaskRunFilterName(like_=f"%{task_name}%")
                )
            
            # Merge state_type and state_name into a single TaskRunFilterState;
            # appending two separate state filters would let one overwrite the
            # other when the components are combined below.
            if state_type or state_name:
                state_filter_args = {}
                if state_type:
                    state_filter_args["type"] = TaskRunFilterStateType(any_=[state_type.upper()])
                if state_name:
                    state_filter_args["name"] = TaskRunFilterStateName(any_=[state_name])
                filter_components.append(
                    TaskRunFilterState(**state_filter_args)
                )
            
            if tags:
                filter_components.append(
                    TaskRunFilterTags(all_=tags)
                )
            
            if start_time_after or start_time_before:
                start_time_filter_args = {}
                if start_time_after:
                    start_time_filter_args["after_"] = datetime.fromisoformat(start_time_after)
                if start_time_before:
                    start_time_filter_args["before_"] = datetime.fromisoformat(start_time_before)
                filter_components.append(
                    TaskRunFilterStartTime(**start_time_filter_args)
                )
            
            # Combine filters if any exist
            if filter_components:
                # Create TaskRunFilter with the components
                # Note: You may need to adjust this based on how TaskRunFilter combines filters
                task_run_filter = TaskRunFilter()
                for component in filter_components:
                    if isinstance(component, TaskRunFilterName):
                        task_run_filter.name = component
                    elif isinstance(component, TaskRunFilterState):
                        task_run_filter.state = component
                    elif isinstance(component, TaskRunFilterTags):
                        task_run_filter.tags = component
                    elif isinstance(component, TaskRunFilterStartTime):
                        task_run_filter.start_time = component
            
            task_runs = await client.read_task_runs(
                task_run_filter=task_run_filter,
                limit=limit,
                offset=offset or 0
            )
            
            # Add UI links to each task run
            task_runs_result = {
                "task_runs": [
                    {
                        **task_run.model_dump(),
                        "ui_url": get_task_run_url(str(task_run.id))
                    }
                    for task_run in task_runs
                ]
            }
            
            return [types.TextContent(type="text", text=str(task_runs_result))]
  • Helper function used by get_task_runs to generate UI URLs for task runs.
    def get_task_run_url(task_run_id: str) -> str:
        base_url = PREFECT_API_URL.replace("/api", "")
        return f"{base_url}/task-runs/{task_run_id}"
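As a quick illustration of the helper above, here is a self-contained sketch; the PREFECT_API_URL value is a made-up placeholder, since the real one comes from the server's configuration:

```python
# Self-contained sketch of the URL helper; PREFECT_API_URL is a placeholder
# standing in for the module-level configuration value.
PREFECT_API_URL = "https://prefect.example.com/api"

def get_task_run_url(task_run_id: str) -> str:
    # Strip the "/api" suffix to get the UI base URL, then append the task-run path.
    base_url = PREFECT_API_URL.replace("/api", "")
    return f"{base_url}/task-runs/{task_run_id}"

print(get_task_run_url("123e4567-e89b-12d3-a456-426614174000"))
# → https://prefect.example.com/task-runs/123e4567-e89b-12d3-a456-426614174000
```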
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions optional filtering and returns a list with details, but lacks critical information about pagination behavior (beyond limit/offset parameters), rate limits, authentication requirements, error conditions, or whether this is a read-only operation. For a list tool with 8 parameters, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening sentence followed by organized Args and Returns sections. It's appropriately sized for an 8-parameter tool, though the 'Returns' section could be more specific about what 'details' includes. No wasted sentences, but could be slightly more front-loaded with key usage information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description provides basic parameter documentation but lacks important context. It doesn't explain the relationship between state_type and state_name, doesn't specify whether filters are AND/OR combined, and provides minimal information about the return structure. For a filtering/list tool, this leaves the agent with significant uncertainty about how to effectively use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
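The AND/OR question raised above is answered by the implementation: each populated TaskRunFilter field is an independent condition, so filters combine with AND semantics. A dependency-free sketch of that behavior (the matches helper and sample run are illustrative, not part of the tool):

```python
# Pure-Python sketch (no Prefect dependency) of how the populated filter
# fields combine: every supplied field is an independent condition, AND-ed
# together over each candidate task run.
def matches(task_run: dict, *, name_like=None, state_types=None, tags_all=None) -> bool:
    if name_like is not None and name_like not in task_run["name"]:
        return False
    if state_types is not None and task_run["state_type"] not in state_types:
        return False
    if tags_all is not None and not set(tags_all) <= set(task_run["tags"]):
        return False
    return True

run = {"name": "extract_users", "state_type": "FAILED", "tags": ["prod", "etl"]}
assert matches(run, name_like="extract", state_types=["FAILED"], tags_all=["prod"])
assert not matches(run, name_like="extract", state_types=["COMPLETED"])
```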

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description provides valuable parameter semantics by listing all 8 parameters with brief explanations. It clarifies filtering capabilities (task_name, state_type, state_name, tags, time ranges) and pagination parameters (limit, offset). However, it doesn't explain parameter interactions, default behaviors, or format specifics (e.g., what 'ISO formatted datetime string' means for the time parameters).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get a list of task runs with optional filtering,' which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_task_run' (singular) or 'get_task_runs_by_flow_run,' leaving some ambiguity about when to choose this tool over alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'get_task_run' (singular) and 'get_task_runs_by_flow_run' available, there's no indication of when this general list tool is preferred over more specific ones, nor any mention of prerequisites or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
