Glama / nikhil-ganage

MCP Server Airflow Token

list_task_instances

Retrieve task instances for Apache Airflow DAG runs by specifying DAG ID and run ID, with optional filters for execution dates, states, durations, and other parameters to monitor workflow execution.

Instructions

List task instances by DAG ID and DAG run ID

Input Schema

Name                 Required
-------------------  --------
dag_id               Yes
dag_run_id           Yes
execution_date_gte   No
execution_date_lte   No
start_date_gte       No
start_date_lte       No
end_date_gte         No
end_date_lte         No
updated_at_gte       No
updated_at_lte       No
duration_gte         No
duration_lte         No
state                No
pool                 No
queue                No
limit                No
offset               No

The schema provides no parameter descriptions or defaults.
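For concreteness, here is a hypothetical set of arguments for this tool (the DAG ID, run ID, and filter values are illustrative, not taken from this page; per the Airflow REST API, the date-range filters take ISO-8601 timestamps and limit/offset paginate the results):

```python
# Hypothetical arguments for a list_task_instances call.
# Date-range filters (*_gte / *_lte) take ISO-8601 timestamps, as in the
# Airflow REST API; `limit` and `offset` paginate the result set.
arguments = {
    "dag_id": "example_dag",                            # required
    "dag_run_id": "manual__2024-01-01T00:00:00+00:00",  # required
    "start_date_gte": "2024-01-01T00:00:00+00:00",      # optional filter
    "state": ["success", "failed"],                     # optional filter
    "limit": 50,                                        # page size
    "offset": 0,                                        # page start
}
```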

Implementation Reference

  • The main handler function for the 'list_task_instances' tool. It constructs filter parameters and calls the Airflow TaskInstanceApi to retrieve task instances, returning the response as text content.
    # Imports needed by this excerpt (not shown in the original snippet);
    # `task_instance_api` is assumed to be a configured TaskInstanceApi client.
    from typing import Any, Dict, List, Optional, Union

    import mcp.types as types

    async def list_task_instances(
        dag_id: str,
        dag_run_id: str,
        execution_date_gte: Optional[str] = None,
        execution_date_lte: Optional[str] = None,
        start_date_gte: Optional[str] = None,
        start_date_lte: Optional[str] = None,
        end_date_gte: Optional[str] = None,
        end_date_lte: Optional[str] = None,
        updated_at_gte: Optional[str] = None,
        updated_at_lte: Optional[str] = None,
        duration_gte: Optional[float] = None,
        duration_lte: Optional[float] = None,
        state: Optional[List[str]] = None,
        pool: Optional[List[str]] = None,
        queue: Optional[List[str]] = None,
        limit: Optional[int] = None,
        offset: Optional[int] = None,
    ) -> List[Union[types.TextContent, types.ImageContent, types.EmbeddedResource]]:
        # Build parameters dictionary
        kwargs: Dict[str, Any] = {}
        if execution_date_gte is not None:
            kwargs["execution_date_gte"] = execution_date_gte
        if execution_date_lte is not None:
            kwargs["execution_date_lte"] = execution_date_lte
        if start_date_gte is not None:
            kwargs["start_date_gte"] = start_date_gte
        if start_date_lte is not None:
            kwargs["start_date_lte"] = start_date_lte
        if end_date_gte is not None:
            kwargs["end_date_gte"] = end_date_gte
        if end_date_lte is not None:
            kwargs["end_date_lte"] = end_date_lte
        if updated_at_gte is not None:
            kwargs["updated_at_gte"] = updated_at_gte
        if updated_at_lte is not None:
            kwargs["updated_at_lte"] = updated_at_lte
        if duration_gte is not None:
            kwargs["duration_gte"] = duration_gte
        if duration_lte is not None:
            kwargs["duration_lte"] = duration_lte
        if state is not None:
            kwargs["state"] = state
        if pool is not None:
            kwargs["pool"] = pool
        if queue is not None:
            kwargs["queue"] = queue
        if limit is not None:
            kwargs["limit"] = limit
        if offset is not None:
            kwargs["offset"] = offset
    
        response = task_instance_api.get_task_instances(dag_id=dag_id, dag_run_id=dag_run_id, **kwargs)
        return [types.TextContent(type="text", text=str(response.to_dict()))]
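  • As an aside, the long run of None-checks in the handler is equivalent to dropping unset parameters from a dictionary. A compact sketch of the same filtering logic (an illustration, not the project's code):

    ```python
    def build_kwargs(**params):
        """Drop None-valued entries, mirroring the handler's None-checks."""
        return {k: v for k, v in params.items() if v is not None}

    # Unset (None) filters are omitted, so the Airflow client only receives
    # the parameters the caller actually supplied.
    kwargs = build_kwargs(
        execution_date_gte="2024-01-01T00:00:00+00:00",
        duration_gte=None,  # dropped
        state=["running"],
        limit=100,
    )
    ```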
  • Local registration of the 'list_task_instances' tool as part of the get_all_functions list, which provides the function, name, description, and read-only flag for MCP tool registration.
    def get_all_functions() -> list[tuple[Callable, str, str, bool]]:
        """Return list of (function, name, description, is_read_only) tuples for registration."""
        return [
            (get_task_instance, "get_task_instance", "Get a task instance by DAG ID, task ID, and DAG run ID", True),
            (list_task_instances, "list_task_instances", "List task instances by DAG ID and DAG run ID", True),
            (
                update_task_instance,
                "update_task_instance",
                "Update a task instance by DAG ID, DAG run ID, and task ID",
                False,
            ),
        ]
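  • filter_functions_for_read_only is referenced in the entrypoint but not shown on this page; given the (function, name, description, is_read_only) tuple shape, it presumably keeps only entries whose final flag is True. A hypothetical sketch:

    ```python
    from typing import Callable

    def filter_functions_for_read_only(
        functions: list[tuple[Callable, str, str, bool]],
    ) -> list[tuple[Callable, str, str, bool]]:
        """Hypothetical sketch: keep only tools flagged as read-only."""
        return [entry for entry in functions if entry[3]]

    # `print` stands in for the real handlers in this illustration.
    tools = [
        (print, "get_task_instance", "Get a task instance", True),
        (print, "update_task_instance", "Update a task instance", False),
    ]
    read_only_tools = filter_functions_for_read_only(tools)
    ```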
  • src/main.py:56-99 (registration)
    Global registration loop in the main CLI entrypoint, where the tools returned by get_all_functions (including list_task_instances, imported from the taskinstance module) are added to the MCP app via app.add_tool.
    @click.command()
    @click.option(
        "--transport",
        type=click.Choice(["stdio", "sse"]),
        default="stdio",
        help="Transport type",
    )
    @click.option(
        "--apis",
        type=click.Choice([api.value for api in APIType]),
        default=[api.value for api in APIType],
        multiple=True,
        help="APIs to run, default is all",
    )
    @click.option(
        "--read-only",
        is_flag=True,
        help="Only expose read-only tools (GET operations, no CREATE/UPDATE/DELETE)",
    )
    def main(transport: str, apis: list[str], read_only: bool) -> None:
        from src.server import app
    
        for api in apis:
            logging.debug(f"Adding API: {api}")
            get_function = APITYPE_TO_FUNCTIONS[APIType(api)]
            try:
                functions = get_function()
            except NotImplementedError:
                continue
    
            # Filter functions for read-only mode if requested
            if read_only:
                functions = filter_functions_for_read_only(functions)
    
            for func, name, description, *_ in functions:
                app.add_tool(func, name=name, description=description)
    
        if transport == "sse":
            logging.debug("Starting MCP server for Apache Airflow with SSE transport")
            app.run(transport="sse")
        else:
            logging.debug("Starting MCP server for Apache Airflow with stdio transport")
            app.run(transport="stdio")
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral insight. It implies a read-only list operation but doesn't disclose pagination behavior (despite 'limit' and 'offset' parameters), sorting, rate limits, authentication needs, or what the output looks like. This leaves significant gaps for agent understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence that directly states the tool's core function. There's no wasted language or unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 17 parameters, no annotations, and no output schema, the description is severely incomplete. It doesn't explain the filtering logic, return format, pagination, or how the various date parameters interact. The agent would struggle to use this effectively without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but only mentions two parameters ('dag_id' and 'dag_run_id'). It ignores the other 15 parameters including date ranges, state filters, and pagination controls. This provides inadequate guidance for a tool with 17 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('task instances'), specifying filtering by 'DAG ID and DAG run ID'. It distinguishes from siblings like 'get_task_instance' (singular) and 'clear_task_instances' (destructive), but doesn't explicitly differentiate from other list-like tools such as 'get_dag_runs' or 'get_tasks'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention when to prefer 'get_task_instance' for a single instance or how it relates to 'get_dag_runs' for broader workflow context. The description only states what it does, not when it's appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
