
Apache Airflow MCP Server

by madamak

airflow_list_task_instances

Read-only · Idempotent

Retrieve task instances for a specific DAG run to monitor execution status and attempt counts, and to access per-attempt log URLs. Filter by state or task IDs and paginate results.

Instructions

List task instances for a DAG run (state, try_number, per-attempt log URL).

Parameters

  • instance: Instance key (optional)

  • ui_url: Airflow UI URL to resolve instance/dag/dag_run (optional)

  • dag_id: DAG identifier

  • dag_run_id: DAG run identifier

  • limit: Max results (default 100; accepts int/float/str, coerced to a non-negative int, fractional values truncated; see the sketch after this list)

  • offset: Offset for pagination (default 0; same coercion rules as limit)

  • state: Optional list of task states (case-insensitive). When provided, only matching states are returned.

  • task_ids: Optional list of task identifiers to include.
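
A minimal sketch of the limit/offset coercion described in the parameter list (an assumption about the behavior, not the server's actual code; in particular, falling back to the default on unparseable input and clamping negatives to 0 are guesses):

    def coerce_non_negative_int(value, default):
        # Accepts int, float, or str (e.g. 25, 25.7, "25").
        # Fractional values are truncated; negatives are clamped to 0 (assumed).
        if value is None:
            return default
        try:
            n = int(float(value))
        except (TypeError, ValueError):
            return default  # fallback on bad input is an assumption
        return max(n, 0)

    limit = coerce_non_negative_int("25.7", default=100)   # -> 25
    offset = coerce_non_negative_int(-3, default=0)        # -> 0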

Returns

  • Response dict: { "task_instances": [{ "task_id", "state", "try_number", "ui_url" }], "count": int, "total_entries"?: int, "filters"?: { "state": [...], "task_ids": [...] }, "request_id": str }

  • Raises: ToolError with compact JSON payload (code, message, request_id, optional context)
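
A hypothetical call and response, with illustrative IDs and values (the shapes follow the Returns bullet above; none of the specific values come from the server itself):

    # Arguments for airflow_list_task_instances (values are made up):
    arguments = {
        "dag_id": "example_etl",
        "dag_run_id": "manual__2024-01-01T00:00:00+00:00",
        "state": ["failed", "up_for_retry"],
        "limit": 50,
        "offset": 0,
    }

    # Response shape per the Returns section (illustrative content):
    response = {
        "task_instances": [
            {
                "task_id": "extract",
                "state": "failed",
                "try_number": 2,
                "ui_url": "https://airflow.example.com/dags/example_etl/grid",
            }
        ],
        "count": 1,
        "total_entries": 1,
        "filters": {"state": ["failed", "up_for_retry"]},
        "request_id": "req-123",
    }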

Input Schema

Name        Required  Description  Default
instance    No
ui_url      No
dag_id      No
dag_run_id  No
limit       No
offset      No
state       No
task_ids    No

(The schema provides no parameter descriptions or defaults; see Parameters above.)

Output Schema

No fields are defined.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context: it specifies the return format (list with state, try_number, UI URL), mentions pagination behavior (limit/offset defaults and coercion rules), and notes error handling (ToolError with JSON payload). This goes beyond annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a purpose statement, parameter details in bullet points, and return/error information. Every sentence adds value: the first sentence sets context, parameters clarify usage, and returns/errors inform outcomes. No redundant or vague language—it's front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no schema descriptions) and rich annotations, the description is highly complete. It covers purpose, parameter semantics, behavioral traits (pagination, filtering, error handling), and output details (response structure). With output schema implicitly handled by the return description, no significant gaps remain—it provides all needed context for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries full burden. It provides clear semantics for all 8 parameters: explains optional vs. required (e.g., 'optional' for instance/ui_url), defines purpose (e.g., 'DAG identifier'), and adds behavioral details like default values, coercion rules for limit/offset, and filtering logic for state/task_ids. This compensates well for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('task instances for a DAG run') with specific attributes (state, try_number, log URL). It is distinguishable from siblings like 'airflow_get_task_instance' (singular) and 'airflow_list_dag_runs' (different resource), though it doesn't explicitly contrast with them. The purpose is specific and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing task instances within a specific DAG run, but doesn't explicitly state when to use this tool versus alternatives like 'airflow_get_task_instance' (for a single instance) or 'airflow_list_dag_runs' (for runs instead of tasks). No exclusions or prerequisites are mentioned, leaving some ambiguity in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
