get_dag_run

Retrieve detailed information about a specific Apache Airflow DAG run execution in Amazon MWAA environments, including state and timing data.

Instructions

Get details about a specific DAG run.

Args:
    environment_name: Name of the MWAA environment
    dag_id: The DAG ID
    dag_run_id: The DAG run ID

Returns: Dictionary containing DAG run details including state and timing
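As a hedged illustration of what this means in practice: the tool queries the Airflow stable REST API's single-DAG-run endpoint, and the returned dictionary mirrors that API's DAG-run object. The path helper below is a sketch of the URL construction, and the sample values are invented (field names follow the public Airflow 2.x API).

```python
def dag_run_path(dag_id: str, dag_run_id: str) -> str:
    """Build the Airflow REST API path for one DAG run (sketch)."""
    return f"/dags/{dag_id}/dagRuns/{dag_run_id}"


# Plausible shape of the returned dictionary; sample values are made up.
sample_response = {
    "dag_id": "example_dag",
    "dag_run_id": "manual__2024-01-01T00:00:00+00:00",
    "state": "success",  # e.g. queued | running | success | failed
    "start_date": "2024-01-01T00:00:05+00:00",
    "end_date": "2024-01-01T00:02:10+00:00",
}

print(dag_run_path("example_dag", sample_response["dag_run_id"]))
```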

Input Schema

Name               Required   Description   Default
environment_name   Yes        -             -
dag_id             Yes        -             -
dag_run_id         Yes        -             -

Output Schema

No fields documented.

Implementation Reference

  • The actual implementation of the get_dag_run tool logic, which interacts with the Airflow API.
    async def get_dag_run(
        self, environment_name: str, dag_id: str, dag_run_id: str
    ) -> Dict[str, Any]:
        """Get DAG run details via Airflow API."""
        return self._invoke_airflow_api(
            environment_name, "GET", f"/dags/{dag_id}/dagRuns/{dag_run_id}"
        )
  • The registration of the get_dag_run MCP tool, which acts as a wrapper calling the implementation in tools.py.
    @mcp.tool(name="get_dag_run")
    async def get_dag_run(
        environment_name: str,
        dag_id: str,
        dag_run_id: str,
    ) -> Dict[str, Any]:
        """Get details about a specific DAG run.
    
        Args:
            environment_name: Name of the MWAA environment
            dag_id: The DAG ID
            dag_run_id: The DAG run ID
    
        Returns:
            Dictionary containing DAG run details including state and timing
        """
        return await tools.get_dag_run(environment_name, dag_id, dag_run_id)
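Neither snippet documents the error path the review below criticizes. A hedged sketch of how a caller might guard against a missing run: Airflow's stable REST API reports errors as RFC 7807 problem documents (with `status` and `title` fields), so a thin parser can surface a not-found case explicitly. This helper is an assumption about the API contract, not part of the server.

```python
from typing import Any, Dict


class DagRunNotFound(Exception):
    """Raised when the Airflow API reports an unknown DAG run (sketch)."""


def parse_dag_run(response: Dict[str, Any]) -> Dict[str, Any]:
    # Airflow error payloads carry a "status" field (e.g. 404 when the
    # dag_run_id is unknown); success payloads carry the run itself.
    if response.get("status") == 404:
        raise DagRunNotFound(response.get("title", "DAG run not found"))
    return response
```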
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the return format (a dictionary with state/timing), but this is redundant given the tool's output schema. Critically, it omits error behavior (what happens if the DAG run doesn't exist?) and authentication requirements, and it never explicitly confirms that the operation is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear Args and Returns sections. Front-loaded with the core purpose. Slightly tautological param descriptions ('The DAG ID') but overall compact and readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic usage: documents all required params (necessary due to 0% schema coverage) and mentions return content. However, given the lack of annotations, it should disclose error handling and authorization needs, which are missing. Also lacks explicit differentiation from sibling list/trigger tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% description coverage. The Args section in the description compensates by documenting all 3 parameters (environment_name, dag_id, dag_run_id), providing at least minimal semantic context for each (e.g., 'Name of the MWAA environment').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb ('Get') and resource ('DAG run'), and uses 'specific' to implicitly distinguish it from the sibling tool 'list_dag_runs'. However, it does not explicitly contrast with, or even name, siblings such as 'trigger_dag_run'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus 'list_dag_runs' (which retrieves multiple runs), and does not state prerequisites (e.g., already knowing the specific dag_run_id).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
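The missing when/when-not guidance could be stated as simply as the hypothetical routing rule below. The tool names are this server's; the helper itself is illustrative, not part of the server.

```python
from typing import Optional


def pick_dag_run_tool(dag_run_id: Optional[str]) -> str:
    """Illustrative selection rule for read operations.

    Use get_dag_run when the run id is already known; fall back to
    list_dag_runs to enumerate runs first. (trigger_dag_run is the
    sibling for *starting* a run, not reading one.)
    """
    return "get_dag_run" if dag_run_id else "list_dag_runs"
```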
