
MCP Server Airflow Token

get_datasets

Retrieve and list datasets from Apache Airflow deployments with filtering options for DAGs, URI patterns, and pagination controls.

Instructions

List datasets

Input Schema (JSON Schema)

Name          Required  Description  Default
limit         No        -            -
offset        No        -            -
order_by      No        -            -
uri_pattern   No        -            -
dag_ids       No        -            -
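
To make the filters concrete, here is a sketch of calling the handler directly from inside an async context; the filter values are invented, and dag_ids is assumed to take a comma-separated string, matching its Optional[str] type hint in the handler shown below:

    # Hypothetical call: first page of 20 datasets whose URI matches
    # "s3://", restricted to two example DAGs and sorted by URI.
    results = await get_datasets(
        limit=20,
        offset=0,
        order_by="uri",
        uri_pattern="s3://",
        dag_ids="example_dag_1,example_dag_2",
    )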

Implementation Reference

  • The main handler function for the 'get_datasets' tool. It accepts optional parameters for filtering datasets, builds a kwargs dict, calls the underlying Airflow DatasetApi.get_datasets, and returns the response as a TextContent object.
    async def get_datasets(
        limit: Optional[int] = None,
        offset: Optional[int] = None,
        order_by: Optional[str] = None,
        uri_pattern: Optional[str] = None,
        dag_ids: Optional[str] = None,
    ) -> List[Union[types.TextContent, types.ImageContent, types.EmbeddedResource]]:
        # Build parameters dictionary
        kwargs: Dict[str, Any] = {}
        if limit is not None:
            kwargs["limit"] = limit
        if offset is not None:
            kwargs["offset"] = offset
        if order_by is not None:
            kwargs["order_by"] = order_by
        if uri_pattern is not None:
            kwargs["uri_pattern"] = uri_pattern
        if dag_ids is not None:
            kwargs["dag_ids"] = dag_ids
    
        response = dataset_api.get_datasets(**kwargs)
        return [types.TextContent(type="text", text=str(response.to_dict()))]
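
    Since the handler stringifies response.to_dict(), the text an agent receives mirrors Airflow's dataset collection schema. A trimmed sketch of that shape, following the stable Airflow REST API (exact fields vary by Airflow version):

        # Illustrative shape of response.to_dict() for a dataset collection:
        {
            "datasets": [
                {
                    "id": 1,
                    "uri": "s3://bucket/key",
                    "extra": {},
                    "created_at": "2024-01-01T00:00:00+00:00",
                    "updated_at": "2024-01-01T00:00:00+00:00",
                    "consuming_dags": [],
                    "producing_tasks": [],
                }
            ],
            "total_entries": 1,
        }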
  • The get_all_functions() in dataset.py defines and returns the list of dataset-related tools for registration, including the specific tuple for 'get_datasets'.
    def get_all_functions() -> list[tuple[Callable, str, str, bool]]:
        """Return list of (function, name, description, is_read_only) tuples for registration."""
        return [
            (get_datasets, "get_datasets", "List datasets", True),
            (get_dataset, "get_dataset", "Get a dataset by URI", True),
            (get_dataset_events, "get_dataset_events", "Get dataset events", True),
            (create_dataset_event, "create_dataset_event", "Create dataset event", False),
            (get_dag_dataset_queued_event, "get_dag_dataset_queued_event", "Get a queued Dataset event for a DAG", True),
            (get_dag_dataset_queued_events, "get_dag_dataset_queued_events", "Get queued Dataset events for a DAG", True),
            (
                delete_dag_dataset_queued_event,
                "delete_dag_dataset_queued_event",
                "Delete a queued Dataset event for a DAG",
                False,
            ),
            (
                delete_dag_dataset_queued_events,
                "delete_dag_dataset_queued_events",
                "Delete queued Dataset events for a DAG",
                False,
            ),
            (get_dataset_queued_events, "get_dataset_queued_events", "Get queued Dataset events for a Dataset", True),
            (
                delete_dataset_queued_events,
                "delete_dataset_queued_events",
                "Delete queued Dataset events for a Dataset",
                False,
            ),
        ]
  • src/main.py:78-92 (registration)
    The main registration loop in main.py that imports get_all_functions from dataset.py (via get_dataset_functions) and calls app.add_tool for each tool, including 'get_datasets', making it available in the MCP server.
    for api in apis:
        logging.debug(f"Adding API: {api}")
        get_function = APITYPE_TO_FUNCTIONS[APIType(api)]
        try:
            functions = get_function()
        except NotImplementedError:
            continue
    
        # Filter functions for read-only mode if requested
        if read_only:
            functions = filter_functions_for_read_only(functions)
    
        for func, name, description, *_ in functions:
            app.add_tool(func, name=name, description=description)
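
The filter_functions_for_read_only helper referenced above is not shown on this page. A minimal sketch consistent with the (function, name, description, is_read_only) tuples returned by get_all_functions(), though the project's actual implementation may differ:

    from typing import Callable

    def filter_functions_for_read_only(
        functions: list[tuple[Callable, str, str, bool]],
    ) -> list[tuple[Callable, str, str, bool]]:
        # Keep only tools whose is_read_only flag (the 4th tuple element) is True.
        return [entry for entry in functions if entry[3]]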
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'List datasets' gives minimal information: it suggests a read operation but doesn't disclose pagination behavior (despite the limit/offset parameters), authentication requirements, rate limits, error conditions, or the format in which datasets are returned. For a tool with five parameters and no annotation coverage, this is wholly inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just two words. While this represents under-specification rather than ideal conciseness, it's not verbose or poorly structured. Every word earns its place, though more words would be beneficial for this complex tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no annotations, no output schema), the description is completely inadequate. It doesn't explain what the tool returns, how to interpret parameters, behavioral characteristics, or when to use it versus sibling tools. For a list operation with filtering and pagination capabilities, this minimal description leaves the agent guessing about fundamental aspects of tool usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage (every parameter has only a title such as 'Limit' or 'Offset'), the description provides no information about any parameter. It doesn't mention that filtering by uri_pattern or dag_ids is possible, nor does it explain what order_by expects or how limit and offset work together. The description fails entirely to compensate for the schema's lack of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
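
For contrast, a schema fragment that documents intent might look like the following. The property names come from the input table above; the description strings are invented for illustration, and the '-' prefix convention for descending order_by follows the Airflow REST API:

    # Hypothetical JSON-schema properties with per-parameter descriptions.
    PROPERTIES = {
        "limit": {
            "type": "integer",
            "description": "Maximum number of datasets to return per page.",
        },
        "offset": {
            "type": "integer",
            "description": "Number of datasets to skip; combine with limit to paginate.",
        },
        "order_by": {
            "type": "string",
            "description": "Sort field, e.g. 'uri'; prefix with '-' for descending order.",
        },
        "uri_pattern": {
            "type": "string",
            "description": "Substring to match against dataset URIs, e.g. 's3://'.",
        },
        "dag_ids": {
            "type": "string",
            "description": "Comma-separated DAG IDs; restrict results to datasets used by these DAGs.",
        },
    }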

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List datasets' is a tautology that essentially restates the tool name 'get_datasets'. While it indicates a listing/retrieval action, it doesn't specify what kind of datasets, from what system, or what scope. Compared to sibling tools like 'get_dataset' (singular) or 'get_dataset_events', this description fails to distinguish itself meaningfully.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides absolutely no guidance about when to use this tool versus alternatives. With sibling tools like 'get_dataset' (singular), 'get_dataset_events', and 'get_dataset_queued_events', there's no indication of when this list operation is appropriate versus those more specific retrieval tools. No context, prerequisites, or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
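
Taken together, the six dimensions point toward a fuller description. An illustrative rewrite, not the project's actual text:

    # Hypothetical replacement for the two-word "List datasets" description.
    DESCRIPTION = (
        "List datasets registered in the connected Apache Airflow instance, "
        "with optional filtering by URI substring (uri_pattern) or DAG IDs "
        "(dag_ids) and pagination via limit/offset. Read-only; requires a "
        "valid Airflow API token. Returns the dataset collection as text. "
        "Use get_dataset to fetch a single dataset by URI, or "
        "get_dataset_events for event history."
    )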
