Superset MCP Integration

by aptro

superset_tag_objects

Retrieve all tagged objects from Apache Superset via the /api/v1/tag/get_objects/ endpoint, grouped by tag for organized access.

Instructions

Get objects associated with tags

Makes a request to the /api/v1/tag/get_objects/ endpoint to retrieve all objects that have tags assigned to them.

Returns: A dictionary with tagged objects grouped by tag
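No output schema is published, but the docstring's "grouped by tag" wording suggests a mapping from tag name to a list of tagged objects. A minimal sketch of consuming such a result — the payload keys here are illustrative assumptions, not the documented Superset API contract:

```python
from typing import Any, Dict, List, Tuple

def index_by_tag(
    result: Dict[str, List[Dict[str, Any]]],
) -> List[Tuple[str, Dict[str, Any]]]:
    """Flatten a {tag: [objects]} mapping into (tag, object) pairs."""
    return [(tag, obj) for tag, objects in result.items() for obj in objects]

# Hypothetical payload -- field names are illustrative only.
sample = {
    "finance": [{"id": 1, "type": "dashboard", "name": "Revenue"}],
    "ops": [{"id": 2, "type": "chart", "name": "Latency"}],
}

pairs = index_by_tag(sample)
```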

Input Schema


No arguments

Implementation Reference

  • The handler function implementing the superset_tag_objects tool. It retrieves objects associated with tags by making a GET request to the Superset API endpoint /api/v1/tag/get_objects/ using the make_api_request helper.
    async def superset_tag_objects(ctx: Context) -> Dict[str, Any]:
        """
        Get objects associated with tags
    
        Makes a request to the /api/v1/tag/get_objects/ endpoint to retrieve
        all objects that have tags assigned to them.
    
        Returns:
            A dictionary with tagged objects grouped by tag
        """
        return await make_api_request(ctx, "get", "/api/v1/tag/get_objects/")
  • main.py:1559-1559 (registration)
    The @mcp.tool() decorator registers the superset_tag_objects function as an MCP tool.
    @mcp.tool()
  • The make_api_request helper function used by the tool to perform authenticated API calls to Superset, handling token refresh, CSRF, and error management.
    async def make_api_request(
        ctx: Context,
        method: str,
        endpoint: str,
        data: Dict[str, Any] = None,
        params: Dict[str, Any] = None,
        auto_refresh: bool = True,
    ) -> Dict[str, Any]:
        """
        Helper function to make API requests to Superset
    
        Args:
            ctx: MCP context
            method: HTTP method (get, post, put, delete)
            endpoint: API endpoint (without base URL)
            data: Optional JSON payload for POST/PUT requests
            params: Optional query parameters
            auto_refresh: Whether to auto-refresh token on 401
        """
        superset_ctx: SupersetContext = ctx.request_context.lifespan_context
        client = superset_ctx.client
    
        # For non-GET requests, make sure we have a CSRF token
        if method.lower() != "get" and not superset_ctx.csrf_token:
            await get_csrf_token(ctx)
    
        async def make_request() -> httpx.Response:
            headers = {}
    
            # Add CSRF token for non-GET requests
            if method.lower() != "get" and superset_ctx.csrf_token:
                headers["X-CSRFToken"] = superset_ctx.csrf_token
    
            if method.lower() == "get":
                return await client.get(endpoint, params=params)
            elif method.lower() == "post":
                return await client.post(
                    endpoint, json=data, params=params, headers=headers
                )
            elif method.lower() == "put":
                return await client.put(endpoint, json=data, headers=headers)
            elif method.lower() == "delete":
                return await client.delete(endpoint, headers=headers)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")
    
        # Use auto_refresh if requested
        response = (
            await with_auto_refresh(ctx, make_request)
            if auto_refresh
            else await make_request()
        )
    
        if response.status_code not in [200, 201]:
            return {
                "error": f"API request failed: {response.status_code} - {response.text}"
            }
    
        return response.json()
  • The requires_auth decorator applied to the tool, ensuring authentication before execution.
    def requires_auth(
        func: Callable[..., Awaitable[Dict[str, Any]]],
    ) -> Callable[..., Awaitable[Dict[str, Any]]]:
        """Decorator to check authentication before executing a function"""
    
        @wraps(func)
        async def wrapper(ctx: Context, *args, **kwargs) -> Dict[str, Any]:
            superset_ctx: SupersetContext = ctx.request_context.lifespan_context
    
            if not superset_ctx.access_token:
                return {"error": "Not authenticated. Please authenticate first."}
    
            return await func(ctx, *args, **kwargs)
    
        return wrapper
  • The handle_api_errors decorator applied to the tool for consistent error handling.
    def handle_api_errors(
        func: Callable[..., Awaitable[Dict[str, Any]]],
    ) -> Callable[..., Awaitable[Dict[str, Any]]]:
        """Decorator to handle API errors in a consistent way"""
    
        @wraps(func)
        async def wrapper(ctx: Context, *args, **kwargs) -> Dict[str, Any]:
            try:
                return await func(ctx, *args, **kwargs)
            except Exception as e:
                # Extract function name for better error context
                function_name = func.__name__
                return {"error": f"Error in {function_name}: {str(e)}"}
    
        return wrapper
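The two decorators compose straightforwardly: with handle_api_errors on the outside, both the auth short-circuit and any raised exception surface as an {"error": ...} dictionary rather than propagating to the caller. A self-contained sketch of that composition — the Context object here is a minimal stand-in for the real MCP context, and demo_tool is a hypothetical tool, not part of the server:

```python
import asyncio
from functools import wraps
from types import SimpleNamespace
from typing import Any, Dict

def requires_auth(func):
    @wraps(func)
    async def wrapper(ctx, *args, **kwargs) -> Dict[str, Any]:
        superset_ctx = ctx.request_context.lifespan_context
        if not superset_ctx.access_token:
            return {"error": "Not authenticated. Please authenticate first."}
        return await func(ctx, *args, **kwargs)
    return wrapper

def handle_api_errors(func):
    @wraps(func)
    async def wrapper(ctx, *args, **kwargs) -> Dict[str, Any]:
        try:
            return await func(ctx, *args, **kwargs)
        except Exception as e:
            return {"error": f"Error in {func.__name__}: {str(e)}"}
    return wrapper

@handle_api_errors
@requires_auth
async def demo_tool(ctx) -> Dict[str, Any]:
    return {"result": "ok"}

def make_ctx(token):
    # Stand-in for ctx.request_context.lifespan_context.access_token
    return SimpleNamespace(
        request_context=SimpleNamespace(
            lifespan_context=SimpleNamespace(access_token=token)
        )
    )

denied = asyncio.run(demo_tool(make_ctx(None)))
allowed = asyncio.run(demo_tool(make_ctx("token-123")))
```

Because both decorators use functools.wraps, handle_api_errors still reports the original function name ("demo_tool") in its error messages even through the requires_auth layer.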
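Note that make_api_request converts non-2xx responses into error dictionaries instead of raising, so callers always receive a dict. A sketch of that contract in isolation, using a fake response object in place of httpx.Response:

```python
from typing import Any, Dict, Optional

class FakeResponse:
    """Minimal stand-in for httpx.Response."""

    def __init__(
        self,
        status_code: int,
        payload: Optional[Dict[str, Any]] = None,
        text: str = "",
    ):
        self.status_code = status_code
        self._payload = payload if payload is not None else {}
        self.text = text

    def json(self) -> Dict[str, Any]:
        return self._payload

def normalize(response: FakeResponse) -> Dict[str, Any]:
    # Mirrors make_api_request: only 200/201 count as success.
    if response.status_code not in (200, 201):
        return {
            "error": f"API request failed: {response.status_code} - {response.text}"
        }
    return response.json()

ok = normalize(FakeResponse(200, {"result": []}))
err = normalize(FakeResponse(401, text="Unauthorized"))
```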

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data (implying read-only) and returns a dictionary grouped by tag, but lacks details on permissions, rate limits, error handling, or whether it's idempotent. The description adds some context about the return format but is insufficient for a mutation-free tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: it starts with the core purpose, mentions the API endpoint, and describes the return value. Each sentence adds value without redundancy. However, it could be slightly more front-loaded by leading with the return format for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, no annotations, and no output schema, the description is minimally adequate. It explains what the tool does and the return format, but lacks behavioral details like pagination, authentication needs, or error cases. For a simple retrieval tool, it meets basic needs but leaves gaps in operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not mention any parameters, which is appropriate. It adds value by explaining the return structure ('dictionary with tagged objects grouped by tag'), compensating for the lack of an output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get objects associated with tags' and 'retrieve all objects that have tags assigned to them.' It specifies the verb ('get', 'retrieve') and resource ('objects associated with tags'), but does not explicitly differentiate it from sibling tools like 'superset_tag_list' or 'superset_tag_get_by_id', which focus on tags themselves rather than tagged objects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the API endpoint but does not specify use cases, prerequisites, or exclusions. For example, it does not clarify if this is for bulk retrieval or how it differs from other tag-related tools like 'superset_tag_object_add' or 'superset_tag_object_remove'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
