
Superset MCP Integration

by aptro

superset_chart_create

Create visualizations in Apache Superset by specifying chart type, data source, and parameters to generate charts like bar, line, or pie charts for data analysis.

Instructions

Create a new chart in Superset

Makes a request to the /api/v1/chart/ POST endpoint to create a new visualization.

Args:

  • slice_name: Name/title of the chart
  • datasource_id: ID of the dataset or SQL table
  • datasource_type: Type of datasource ('table' for datasets, 'query' for SQL)
  • viz_type: Visualization type (e.g., 'bar', 'line', 'pie', 'big_number')
  • params: Visualization parameters including metrics, groupby, time_range, etc.

Returns: A dictionary with the created chart information including its ID

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| slice_name | Yes | Name/title of the chart | |
| datasource_id | Yes | ID of the dataset or SQL table | |
| datasource_type | Yes | 'table' for datasets, 'query' for SQL | |
| viz_type | Yes | Visualization type (e.g., 'bar', 'line', 'pie', 'big_number') | |
| params | Yes | Visualization parameters (metrics, groupby, time_range, etc.) | |
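To make the schema concrete, here is a minimal, hypothetical set of arguments for this tool. The dataset ID and the params values are illustrative only; the keys accepted inside params vary by viz_type:

```python
# Hypothetical input for superset_chart_create; the dataset ID (12) and
# params values are made up for illustration, not from a real instance.
chart_args = {
    "slice_name": "Monthly Sales",
    "datasource_id": 12,           # ID of an existing dataset
    "datasource_type": "table",    # 'table' for datasets, 'query' for SQL
    "viz_type": "bar",
    "params": {
        "metrics": ["count"],
        "groupby": ["region"],
        "time_range": "Last quarter",
    },
}

# All five fields are required by the input schema
required = {"slice_name", "datasource_id", "datasource_type", "viz_type", "params"}
assert required <= chart_args.keys()
```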

Implementation Reference

  • main.py:659-694 (handler)
    The core handler function for the 'superset_chart_create' tool. It constructs a payload from the input parameters and makes a POST request to Superset's /api/v1/chart/ endpoint to create a new chart (slice). The function uses decorators for authentication (@requires_auth) and error handling (@handle_api_errors). The @mcp.tool() decorator registers this function as an MCP tool.
    @mcp.tool()
    @requires_auth
    @handle_api_errors
    async def superset_chart_create(
        ctx: Context,
        slice_name: str,
        datasource_id: int,
        datasource_type: str,
        viz_type: str,
        params: Dict[str, Any],
    ) -> Dict[str, Any]:
        """
        Create a new chart in Superset
    
        Makes a request to the /api/v1/chart/ POST endpoint to create a new visualization.
    
        Args:
            slice_name: Name/title of the chart
            datasource_id: ID of the dataset or SQL table
            datasource_type: Type of datasource ('table' for datasets, 'query' for SQL)
            viz_type: Visualization type (e.g., 'bar', 'line', 'pie', 'big_number', etc.)
            params: Visualization parameters including metrics, groupby, time_range, etc.
    
        Returns:
            A dictionary with the created chart information including its ID
        """
        payload = {
            "slice_name": slice_name,
            "datasource_id": datasource_id,
            "datasource_type": datasource_type,
            "viz_type": viz_type,
            "params": json.dumps(params),
        }
    
        return await make_api_request(ctx, "post", "/api/v1/chart/", data=payload)
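One detail worth noting in the handler above: the nested params dict is serialized with json.dumps before the POST, because the /api/v1/chart/ endpoint expects params as a JSON string rather than a nested object. A small sketch of that step, with illustrative values:

```python
import json

# Illustrative chart parameters (not from a real Superset instance)
params = {"metrics": ["count"], "groupby": ["region"], "time_range": "Last quarter"}

payload = {
    "slice_name": "Monthly Sales",
    "datasource_id": 12,
    "datasource_type": "table",
    "viz_type": "bar",
    "params": json.dumps(params),  # nested dict becomes a JSON string
}

# The string round-trips back to the original dict
assert json.loads(payload["params"]) == params
```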
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the API endpoint ('/api/v1/chart/ POST'), implying a write operation, but doesn't specify authentication requirements, rate limits, error conditions, or what happens on failure. It states the return format ('dictionary with created chart information'), but lacks details on response structure or side effects. For a creation tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, API endpoint detail, and organized parameter explanations. It uses bullet-like formatting for Args and Returns sections, making it scannable. However, the API endpoint sentence is somewhat redundant with the purpose, and the parameter descriptions could be more concise (e.g., combining datasource_id and datasource_type). Overall, it's efficient but not perfectly tight.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, nested object, no output schema, no annotations), the description is moderately complete. It covers all parameters and the return type, but lacks authentication requirements, error handling, and detailed behavioral context. Without annotations or output schema, the agent must infer missing operational details. It's adequate for basic use but insufficient for robust integration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides meaningful explanations for all 5 parameters: slice_name ('Name/title of the chart'), datasource_id ('ID of the dataset or SQL table'), datasource_type ('Type of datasource'), viz_type ('Visualization type'), and params ('Visualization parameters including metrics, groupby, time_range, etc.'). This adds crucial context beyond the bare schema, though it could elaborate on the structure of the params object or provide examples of valid viz_type values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new chart') and resource ('in Superset'), making the purpose immediately understandable. It distinguishes from siblings like superset_chart_list, superset_chart_update, and superset_chart_delete by specifying creation rather than retrieval, modification, or deletion. However, it doesn't explicitly contrast with superset_dashboard_create or other visualization creation tools, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing authentication, existing datasets), when not to use it (e.g., for updating existing charts), or direct alternatives among the many sibling tools like superset_dashboard_create or superset_saved_query_create. The agent must infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
