
metabase-mcp

by voducdan

create_model

Create a curated dataset model in Metabase by defining an SQL query and optional column metadata, description, and collection placement.

Instructions

Create a new model in Metabase.

A model is a special type of saved question that acts as a curated dataset. Models can define metadata for their columns and serve as building blocks for other questions.

Args:
  name: Name of the model.
  database_id: ID of the database to query.
  query: SQL query that defines the model.
  description: Optional description of the model.
  collection_id: Optional collection to place the model in.
  result_metadata: Optional list of column metadata dicts. Each dict can include keys like "name", "display_name", "base_type", "semantic_type", "description", and "field_ref".
  visualization_settings: Optional visualization configuration.

Returns: The created model object.
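To make the argument shapes concrete, here is a hypothetical set of arguments for a create_model call. The table name, column names, IDs, and query are illustrative assumptions, not values from the server; only the key names and types follow the Args list above.

```python
# Hypothetical arguments for a create_model invocation.
# All concrete values (query text, IDs, column names) are made up for illustration.
example_args = {
    "name": "Active Customers",
    "database_id": 2,
    "query": "SELECT id, email, created_at FROM customers WHERE active = true",
    "description": "Curated dataset of active customers.",
    "collection_id": 15,
    "result_metadata": [
        {
            "name": "email",
            "display_name": "Email",
            "base_type": "type/Text",
            "semantic_type": "type/Email",
            "description": "Customer contact email",
        }
    ],
}
```

Only name, database_id, and query are required; the remaining keys may be omitted entirely.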

Input Schema

Name                    Required  Description  Default
name                    Yes       -            -
database_id             Yes       -            -
query                   Yes       -            -
description             No        -            -
collection_id           No        -            -
result_metadata         No        -            -
visualization_settings  No        -            -
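A plausible reconstruction of the JSON Schema implied by the table above, expressed as a Python dict. The required list comes from the table; the property types are inferred from the handler signature shown below, and per-property descriptions are omitted because the schema provides none. This is an assumption, not the schema the server actually publishes.

```python
# Reconstructed input schema (assumed, not copied from the server).
# Types inferred from the create_model handler's type annotations.
input_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "database_id": {"type": "integer"},
        "query": {"type": "string"},
        "description": {"type": "string"},
        "collection_id": {"type": "integer"},
        "result_metadata": {"type": "array", "items": {"type": "object"}},
        "visualization_settings": {"type": "object"},
    },
    "required": ["name", "database_id", "query"],
}
```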

Output Schema

No output schema fields are documented.

Implementation Reference

  • The `create_model` tool handler function. It creates a new model in Metabase by POSTing to /api/card with type 'model', supporting optional description, collection_id, result_metadata, and visualization_settings parameters.
    # Excerpt from server.py. The surrounding module is assumed to provide
    # `mcp` (the MCP server instance), `Context`, `ToolError`, `metabase_client`,
    # and `from typing import Any`.
    @mcp.tool
    async def create_model(
        name: str,
        database_id: int,
        query: str,
        ctx: Context,
        description: str | None = None,
        collection_id: int | None = None,
        result_metadata: list[dict[str, Any]] | None = None,
        visualization_settings: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """
        Create a new model in Metabase.
    
        A model is a special type of saved question that acts as a curated dataset.
        Models can define metadata for their columns and serve as building blocks
        for other questions.
    
        Args:
            name: Name of the model.
            database_id: ID of the database to query.
            query: SQL query that defines the model.
            description: Optional description of the model.
            collection_id: Optional collection to place the model in.
            result_metadata: Optional list of column metadata dicts. Each dict can include
                keys like "name", "display_name", "base_type", "semantic_type",
                "description", and "field_ref".
            visualization_settings: Optional visualization configuration.
    
        Returns:
            The created model object.
        """
        try:
            await ctx.info(f"Creating new model '{name}' in database {database_id}")
    
            payload: dict[str, Any] = {
                "name": name,
                "type": "model",
                "database_id": database_id,
                "dataset_query": {
                    "database": database_id,
                    "type": "native",
                    "native": {"query": query},
                },
                "display": "table",
                "visualization_settings": visualization_settings or {},
            }
    
            if description:
                payload["description"] = description
            if collection_id is not None:
                payload["collection_id"] = collection_id
                await ctx.debug(f"Model will be placed in collection {collection_id}")
            if result_metadata is not None:
                payload["result_metadata"] = result_metadata
                await ctx.debug(f"Model will have {len(result_metadata)} column metadata entries")
    
            result = await metabase_client.request("POST", "/card", json=payload)
            await ctx.info(f"Successfully created model with ID {result.get('id')}")
    
            return result
        except Exception as e:
            error_msg = f"Error creating model: {e}"
            await ctx.error(error_msg)
            raise ToolError(error_msg) from e
  • Function signature and parameter type definitions for the create_model tool (name, database_id, query, ctx, description, collection_id, result_metadata, visualization_settings).
    @mcp.tool
    async def create_model(
        name: str,
        database_id: int,
        query: str,
        ctx: Context,
        description: str | None = None,
        collection_id: int | None = None,
        result_metadata: list[dict[str, Any]] | None = None,
        visualization_settings: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
  • server.py:700-700 (registration)
    The @mcp.tool decorator on line 700 registers create_model as an MCP tool.
    async def create_model(
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It mentions the creation action and return of the model object, but does not disclose side effects, permissions, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a brief opening, model explanation, and clearly separated Args/Returns sections. It is appropriately sized, though the explanation of models could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description adequately covers purpose, parameters, and returns. However, it lacks details on prerequisites, default behavior (e.g., collection placement), and limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the tool description provides detailed explanations for all 7 parameters in the Args section, adding significant meaning beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a new model in Metabase' and explains that a model is a curated dataset, distinct from saved questions. This distinguishes it from sibling tools like 'create_card' and 'create_mongodb_card'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that models serve as curated datasets and building blocks, providing context on when to use this tool. However, it lacks explicit 'when not to use' guidance or comparisons to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
