by dbt-labs

query_metrics

Query the dbt Semantic Layer to analyze business data by grouping, ordering, and filtering metrics, dimensions, and entities for precise insights.

Instructions

This tool supports grouping and ordering by dimensions and entities. To use it, you must first know which specific metrics, dimensions, and entities to provide; call the list_metrics, get_dimensions, and get_entities tools to discover them.

When using the order_by parameter, you must ensure that the dimension or entity also appears in the group_by parameter. When fulfilling a lookback query, prefer using order_by and limit instead of using the where parameter. A lookback query requires that the order_by parameter includes a descending order for a time dimension.
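As an illustrative sketch (the metric and dimension names here are hypothetical, as are the dict field names; discover real names via list_metrics and get_dimensions), a "revenue over the last 7 days" lookback could be parameterized as:

```python
# Hypothetical lookback parameters: most recent 7 days of a "revenue" metric.
# Note the time dimension appears in both group_by and order_by (descending),
# and limit stands in for a where-based date filter.
lookback_params = {
    "metrics": ["revenue"],
    "group_by": [{"name": "metric_time", "grain": "day"}],
    "order_by": [{"name": "metric_time", "descending": True}],
    "limit": 7,
}
```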

The where parameter should use database-agnostic SQL syntax, but dimensions and entities are referenced differently: for categorical dimensions, use {{ Dimension('<name>') }}; for time dimensions, include the grain, as in {{ TimeDimension('<name>', '<grain>') }}; for entities, use {{ Entity('<name>') }}. When referencing dates in the where parameter, use only the yyyy-mm-dd format.
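For example, a where string following the templating rules above (the dimension and entity names are hypothetical) might look like:

```python
# Hypothetical where clause mixing a categorical dimension, a time
# dimension with a 'day' grain, and an entity reference.
# Dates use the required yyyy-mm-dd format.
where = (
    "{{ Dimension('customer__region') }} = 'EMEA' "
    "AND {{ TimeDimension('metric_time', 'day') }} >= '2024-01-01' "
    "AND {{ Entity('order_id') }} IS NOT NULL"
)
```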

Don't call this tool if the user's question cannot be answered with the provided metrics, dimensions, and entities. Instead, clarify what metrics, dimensions, and entities are available and suggest a new question that can be answered and is approximately the same as the user's question.

For queries that may return large amounts of data, it's recommended to use a two-step approach:

  1. First make a query with a small limit to verify the results are what you expect

  2. Then make a follow-up query without a limit (or with a larger limit) to get the full dataset
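The two-step pattern above can be sketched as a pair of parameter sets that differ only in limit (metric and dimension names hypothetical):

```python
# Step 1: probe with a small limit to sanity-check the result shape.
probe_params = {
    "metrics": ["revenue"],
    "group_by": [{"name": "customer__region"}],
    "limit": 5,
}
# Step 2: reuse the same query without a limit for the full dataset.
full_params = {**probe_params, "limit": None}
```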

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| group_by | No | | |
| limit | No | | |
| metrics | Yes | | |
| order_by | No | | |
| where | No | | |

Implementation Reference

  • MCP tool handler function for 'query_metrics'. Decorated with @dbt_mcp_tool, which defines the tool schema and metadata. Dispatches to SemanticLayerFetcher.
    @dbt_mcp_tool(
        description=get_prompt("semantic_layer/query_metrics"),
        title="Query Metrics",
        read_only_hint=True,
        destructive_hint=False,
        idempotent_hint=True,
    )
    async def query_metrics(
        context: SemanticLayerToolContext,
        metrics: list[str],
        group_by: list[GroupByParam] | None = None,
        order_by: list[OrderByParam] | None = None,
        where: str | None = None,
        limit: int | None = None,
    ) -> str:
        result = await context.semantic_layer_fetcher.query_metrics(
            metrics=metrics,
            group_by=group_by,
            order_by=order_by,
            where=where,
            limit=limit,
        )
        if isinstance(result, QueryMetricsSuccess):
            return result.result
        else:
            return result.error
  • Registration of semantic layer tools (including query_metrics) to the FastMCP server instance.
    if config.semantic_layer_config_provider:
        logger.info("Registering semantic layer tools")
        register_sl_tools(
            dbt_mcp,
            config_provider=config.semantic_layer_config_provider,
            client_provider=DefaultSemanticLayerClientProvider(
                config_provider=config.semantic_layer_config_provider,
            ),
            disabled_tools=disabled_tools,
            enabled_tools=enabled_tools,
            enabled_toolsets=enabled_toolsets,
            disabled_toolsets=disabled_toolsets,
        )
  • Core implementation of query_metrics in the SemanticLayerFetcher class, which performs validation, queries the dbt Semantic Layer via the SDK, and formats results or errors.
    async def query_metrics(
        self,
        metrics: list[str],
        group_by: list[GroupByParam] | None = None,
        order_by: list[OrderByParam] | None = None,
        where: str | None = None,
        limit: int | None = None,
        result_formatter: Callable[[pa.Table], str] | None = None,
    ) -> QueryMetricsResult:
        validation_error = await self.validate_query_metrics_params(
            metrics=metrics,
            group_by=group_by,
        )
        if validation_error:
            return QueryMetricsError(error=validation_error)
    
        try:
            query_error = None
            sl_client = await self.client_provider.get_client()
            with sl_client.session():
                # Catching any exception within the session
                # to ensure it is closed properly
                try:
                    parsed_order_by: list[OrderBySpec] = self._get_order_bys(
                        order_by=order_by, metrics=metrics, group_by=group_by
                    )
                    query_result = await asyncio.to_thread(
                        sl_client.query,
                        metrics=metrics,
                        group_by=group_by,  # type: ignore
                        order_by=parsed_order_by,  # type: ignore
                        where=[where] if where else None,
                        limit=limit,
                    )
                except Exception as e:
                    query_error = e
            if query_error:
                return self._format_query_failed_error(query_error)
            formatter = result_formatter or DEFAULT_RESULT_FORMATTER
            json_result = formatter(query_result)
            return QueryMetricsSuccess(result=json_result or "")
        except Exception as e:
            return self._format_query_failed_error(e)
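The capture-inside-the-session pattern used above (catching the exception within the with-block so the session always closes, then handling it afterwards) can be sketched in isolation; session() here is a stand-in, not the real SDK API:

```python
from contextlib import contextmanager


@contextmanager
def session():
    # Stand-in for sl_client.session(); the finally block guarantees cleanup.
    try:
        yield
    finally:
        pass  # real code would close the connection here


def run_query(query):
    query_error = None
    with session():
        # Exceptions are captured, not propagated, so the session always closes.
        try:
            result = query()
        except Exception as e:
            query_error = e
    if query_error is not None:
        return f"error: {query_error}"
    return result
```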
  • Type definitions used for query_metrics tool parameters (e.g., OrderByParam; GroupByParam is imported from an external package) and responses (QueryMetricsSuccess/Error). The input schema is derived from the handler function signature.
    from dataclasses import dataclass
    
    from dbtsl.models.dimension import DimensionType
    from dbtsl.models.entity import EntityType
    from dbtsl.models.metric import MetricType
    
    
    @dataclass
    class OrderByParam:
        name: str
        descending: bool
    
    
    @dataclass
    class MetricToolResponse:
        name: str
        type: MetricType
        label: str | None = None
        description: str | None = None
        metadata: str | None = None
    
    
    @dataclass
    class DimensionToolResponse:
        name: str
        type: DimensionType
        description: str | None = None
        label: str | None = None
        granularities: list[str] | None = None
    
    
    @dataclass
    class EntityToolResponse:
        name: str
        type: EntityType
        description: str | None = None
    
    
    @dataclass
    class SavedQueryToolResponse:
        name: str
        label: str | None = None
        description: str | None = None
        metrics: list[str] | None = None
        group_by: list[str] | None = None
        where: list[str] | None = None
    
    
    @dataclass
    class QueryMetricsSuccess:
        result: str
        error: None = None
    
    
    @dataclass
    class QueryMetricsError:
        error: str
        result: None = None
    
    
    QueryMetricsResult = QueryMetricsSuccess | QueryMetricsError
    
    
    @dataclass
    class GetMetricsCompiledSqlSuccess:
        sql: str
        error: None = None
    
    
    @dataclass
    class GetMetricsCompiledSqlError:
        error: str
        sql: None = None
    
    
    GetMetricsCompiledSqlResult = GetMetricsCompiledSqlSuccess | GetMetricsCompiledSqlError
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It describes prerequisites (must know specific metrics/dimensions/entities), constraints (order_by requires group_by inclusion), best practices (two-step approach for large data sets), syntax requirements (database-agnostic SQL with specific templating), and date format restrictions. It also explains what constitutes a valid lookback query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (instructions, examples, parameters) but is quite lengthy. While every sentence adds value, the extensive examples section could be more concise. The core instructions are front-loaded and efficiently written, but the overall length might be excessive for some use cases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (5 parameters, 0% schema coverage, no annotations, no output schema), the description provides complete contextual information. It covers prerequisites, parameter usage, constraints, best practices, error cases, and provides multiple detailed examples. The description fully compensates for the lack of structured metadata and prepares an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing rich semantic context for all parameters. It explains what 'metrics' should contain, how 'group_by' works with dimensions and entities, the relationship between 'order_by' and 'group_by', specific syntax for the 'where' parameter including templating requirements, and guidance on using 'limit' for verification queries. The examples demonstrate practical parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Queries the dbt Semantic Layer to answer business questions from the data warehouse.' It specifies the action (queries), target (dbt Semantic Layer), and outcome (answer business questions). It also distinguishes from siblings by mentioning specific prerequisite tools (list_metrics, get_dimensions, get_entities) that must be called first.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states: 'Don't call this tool if the user's question cannot be answered with the provided metrics, dimensions, and entities. Instead, clarify what metrics, dimensions, and entities are available and suggest a new question.' It also provides detailed examples of when to use it for different types of business questions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
