NiclasOlofsson

DBT Core MCP Server

test_models

Run dbt tests on data models to validate quality and identify issues. Test modified models, downstream dependencies, or specific selections using dbt selector syntax.

Instructions

Run dbt tests on models and sources.

State-based selection modes (uses dbt state:modified selector):

  • select_state_modified: Test only models modified since last successful run (state:modified)

  • select_state_modified_plus_downstream: Test modified + downstream dependencies (state:modified+). Note: requires select_state_modified=True

Manual selection (alternative to state-based):

  • select: dbt selector syntax (e.g., "customers", "tag:mart", "test_type:generic")

  • exclude: Exclude specific tests

Args:

  • select: Manual selector for tests/models to test

  • exclude: Exclude selector

  • select_state_modified: Use state:modified selector (changed models only)

  • select_state_modified_plus_downstream: Extend to state:modified+ (changed + downstream)

  • fail_fast: Stop execution on first failure

Returns: Test results with status and failures
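
For illustration, argument payloads for the two selection modes might look like this (hypothetical call shapes; the exact wire format depends on your MCP client):

```python
# Hypothetical argument payloads for the test_models tool,
# mirroring the parameters documented above.

# State-based mode: test models changed since the last successful run,
# plus everything downstream, stopping on the first failure.
state_based_args = {
    "select_state_modified": True,
    "select_state_modified_plus_downstream": True,
    "fail_fast": True,
}

# Manual mode: explicit dbt selector syntax instead of state flags.
# Note the two modes are mutually exclusive ("select" conflicts with
# the select_state_modified* flags).
manual_args = {
    "select": "tag:mart",
    "exclude": "test_type:generic",
}

print(sorted(state_based_args))
# ['fail_fast', 'select_state_modified', 'select_state_modified_plus_downstream']
```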

Input Schema

Name                                    Required
select                                  No
exclude                                 No
select_state_modified                   No
select_state_modified_plus_downstream   No
fail_fast                               No

(The schema defines no per-field descriptions or defaults.)

Output Schema

No arguments

Implementation Reference

  • The core handler function for the 'test_models' tool. Executes 'dbt test' command using the BridgeRunner, supports manual selectors, state-based selection (modified models and downstream), exclude patterns, fail-fast, and progress reporting via MCP context. Parses run_results.json for detailed output.
    async def toolImpl_test_models(
        self,
        ctx: Context | None,
        select: str | None = None,
        exclude: str | None = None,
        select_state_modified: bool = False,
        select_state_modified_plus_downstream: bool = False,
        fail_fast: bool = False,
    ) -> dict[str, Any]:
        """Implementation of test_models tool."""
        # Prepare state-based selection (validates and returns selector)
        selector = await self._prepare_state_based_selection(select_state_modified, select_state_modified_plus_downstream, select)
    
        # Early return if state-based requested but no state exists
        if select_state_modified and not selector:
            return {
                "status": "success",
                "message": "No previous state found - cannot determine modifications",
                "results": [],
                "elapsed_time": 0,
            }
    
        # Build command args
        args = ["test"]
    
        # Add selector if we have one (state-based or manual)
        if selector:
            args.extend(["-s", selector, "--state", "target/state_last_run"])
        elif select:
            args.extend(["-s", select])
    
        if exclude:
            args.extend(["--exclude", exclude])
    
        if fail_fast:
            args.append("--fail-fast")
    
        # Execute with progress reporting
        logger.info(f"Running dbt tests with args: {args}")
    
        # Define progress callback if context available
        async def progress_callback(current: int, total: int, message: str) -> None:
            if ctx:
                await ctx.report_progress(progress=current, total=total, message=message)
    
        result = await self.runner.invoke(args, progress_callback=progress_callback if ctx else None)  # type: ignore
    
        if not result.success:
            error_msg = str(result.exception) if result.exception else "Tests failed"
            response = {
                "status": "error",
                "message": error_msg,
                "command": " ".join(args),
            }
            # Include dbt output for debugging
            if result.stdout:
                response["dbt_output"] = result.stdout
            if result.stderr:
                response["stderr"] = result.stderr
            return response
    
        # Parse run_results.json for details
        run_results = self._parse_run_results()
    
        return {
            "status": "success",
            "command": " ".join(args),
            "results": run_results.get("results", []),
            "elapsed_time": run_results.get("elapsed_time"),
        }
  • FastMCP registration of the 'test_models' tool using @app.tool() decorator. Defines input parameters, comprehensive docstring with usage examples, and delegates execution to the toolImpl_test_models handler.
    async def test_models(
        ctx: Context,
        select: str | None = None,
        exclude: str | None = None,
        select_state_modified: bool = False,
        select_state_modified_plus_downstream: bool = False,
        fail_fast: bool = False,
    ) -> dict[str, Any]:
        """Run dbt tests on models and sources.
    
        State-based selection modes (uses dbt state:modified selector):
        - select_state_modified: Test only models modified since last successful run (state:modified)
        - select_state_modified_plus_downstream: Test modified + downstream dependencies (state:modified+)
          Note: Requires select_state_modified=True
    
        Manual selection (alternative to state-based):
        - select: dbt selector syntax (e.g., "customers", "tag:mart", "test_type:generic")
        - exclude: Exclude specific tests
    
        Args:
            select: Manual selector for tests/models to test
            exclude: Exclude selector
            select_state_modified: Use state:modified selector (changed models only)
            select_state_modified_plus_downstream: Extend to state:modified+ (changed + downstream)
            fail_fast: Stop execution on first failure
    
        Returns:
            Test results with status and failures
        """
        await self._ensure_initialized_with_context(ctx)
        return await self.toolImpl_test_models(ctx, select, exclude, select_state_modified, select_state_modified_plus_downstream, fail_fast)
  • Shared helper function used by test_models (and run_models, build_models) to validate and construct state-based dbt selectors like 'state:modified' or 'state:modified+'. Handles conflicts with manual select and checks for state existence.
    async def _prepare_state_based_selection(
        self,
        select_state_modified: bool,
        select_state_modified_plus_downstream: bool,
        select: str | None,
    ) -> str | None:
        """Validate and prepare state-based selection.
    
        Args:
            select_state_modified: Use state:modified selector
            select_state_modified_plus_downstream: Extend to state:modified+
            select: Manual selector (conflicts with state-based)
    
        Returns:
            The dbt selector string to use ("state:modified" or "state:modified+"), or None if:
            - Not using state-based selection
            - No previous state exists (cannot determine modifications)
    
        Raises:
            ValueError: If validation fails
        """
        # Validate: hierarchical requirement
        if select_state_modified_plus_downstream and not select_state_modified:
            raise ValueError("select_state_modified_plus_downstream requires select_state_modified=True")
    
        # Validate: can't use both state-based and manual selection
        if select_state_modified and select:
            raise ValueError("Cannot use both select_state_modified* flags and select parameter")
    
        # If not using state-based selection, return None
        if not select_state_modified:
            return None
    
        # Check if state exists
        state_dir = self.project_dir / "target" / "state_last_run"  # type: ignore
        if not state_dir.exists():
            # No state - cannot determine modifications
            return None
    
        # Return selector (state exists)
        return "state:modified+" if select_state_modified_plus_downstream else "state:modified"
  • Shared helper to parse dbt's target/run_results.json after run/test/build commands. Extracts simplified results with unique_id, status, message, execution_time, and failures for consistent tool responses.
    def _parse_run_results(self) -> dict[str, Any]:
        """Parse target/run_results.json after dbt run/test/build.
    
        Returns:
            Dictionary with results array and metadata
        """
        if not self.project_dir:
            return {"results": [], "elapsed_time": 0}
    
        run_results_path = self.project_dir / "target" / "run_results.json"
        if not run_results_path.exists():
            return {"results": [], "elapsed_time": 0}
    
        try:
            with open(run_results_path) as f:
                data = json.load(f)
    
            # Simplify results for output
            simplified_results = []
            for result in data.get("results", []):
                simplified_results.append(
                    {
                        "unique_id": result.get("unique_id"),
                        "status": result.get("status"),
                        "message": result.get("message"),
                        "execution_time": result.get("execution_time"),
                        "failures": result.get("failures"),
                    }
                )
    
            return {
                "results": simplified_results,
                "elapsed_time": data.get("elapsed_time", 0),
            }
        except Exception as e:
            logger.warning(f"Failed to parse run_results.json: {e}")
            return {"results": [], "elapsed_time": 0}
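    To see what this simplification produces, here is the same extraction applied to an in-memory payload (illustrative sample data, not real dbt output):

```python
import json


def simplify_run_results(raw: str) -> dict:
    """Sketch of _parse_run_results' extraction step, minus the file I/O."""
    data = json.loads(raw)
    simplified = [
        {
            "unique_id": r.get("unique_id"),
            "status": r.get("status"),
            "message": r.get("message"),
            "execution_time": r.get("execution_time"),
            "failures": r.get("failures"),
        }
        for r in data.get("results", [])
    ]
    return {"results": simplified, "elapsed_time": data.get("elapsed_time", 0)}


# Illustrative payload shaped like dbt's target/run_results.json.
sample = json.dumps({
    "results": [
        {
            "unique_id": "test.jaffle_shop.not_null_customers_id",
            "status": "fail",
            "message": "Got 3 results, configured to fail if != 0",
            "execution_time": 0.41,
            "failures": 3,
        }
    ],
    "elapsed_time": 1.27,
})

parsed = simplify_run_results(sample)
print(parsed["results"][0]["status"], parsed["results"][0]["failures"])
# fail 3
```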
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it runs tests, supports different selection modes (state-based vs. manual), and includes a 'fail_fast' option to stop on first failure. However, it doesn't mention execution time, resource usage, or error handling details, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, then detailing selection modes, parameters, and returns. Each sentence earns its place by providing essential information without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no annotations, but with an output schema), the description is complete. It covers the purpose, usage guidelines, parameter semantics, and behavioral aspects. The presence of an output schema means the description doesn't need to explain return values in detail, and it adequately addresses all other contextual needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all 5 parameters in detail. It clarifies the purpose of each parameter (e.g., 'select' for manual selector syntax, 'exclude' for exclusions), distinguishes between state-based and manual modes, and notes dependencies between parameters, adding significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Run dbt tests') and resources ('on models and sources'), distinguishing it from siblings like 'run_models' (which runs models) and 'analyze_impact' (which analyzes impact). It explicitly focuses on testing rather than building or analyzing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use different selection modes: state-based selection (for modified models) vs. manual selection (as an alternative). It also specifies dependencies between parameters (e.g., 'Requires select_state_modified=True' for the downstream option), helping the agent choose appropriate configurations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/NiclasOlofsson/dbt-core-mcp'
