Glama
by us-all

dbt-get-run-results

Retrieve per-node results from any dbt run, filtered by status and limited to a maximum count. Use without an invocation ID to access the latest run results.

Instructions

Get per-node results from a specific dbt invocation (or the latest run if invocationId omitted)

Input Schema

  • invocationId (optional): invocation_id from a run; if omitted, the latest run_results.json in target/ is used
  • status (optional): Filter results by status (pass | error | fail | skipped | runtime error | success)
  • limit (optional, default 500): Maximum number of results to return (integer, 1 to 5000)
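As an illustration, the schema's effective validation (optional invocationId and status; limit coerced to an integer between 1 and 5000, defaulting to 500) can be restated in plain TypeScript. This is a hypothetical sketch; the actual server validates with Zod, and the names here are not from the source:

```typescript
// Hypothetical restatement of the tool's input validation in plain
// TypeScript; the real implementation uses a Zod schema. A missing
// limit falls back to 500, and out-of-range values are rejected.
interface RunResultsArgs {
  invocationId?: string;
  status?: string;
  limit: number;
}

function validateArgs(raw: Record<string, unknown>): RunResultsArgs {
  // Mirrors z.coerce.number().int().min(1).max(5000).default(500)
  const limit = raw.limit === undefined ? 500 : Number(raw.limit);
  if (!Number.isInteger(limit) || limit < 1 || limit > 5000) {
    throw new Error("limit must be an integer between 1 and 5000");
  }
  return {
    invocationId: typeof raw.invocationId === "string" ? raw.invocationId : undefined,
    status: typeof raw.status === "string" ? raw.status : undefined,
    limit,
  };
}
```

Calling validateArgs with an empty object yields a limit of 500, and a string value such as "50" is coerced to the number 50, matching Zod's coercion behavior.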

Implementation Reference

  • src/index.ts:87-87 (registration)
    Registration of the 'dbt-get-run-results' tool using the MCP server's tool() helper, with schema from dbtGetRunResultsSchema and handler via wrapToolHandler(dbtGetRunResults).
    tool(
      "dbt-get-run-results",
      "Get per-node results from a specific dbt invocation (or the latest run if invocationId omitted)",
      dbtGetRunResultsSchema.shape,
      wrapToolHandler(dbtGetRunResults),
    );
  • Zod schema for dbt-get-run-results: optional invocationId, optional status filter, and limit (default 500, max 5000).
    export const dbtGetRunResultsSchema = z.object({
      invocationId: z
        .string()
        .optional()
        .describe("invocation_id from a run; if omitted, the latest run_results.json in target/ is used"),
      status: z
        .string()
        .optional()
        .describe("Filter results by status (pass | error | fail | skipped | runtime error | success)"),
      limit: z.coerce.number().int().min(1).max(5000).default(500),
    });
  • Main handler function dbtGetRunResults that loads run results by invocationId (looks up in run history) or from latest run_results.json, applies optional status filter, limits results, and returns metadata + mapped results.
    export async function dbtGetRunResults(
      args: z.infer<typeof dbtGetRunResultsSchema>,
    ): Promise<unknown> {
  let runFile: DbtRunResultsFile;
      if (args.invocationId) {
        const all = listRunHistory(200);
        const match = all.find((r) => r.invocationId === args.invocationId);
        if (!match) throw new Error(`Run not found for invocation_id=${args.invocationId}`);
        runFile = {
          metadata: { generated_at: match.generatedAt, invocation_id: match.invocationId, dbt_schema_version: "" },
          results: match.results,
        };
      } else {
        runFile = loadRunResults();
      }
      let results = runFile.results;
      if (args.status) results = results.filter((r) => r.status === args.status);
      results = results.slice(0, args.limit);
      return {
        metadata: runFile.metadata,
        count: results.length,
        results: results.map((r) => ({
          uniqueId: r.unique_id,
          status: r.status,
          executionTime: r.execution_time,
          failures: r.failures,
          message: r.message,
          adapterResponse: r.adapter_response,
        })),
      };
    }
  • loadRunResults() helper that reads and caches run_results.json from the dbt target directory.
    export function loadRunResults(): DbtRunResultsFile {
      return readWithCache<DbtRunResultsFile>("runResults", targetPath("run_results.json"));
    }
  • Type definition DbtRunResultsFile and DbtRunResult used by the tool for parsing run results data.
export interface DbtRunResultsFile {
  metadata: {
    dbt_schema_version: string;
    generated_at: string;
    invocation_id?: string;
  };
  results: DbtRunResult[];
  elapsed_time?: number;
  args?: Record<string, unknown>;
}

// Shape inferred from the handler's field accesses; optionality is approximate.
export interface DbtRunResult {
  unique_id: string;
  status: string;
  execution_time?: number;
  failures?: number | null;
  message?: string | null;
  adapter_response?: Record<string, unknown>;
}
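The handler's core pipeline (filter by status, truncate to the limit, map snake_case artifact fields to camelCase) can be exercised standalone. The sketch below is illustrative only; the sample results and the selectResults name are not from the source:

```typescript
// Standalone sketch of the handler's filter / limit / map pipeline.
// The sample data is invented for illustration, not from a real dbt run.
interface SampleResult {
  unique_id: string;
  status: string;
  execution_time?: number;
  failures?: number | null;
  message?: string | null;
}

function selectResults(results: SampleResult[], status?: string, limit = 500) {
  // Optional status filter, then truncation, then snake_case -> camelCase mapping
  let selected = status ? results.filter((r) => r.status === status) : results;
  selected = selected.slice(0, limit);
  return selected.map((r) => ({
    uniqueId: r.unique_id,
    status: r.status,
    executionTime: r.execution_time,
    failures: r.failures,
    message: r.message,
  }));
}

const sample: SampleResult[] = [
  { unique_id: "model.jaffle_shop.orders", status: "success", execution_time: 1.2 },
  { unique_id: "test.jaffle_shop.not_null_orders_id", status: "fail", failures: 3 },
  { unique_id: "model.jaffle_shop.customers", status: "skipped" },
];
```

With these inputs, selectResults(sample, "fail") keeps only the failing test node and renames unique_id to uniqueId, mirroring the mapping step in the handler above.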
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only says 'Get per-node results', which implies a read operation, but it fails to disclose behavioral traits such as whether results are cached, what permissions are required, or whether the operation is side-effect free. An agent cannot infer safety or side effects from this description alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no filler, front-loaded with the core purpose. Every part earns its place. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Because there is no output schema, the description should mention the structure of the returned per-node results. It also lacks behavioral context. For a three-parameter tool with no annotations it is minimally complete, but it has gaps in return-value and behavior disclosure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67% (2 of 3 parameters have descriptions). The description adds minimal value: it reiterates that invocationId is optional and defaults to the latest run, and clarifies the status filter values. The limit parameter carries no additional semantics beyond the schema defaults. A baseline score of 3 is appropriate, since the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Get', resource 'per-node results', and scope 'from a specific dbt invocation (or the latest run)'. This distinguishes it from siblings like dbt-list-runs (which lists runs) and dbt-list-models (which lists models). No ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to omit invocationId (to get the latest run), but does not explicitly state when to use this tool versus alternatives such as dbt-failed-tests or dbt-list-runs. It offers no 'when not to use' guidance and names no alternatives, just a basic use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/us-all/dbt-mcp-server'
