# rego_bench: Benchmark Rego query

Benchmark a Rego query against policy and input to obtain timing statistics, identifying slow rules.
## Instructions
Benchmark a Rego query against a policy + input with `opa bench`. Returns statistical timing data: iterations, ns/op, and allocation counts. Use this to spot slow rules.
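As a sketch of how a caller might fill in the tool's arguments: the object below is illustrative only (the policy path and input fields are hypothetical, not from a real project), and note that the handler rejects calls that supply both `input` and `inputPath`.

```typescript
// Illustrative rego_bench arguments; policy path and input fields are
// hypothetical. Supply either `input` or `inputPath`, never both.
const benchArgs = {
  query: 'data.authz.allow',                // required: the Rego query to time
  paths: ['policies/authz.rego'],           // optional: policy/data files to load
  input: { user: 'alice', action: 'read' }, // optional: inline input document
  count: 1000,                              // optional: benchmark iterations
};

console.log(Object.keys(benchArgs).length); // 4
```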
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Rego query to benchmark. | |
| paths | No | Policy / data paths to load. Each must be in an allowed root. | |
| input | No | Inline input document. | |
| inputPath | No | Path to a JSON input file. | |
| count | No | Number of benchmark iterations. | OPA's built-in default |
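The constraints in the table (and the handler's either/or rule for `input` vs. `inputPath`) can be mirrored by a small standalone validator. This is a simplified sketch of what the real Zod schema and handler guard enforce, not the project's actual code:

```typescript
// Simplified stand-in for the constraints enforced by the input schema
// and the handler's mutual-exclusion guard.
interface RegoBenchArgs {
  query: string;
  paths?: string[];
  input?: unknown;
  inputPath?: string;
  count?: number;
}

function validateArgs(args: RegoBenchArgs): string | null {
  if (args.query.length === 0) return 'query must be a non-empty string';
  if (args.count !== undefined && (!Number.isInteger(args.count) || args.count <= 0)) {
    return 'count must be a positive integer';
  }
  if (args.input !== undefined && args.inputPath) {
    return 'provide either input or inputPath, not both';
  }
  return null; // valid
}

console.log(validateArgs({ query: 'data.authz.allow', count: 100 })); // null
console.log(validateArgs({ query: '', count: 0 }));
```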
## Implementation Reference
- `src/tools/evaluation/bench.ts:43-101` (handler): the handler function `registerRegoBench`, which registers the `rego_bench` tool and contains the full execution logic: it validates input (inline `input` vs. `inputPath`), resolves paths, calls `opa.bench(...)`, checks the subprocess exit code, parses the JSON output, and returns the result.
```ts
export function registerRegoBench(server: McpServer, config: Config): void {
  const opa = new OpaCli(config);
  server.registerTool(
    'rego_bench',
    {
      title: 'Benchmark Rego query',
      description:
        'Benchmark a Rego query against a policy + input with `opa bench`. Returns statistical timing data: iterations, ns/op, and allocation counts. Use this to spot slow rules.',
      inputSchema: RegoBenchInput,
    },
    async ({ query, paths, input, inputPath, count }) => {
      return withToolEnvelope<RegoBenchOutput>(config, async () => {
        if (input !== undefined && inputPath) {
          return err(
            'INVALID_INPUT',
            'rego_bench accepts either `input` or `inputPath`, not both.',
          );
        }
        let resolvedPaths: string[] | undefined;
        if (paths?.length) {
          const validation = validatePaths(paths, config, { mustExist: true });
          if (!validation.ok) return validation.error;
          resolvedPaths = validation.resolved;
        }
        let resolvedInputPath: string | undefined;
        if (inputPath) {
          const validation = validatePaths([inputPath], config, { mustExist: true });
          if (!validation.ok) return validation.error;
          resolvedInputPath = validation.resolved[0];
        }
        const result = await opa.bench({
          query,
          paths: resolvedPaths,
          input,
          inputPath: resolvedInputPath,
          count,
        });
        const subprocessFailure = mapSubprocessFailure(result, 'opa');
        if (subprocessFailure) return subprocessFailure;
        if (result.exitCode !== 0) {
          return err('EVAL_ERROR', 'opa bench exited with an error.', {
            details: { stderr: result.stderr.trim() },
          });
        }
        const parsed = tryParseJson<RegoBenchOutput>(result.stdout);
        if (parsed === undefined) {
          return err('UNKNOWN_ERROR', 'opa bench produced no parseable JSON output.', {
            details: { stdout: result.stdout.trim() },
          });
        }
        return ok<RegoBenchOutput>(parsed);
      });
    },
  );
}
```

- `src/tools/evaluation/bench.ts:21-35` (schema): the input schema `RegoBenchInput`, a Zod schema for the tool's parameters: `query` (required string), `paths` (optional string array), `input` (optional unknown), `inputPath` (optional string), and `count` (optional positive integer).
```ts
const RegoBenchInput = {
  query: z.string().min(1).describe('Rego query to benchmark.'),
  paths: z
    .array(z.string())
    .optional()
    .describe('Policy / data paths to load. Each must be in an allowed root.'),
  input: z.unknown().optional().describe('Inline input document.'),
  inputPath: z.string().optional().describe('Path to a JSON input file.'),
  count: z
    .number()
    .int()
    .positive()
    .optional()
    .describe("Number of benchmark iterations. Defaults to OPA's built-in default."),
};
```

- `src/tools/evaluation/bench.ts:37-41` (schema): the output interface `RegoBenchOutput`, defining the shape of the benchmark results: `iterations` (optional number), `metrics` (optional record), and `raw` (optional unknown).
```ts
export interface RegoBenchOutput {
  iterations?: number;
  metrics?: Record<string, unknown>;
  raw?: unknown;
}
```

- `src/tools/evaluation/index.ts:11-19` (registration): tool registration: `registerRegoBench` is imported from `./bench.js` and called inside `registerEvaluationTools` to register the tool with the MCP server.
```ts
import { registerRegoBench } from './bench.js';
import { registerRegoCompileQuery } from './compile.js';
import { registerRegoEval } from './eval.js';
import { registerRegoTest } from './test.js';

export function registerEvaluationTools(server: McpServer, config: Config): void {
  registerRegoEval(server, config); // registers rego_eval + 3 variants
  registerRegoTest(server, config);
  registerRegoBench(server, config);
```

- `src/tools/index.ts:33-39` (registration): top-level registration: `registerEvaluationTools` (which includes `rego_bench`) is called from the main `registerTools` function.
```ts
import { registerEvaluationTools } from './evaluation/index.js';
import { registerHelperTools } from './helpers/index.js';
import { registerServerManagementTools } from './server-management/index.js';

export function registerTools(server: McpServer, config: Config): void {
  registerAuthoringTools(server, config);
  registerEvaluationTools(server, config);
```
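The handler's last step relies on `tryParseJson`, a helper whose source is not shown above. A minimal stand-in sketch (an assumed reimplementation, not the project's actual helper) illustrates the behavior the handler depends on: valid JSON is returned typed, and anything else yields `undefined`, which the handler maps to an `UNKNOWN_ERROR`.

```typescript
// Minimal stand-in for the tryParseJson<T> helper the handler depends on:
// returns the parsed value, or undefined when stdout is not valid JSON.
function tryParseJson<T>(text: string): T | undefined {
  try {
    return JSON.parse(text) as T;
  } catch {
    return undefined;
  }
}

// Shape of the benchmark results, per the interface above.
interface RegoBenchOutput {
  iterations?: number;
  metrics?: Record<string, unknown>;
  raw?: unknown;
}

const good = tryParseJson<RegoBenchOutput>('{"iterations": 1000}');
const bad = tryParseJson<RegoBenchOutput>('benchmark failed: not json');

console.log(good?.iterations); // 1000
console.log(bad); // undefined
```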