Evaluate Rego query
`rego_eval` evaluates Rego queries against policies and input documents, returning standard OPA results. Supports full and partial evaluation.
Instructions
Evaluate a Rego query against a policy and an input document using `opa eval`. Returns the standard `{result: [...]}` shape. The bread-and-butter authoring tool.
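The `{result: [...]}` shape mentioned above is the standard JSON output of `opa eval`: each result entry carries a list of evaluated expressions with their value, query text, and source location. A minimal sketch of reading a decision out of that shape (the literal values here are hypothetical, not produced by a real `opa` run):

```typescript
// Shape of `opa eval --format json` output for the query `data.example.allow`.
// The values below are illustrative; a real run fills them in.
const opaOutput = {
  result: [
    {
      expressions: [
        {
          value: true, // the evaluated value of the expression
          text: 'data.example.allow', // the query text
          location: { row: 1, col: 1 }, // where the expression appeared
        },
      ],
    },
  ],
};

// Pull the decision out of the first expression of the first result.
const allow = opaOutput.result[0].expressions[0].value;
console.log(allow); // true
```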
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Rego query to evaluate, e.g. "data.example.allow". | |
| source | No | Inline Rego policy source. Mutually exclusive with `paths`. | |
| paths | No | Policy / data file or directory paths. Each must be inside an allowed root. | |
| input | No | Inline input document. | |
| inputPath | No | Path to a JSON input file. Mutually exclusive with `input`. | |
| unknowns | No | Refs to treat as unknown during partial evaluation. | |
| partial | No | Run partial evaluation rather than full evaluation. | |
| strictBuiltinErrors | No | Treat builtin errors as fatal instead of returning undefined. | |
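The required and mutually-exclusive combinations in the table can be summarized as a small validation sketch. This is a hypothetical helper for illustration only; the server's actual checks live in `runEval` (shown below under Implementation Reference):

```typescript
interface EvalArgsSketch {
  query: string;
  source?: string;
  paths?: string[];
  input?: unknown;
  inputPath?: string;
}

// Returns an error message when the argument combination is invalid, else null.
function checkArgs(args: EvalArgsSketch): string | null {
  if (!args.query) return '`query` is required';
  if (!args.source && !args.paths?.length) return 'provide `source` or at least one entry in `paths`';
  if (args.source && args.paths?.length) return '`source` and `paths` are mutually exclusive';
  if (args.input !== undefined && args.inputPath) return '`input` and `inputPath` are mutually exclusive';
  return null;
}

console.log(checkArgs({ query: 'data.example.allow', source: 'package example' })); // null
console.log(checkArgs({ query: 'data.example.allow' })); // 'provide `source` or at least one entry in `paths`'
```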
Implementation Reference
- src/tools/evaluation/eval.ts:25-27 (handler): the actual handler for the `rego_eval` tool. It calls `runEval(opa, config, args, {})` with no extra flags, and is the anonymous async function passed as the third argument to `server.registerTool` for `'rego_eval'`:

```ts
async (args) => {
  return withToolEnvelope<RegoEvalOutput>(config, () => runEval(opa, config, args, {}));
},
```

- Input schema shared by `rego_eval` and its variants. Defines the fields `query`, `source`, `paths`, `input`, `inputPath`, `unknowns`, `partial`, and `strictBuiltinErrors`, all via Zod:
```ts
export const SharedEvalInput = {
  query: z.string().min(1).describe('Rego query to evaluate, e.g. "data.example.allow".'),
  source: z
    .string()
    .optional()
    .describe('Inline Rego policy source. Mutually exclusive with `paths`.'),
  paths: z
    .array(z.string())
    .optional()
    .describe('Policy / data file or directory paths. Each must be inside an allowed root.'),
  input: z.unknown().optional().describe('Inline input document.'),
  inputPath: z
    .string()
    .optional()
    .describe('Path to a JSON input file. Mutually exclusive with `input`.'),
  unknowns: z
    .array(z.string())
    .optional()
    .describe('Refs to treat as unknown during partial evaluation.'),
  partial: z.boolean().optional().describe('Run partial evaluation rather than full evaluation.'),
  strictBuiltinErrors: z
    .boolean()
    .optional()
    .describe('Treat builtin errors as fatal instead of returning undefined.'),
};
```

- Output type for `rego_eval`: `result`, `errors`, `metrics`, `explanation`, `profile`, `coverage`, all optional:

```ts
export interface RegoEvalOutput {
  result?: unknown[];
  errors?: unknown[];
  metrics?: Record<string, unknown>;
  explanation?: unknown[];
  profile?: unknown[];
  coverage?: unknown;
}
```

- Shared `runEval` helper that validates inputs, constructs the `EvalInput`, calls `opa.eval()`, and parses the JSON result into a `ToolEnvelope<RegoEvalOutput>`. Used by `rego_eval` and all its variants:
```ts
export async function runEval(
  opa: OpaCli,
  config: Config,
  args: EvalArgs,
  flags: EvalFlags,
): Promise<ToolEnvelope<RegoEvalOutput>> {
  if (!args.source && !args.paths?.length) {
    return err(
      'INVALID_INPUT',
      'rego_eval requires either `source` or at least one entry in `paths`.',
    );
  }
  if (args.input !== undefined && args.inputPath) {
    return err('INVALID_INPUT', 'rego_eval accepts either `input` or `inputPath`, not both.');
  }
  const evalInput: EvalInput = { query: args.query };
  if (args.source !== undefined) evalInput.source = args.source;
  if (args.paths?.length) {
    const validation = validatePaths(args.paths, config, { mustExist: true });
    if (!validation.ok) return validation.error;
    evalInput.paths = validation.resolved;
  }
  if (args.input !== undefined) {
    evalInput.input = args.input;
  } else if (args.inputPath) {
    const inputPathValidation = validatePaths([args.inputPath], config, { mustExist: true });
    if (!inputPathValidation.ok) return inputPathValidation.error;
    evalInput.inputPath = inputPathValidation.resolved[0];
  }
  if (args.partial) evalInput.partial = true;
  if (args.unknowns?.length) evalInput.unknowns = args.unknowns;
  if (args.strictBuiltinErrors) evalInput.strictBuiltinErrors = true;
  if (flags.explain) evalInput.explain = flags.explain;
  if (flags.profile) evalInput.profile = true;
  if (flags.coverage) evalInput.coverage = true;
  if (flags.metrics) evalInput.metrics = true;
  const result = await opa.eval(evalInput);
  const subprocessFailure = mapSubprocessFailure(result, 'opa');
  if (subprocessFailure) return subprocessFailure;
  // `opa eval` returns exit code 0 even when the query produces no
  // results or partial results. A non-zero exit means a hard error
  // (parse, type, runtime). Output JSON is on stdout.
  const parsed = tryParseJson<RegoEvalOutput>(result.stdout);
  if (result.exitCode !== 0) {
    return err('EVAL_ERROR', 'opa eval exited with an error.', {
      details: parsed ?? { stderr: result.stderr.trim(), stdout: result.stdout.trim() },
    });
  }
  if (parsed === undefined) {
    return err('UNKNOWN_ERROR', 'opa eval produced no parseable JSON output.', {
      details: { stdout: result.stdout.trim() },
    });
  }
  return ok<RegoEvalOutput>(parsed);
}
```

- src/tools/evaluation/eval.ts:14-74 (registration): registration of `rego_eval` and its three variants (`with_explain`, `with_profile`, `with_coverage`) via `server.registerTool`. The function `registerRegoEval` is called from `registerEvaluationTools`:
```ts
export function registerRegoEval(server: McpServer, config: Config): void {
  const opa = new OpaCli(config);
  server.registerTool(
    'rego_eval',
    {
      title: 'Evaluate Rego query',
      description:
        'Evaluate a Rego query against a policy and an input document using `opa eval`. Returns the standard `{result: [...]}` shape. The bread-and-butter authoring tool.',
      inputSchema: SharedEvalInput,
    },
    async (args) => {
      return withToolEnvelope<RegoEvalOutput>(config, () => runEval(opa, config, args, {}));
    },
  );
  server.registerTool(
    'rego_eval_with_explain',
    {
      title: 'Evaluate Rego with execution trace',
      description:
        "Evaluate with `--explain=full` and return a structured trace alongside the result. Use this when an agent needs to see why a rule fired (or didn't) — the trace is the basis for `rego_explain_decision`.",
      inputSchema: SharedEvalInput,
    },
    async (args) => {
      return withToolEnvelope<RegoEvalOutput>(config, () =>
        runEval(opa, config, args, { explain: 'full' }),
      );
    },
  );
  server.registerTool(
    'rego_eval_with_profile',
    {
      title: 'Evaluate Rego with profiling',
      description:
        'Evaluate with `--profile` and return per-rule timing and evaluation counts. Use this to find hot rules in slow policies.',
      inputSchema: SharedEvalInput,
    },
    async (args) => {
      return withToolEnvelope<RegoEvalOutput>(config, () =>
        runEval(opa, config, args, { profile: true, metrics: true }),
      );
    },
  );
  server.registerTool(
    'rego_eval_with_coverage',
    {
      title: 'Evaluate Rego with coverage',
      description:
        "Evaluate with `--coverage` and return per-line coverage data. Useful for verifying that tests actually exercise the rules they're meant to.",
      inputSchema: SharedEvalInput,
    },
    async (args) => {
      return withToolEnvelope<RegoEvalOutput>(config, () =>
        runEval(opa, config, args, { coverage: true }),
      );
    },
  );
}
```
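The `ToolEnvelope`, `ok`, and `err` helpers used throughout `runEval` are not included in this excerpt. A minimal sketch of the discriminated-union pattern they appear to follow, with field names assumed from usage rather than taken from the project's actual definitions:

```typescript
// Hypothetical envelope types inferred from how runEval uses ok() and err().
type ToolEnvelope<T> =
  | { ok: true; data: T }
  | { ok: false; error: { code: string; message: string; details?: unknown } };

function ok<T>(data: T): ToolEnvelope<T> {
  return { ok: true, data };
}

function err(
  code: string,
  message: string,
  extra?: { details?: unknown },
): ToolEnvelope<never> {
  return { ok: false, error: { code, message, details: extra?.details } };
}

// A validation failure and a success, as runEval would produce them.
const failure = err('INVALID_INPUT', 'rego_eval accepts either `input` or `inputPath`, not both.');
console.log(failure.ok); // false

const success = ok<{ result: unknown[] }>({ result: [true] });
console.log(success.ok); // true
```

The discriminant (`ok` here) lets callers narrow the union with a single check before touching either the payload or the error details.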