OrygnsCode/opa-mcp-server

Evaluate Rego with coverage

rego_eval_with_coverage

Evaluate Rego queries with coverage to verify which policy lines are exercised by tests.

Instructions

Evaluate with --coverage and return per-line coverage data. Useful for verifying that tests actually exercise the rules they're meant to.
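
On success, the result carries a `coverage` field holding OPA's coverage report. A rough TypeScript sketch of that report's shape, assuming the format `opa eval --coverage --format=json` emits (field names may vary across OPA versions, so treat this as illustrative rather than a contract):

    // Illustrative sketch of OPA's coverage report; verify the exact
    // fields against the OPA version you run.
    interface CoverageRange {
      start: { row: number };
      end: { row: number };
    }

    interface FileCoverage {
      covered?: CoverageRange[];     // line ranges the evaluation exercised
      not_covered?: CoverageRange[]; // line ranges never reached
      coverage?: number;             // per-file coverage percentage
    }

    interface CoverageReport {
      files: Record<string, FileCoverage>;
      coverage?: number;             // overall coverage percentage
    }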

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Rego query to evaluate, e.g. "data.example.allow". | |
| source | No | Inline Rego policy source. Mutually exclusive with `paths`. | |
| paths | No | Policy / data file or directory paths. Each must be inside an allowed root. | |
| input | No | Inline input document. | |
| inputPath | No | Path to a JSON input file. Mutually exclusive with `input`. | |
| unknowns | No | Refs to treat as unknown during partial evaluation. | |
| partial | No | Run partial evaluation rather than full evaluation. | |
| strictBuiltinErrors | No | Treat builtin errors as fatal instead of returning undefined. | |
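
As a concrete illustration, a call that measures coverage of an inline policy could pass arguments like these (the policy, query, and input are invented for the example):

    // Hypothetical arguments for a rego_eval_with_coverage call.
    const args = {
      query: 'data.example.allow',
      source: [
        'package example',
        '',
        'allow := input.role == "admin"',
      ].join('\n'),
      input: { role: 'admin' },
    };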

Implementation Reference

  • The handler function for 'rego_eval_with_coverage'. It calls runEval with { coverage: true }, which adds the --coverage flag to the opa eval CLI invocation.
    server.registerTool(
      'rego_eval_with_coverage',
      {
        title: 'Evaluate Rego with coverage',
        description:
          "Evaluate with `--coverage` and return per-line coverage data. Useful for verifying that tests actually exercise the rules they're meant to.",
        inputSchema: SharedEvalInput,
      },
      async (args) => {
        return withToolEnvelope<RegoEvalOutput>(config, () =>
          runEval(opa, config, args, { coverage: true }),
        );
      },
    );
  • SharedEvalInput is the Zod schema shared by all rego_eval variants, including rego_eval_with_coverage. The RegoEvalOutput interface defines the shape of the output, including the optional 'coverage' field.
    export const SharedEvalInput = {
      query: z.string().min(1).describe('Rego query to evaluate, e.g. "data.example.allow".'),
      source: z
        .string()
        .optional()
        .describe('Inline Rego policy source. Mutually exclusive with `paths`.'),
      paths: z
        .array(z.string())
        .optional()
        .describe('Policy / data file or directory paths. Each must be inside an allowed root.'),
      input: z.unknown().optional().describe('Inline input document.'),
      inputPath: z
        .string()
        .optional()
        .describe('Path to a JSON input file. Mutually exclusive with `input`.'),
      unknowns: z
        .array(z.string())
        .optional()
        .describe('Refs to treat as unknown during partial evaluation.'),
      partial: z.boolean().optional().describe('Run partial evaluation rather than full evaluation.'),
      strictBuiltinErrors: z
        .boolean()
        .optional()
        .describe('Treat builtin errors as fatal instead of returning undefined.'),
    };
    
    export interface RegoEvalOutput {
      result?: unknown[];
      errors?: unknown[];
      metrics?: Record<string, unknown>;
      explanation?: unknown[];
      profile?: unknown[];
      coverage?: unknown;
    }
  • runEval is the shared helper used by all rego_eval variants. It passes the coverage flag through to the OPA CLI and runs opa eval.
    export async function runEval(
      opa: OpaCli,
      config: Config,
      args: EvalArgs,
      flags: EvalFlags,
    ): Promise<ToolEnvelope<RegoEvalOutput>> {
      if (!args.source && !args.paths?.length) {
        return err(
          'INVALID_INPUT',
          'rego_eval requires either `source` or at least one entry in `paths`.',
        );
      }
      if (args.input !== undefined && args.inputPath) {
        return err('INVALID_INPUT', 'rego_eval accepts either `input` or `inputPath`, not both.');
      }
    
      const evalInput: EvalInput = { query: args.query };
      if (args.source !== undefined) evalInput.source = args.source;
    
      if (args.paths?.length) {
        const validation = validatePaths(args.paths, config, { mustExist: true });
        if (!validation.ok) return validation.error;
        evalInput.paths = validation.resolved;
      }
    
      if (args.input !== undefined) {
        evalInput.input = args.input;
      } else if (args.inputPath) {
        const inputPathValidation = validatePaths([args.inputPath], config, { mustExist: true });
        if (!inputPathValidation.ok) return inputPathValidation.error;
        evalInput.inputPath = inputPathValidation.resolved[0];
      }
    
      if (args.partial) evalInput.partial = true;
      if (args.unknowns?.length) evalInput.unknowns = args.unknowns;
      if (args.strictBuiltinErrors) evalInput.strictBuiltinErrors = true;
    
      if (flags.explain) evalInput.explain = flags.explain;
      if (flags.profile) evalInput.profile = true;
      if (flags.coverage) evalInput.coverage = true;
      if (flags.metrics) evalInput.metrics = true;
    
      const result = await opa.eval(evalInput);
    
      const subprocessFailure = mapSubprocessFailure(result, 'opa');
      if (subprocessFailure) return subprocessFailure;
    
      // `opa eval` returns exit code 0 even when the query produces no
      // results or partial results. A non-zero exit means a hard error
      // (parse, type, runtime). Output JSON is on stdout.
      const parsed = tryParseJson<RegoEvalOutput>(result.stdout);
    
      if (result.exitCode !== 0) {
        return err('EVAL_ERROR', 'opa eval exited with an error.', {
          details: parsed ?? { stderr: result.stderr.trim(), stdout: result.stdout.trim() },
        });
      }
    
      if (parsed === undefined) {
        return err('UNKNOWN_ERROR', 'opa eval produced no parseable JSON output.', {
          details: { stdout: result.stdout.trim() },
        });
      }
      return ok<RegoEvalOutput>(parsed);
    }
  • The OpaCli.eval() method that actually runs the 'opa eval' subprocess. When input.coverage is set, it adds the '--coverage' flag to the CLI command; the assembled command for a representative coverage call is sketched after this list.
    async eval(input: EvalInput): Promise<SpawnResult> {
      // Inline source becomes a temp file added to --data.
      if (input.source !== undefined) {
        const { source, ...rest } = input;
        void source;
        return this.withTempSource(input.source, (sourcePath) =>
          this.eval({
            ...rest,
            paths: [...(input.paths ?? []), sourcePath],
          }),
        );
      }
    
      const args = ['eval', '--format=json'];
      for (const path of input.paths ?? []) args.push('--data', path);
      if (input.inputPath) args.push('--input', input.inputPath);
      if (input.explain) args.push('--explain', input.explain);
      if (input.profile) args.push('--profile');
      if (input.coverage) args.push('--coverage');
      if (input.metrics) args.push('--metrics');
      if (input.instrument) args.push('--instrument');
      if (input.partial) args.push('--partial');
      for (const ref of input.unknowns ?? []) args.push('--unknowns', ref);
      if (input.strictBuiltinErrors) args.push('--strict-builtin-errors');
      if (input.capabilities) args.push('--capabilities', input.capabilities);
      if (input.schemaDir) args.push('--schema', input.schemaDir);
    
      let stdin: string | undefined;
      if (input.input !== undefined) {
        args.push('--stdin-input');
        stdin = JSON.stringify(input.input);
      }
    
      args.push(input.query);
      return this.run(args, stdin);
    }
  • registerEvaluationTools() is called by the top-level registration entry point (src/tools/index.ts). It calls registerRegoEval(), which registers all four rego_eval variants, including rego_eval_with_coverage.
    export function registerEvaluationTools(server: McpServer, config: Config): void {
      registerRegoEval(server, config); // registers rego_eval + 3 variants
      registerRegoTest(server, config);
      registerRegoBench(server, config);
      registerRegoCompileQuery(server, config);
    }
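
Tracing the coverage example through these layers: runEval validates the arguments and sets `coverage: true`, OpaCli.eval writes the inline source to a temp file, and the spawned command line comes out roughly as follows (the temp path is invented for illustration):

    // Approximate argv assembled by OpaCli.eval for a coverage call;
    // flag order follows the eval() implementation above.
    const argv = [
      'eval',
      '--format=json',
      '--data', '/tmp/opa-source-XXXX.rego', // temp file holding the inline source
      '--coverage',                          // from flags.coverage = true
      '--stdin-input',                       // inline input is piped on stdin
      'data.example.allow',                  // the query is the final positional argument
    ];
    // stdin receives JSON.stringify({ role: 'admin' }).
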
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits on its own. It mentions `--coverage` and per-line coverage data but says nothing about error handling, permissions, side effects, or output format. For a tool that, lacking annotations, must be assumed capable of mutation, that is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and specific output. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite eight parameters and no output schema, the description does not explain the return format, error states, or relationships between parameters (e.g., `partial` and `unknowns`). It is too minimal for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%: every parameter is described in the input schema. The description adds no meaning beyond what the schema already provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool evaluates Rego with coverage and returns per-line coverage data. Its specific verb and resource distinguish it from sibling tools like rego_eval, rego_eval_with_explain, and rego_eval_with_profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear use case: 'verifying that tests actually exercise the rules they're meant to.' It implies when to use the tool but offers no explicit when-not-to-use guidance or comparison to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
