Glama
OrygnsCode

opa-mcp-server

Evaluate Rego with execution trace

rego_eval_with_explain

Evaluate a Rego query with full tracing to see why rules fired or didn't. Returns a structured trace for debugging policy decisions.

Instructions

Evaluate with --explain=full and return a structured trace alongside the result. Use this when an agent needs to see why a rule fired (or didn't) — the trace is the basis for rego_explain_decision.
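As a rough illustration of what such a server likely does under the hood (an assumption, not confirmed by this page), the call can be sketched as a wrapper around the OPA CLI. The flags `--explain=full` and `--format=json` are real `opa eval` flags; the wrapper function itself is hypothetical:

```python
# Hypothetical sketch: building the `opa eval` command line a server
# might run to evaluate a query with full tracing. The wrapper is
# assumed; the CLI flags are real `opa eval` options.

def build_opa_eval_argv(query, policy_paths=(), input_path=None):
    """Construct the argv for a traced `opa eval` invocation."""
    argv = ["opa", "eval", "--format=json", "--explain=full"]
    for p in policy_paths:
        argv += ["--data", p]            # policy/data files or directories
    if input_path is not None:
        argv += ["--input", input_path]  # JSON input document
    argv.append(query)                   # e.g. "data.example.allow"
    return argv

argv = build_opa_eval_argv("data.example.allow", ["policy.rego"], "input.json")
```

The resulting argv could then be passed to `subprocess.run`; the JSON output would contain both the evaluation result and the explanation trace.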

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `query` | Yes | Rego query to evaluate, e.g. `"data.example.allow"`. | |
| `source` | No | Inline Rego policy source. Mutually exclusive with `paths`. | |
| `paths` | No | Policy / data file or directory paths. Each must be inside an allowed root. | |
| `input` | No | Inline input document. | |
| `inputPath` | No | Path to a JSON input file. Mutually exclusive with `input`. | |
| `unknowns` | No | Refs to treat as unknown during partial evaluation. | |
| `partial` | No | Run partial evaluation rather than full evaluation. | |
| `strictBuiltinErrors` | No | Treat builtin errors as fatal instead of returning undefined. | |
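Because `source`/`paths` and `input`/`inputPath` are each mutually exclusive pairs, a caller-side pre-flight check can reject bad argument combinations before the tool is invoked. A minimal sketch, assuming only the schema above (the validator itself is hypothetical, not part of the server):

```python
# Hypothetical pre-flight check for rego_eval_with_explain arguments,
# based on the schema: query is required, and source/paths and
# input/inputPath are mutually exclusive pairs.

def validate_args(args):
    if "query" not in args:
        raise ValueError("query is required, e.g. 'data.example.allow'")
    if "source" in args and "paths" in args:
        raise ValueError("source and paths are mutually exclusive")
    if "input" in args and "inputPath" in args:
        raise ValueError("input and inputPath are mutually exclusive")
    return args

ok = validate_args({
    "query": "data.example.allow",
    "source": 'package example\n\nallow if input.user == "alice"',
    "input": {"user": "alice"},
})
```

A combination such as `{"query": ..., "source": ..., "paths": [...]}` would raise before any evaluation happens.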
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral traits. It indicates evaluation with `--explain=full` and a structured trace, implying read-only behavior, but does not mention authorization, side effects, or constraints (e.g., no destructive actions). A 3 is appropriate given the lack of explicit safety disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no unnecessary words. First sentence states action and key detail (`--explain=full`), second sentence provides usage guidance. Information is front-loaded and each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters and no output schema, the description is minimal. It doesn't describe the return format of the structured trace, pagination, error handling, or how to interpret results. With siblings like `rego_explain_decision`, the trace's role is hinted but not fully detailed. Score 3 reflects adequate but incomplete coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so the baseline is 3. The tool description adds no parameter-specific meaning: it doesn't explain that `query` is the Rego expression to evaluate, and the `source`/`paths` mutual exclusivity is noted only in the schema's own field descriptions, not in the tool description. No additional parameter guidance is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies evaluating Rego with `--explain=full` and returning a structured trace. It uses a specific verb and resource, and the title 'Evaluate Rego with execution trace' differentiates it from siblings like `rego_eval` (no trace) and `rego_explain_decision` (which consumes the trace).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states when to use the tool: 'when an agent needs to see why a rule fired (or didn't)'. It links the trace to `rego_explain_decision`, implying a two-step workflow. However, it does not explicitly exclude any use cases or name alternatives such as `rego_eval` for simpler evaluation without a trace.
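The implied two-step workflow can be sketched as follows. Everything here is assumed: `call_tool` stands in for whatever MCP client invocation the agent uses, and the stub response shape is illustrative only:

```python
# Hypothetical two-step workflow: run the traced evaluation, then feed
# its trace to rego_explain_decision. `call_tool` is a placeholder for
# a real MCP client dispatch; the response shape is assumed.

def call_tool(name, args):
    # Stub: a real agent would send a tools/call request over MCP here.
    return {"result": None, "trace": [{"op": "eval", "note": "stub"}]}

evaluation = call_tool("rego_eval_with_explain", {"query": "data.example.allow"})
explanation = call_tool("rego_explain_decision", {"trace": evaluation["trace"]})
```

For a simple pass/fail check with no debugging need, an agent would presumably skip the trace and call `rego_eval` directly.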

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

