Evaluate With LLM Judge

evaluate_with_llm_judge

Score agent output semantically using an LLM judge for accuracy, helpfulness, safety, correctness, or faithfulness. Returns a calibrated 0-1 score with rationale and per-dimension breakdown.

Instructions

Score agent output using an LLM as the judge (Anthropic or OpenAI). Returns a calibrated 0..1 score with rationale, per-dimension breakdown, and exact cost.

Sibling tools — evaluate_output runs heuristic rules (free, deterministic, ~ms latency, no API key needed); this tool runs LLM-based semantic scoring (paid, 1-10s latency, requires API key). verify_citations is a SPECIALIZED form of LLM judging that focuses on citation grounding only. log_trace / get_traces handle trace I/O; list_rules / deploy_rule / delete_rule manage heuristic-rule lifecycle. evaluate_with_llm_judge is the GENERAL semantic-scoring path.

Behavior. Calls an external LLM API (Anthropic or OpenAI): it costs money per call, takes 1-10 seconds, and respects the IRIS_LLM_JUDGE_MAX_COST_USD_PER_EVAL cap. Non-deterministic at temperature > 0; the default temperature=0 gives near-deterministic scores. Writes one eval_result row to Iris storage (linked to trace_id if provided) and records the provider response id, latency, token counts, and cost in the rule_results payload. Rate-limited to 20 req/min on HTTP MCP; your LLM provider also enforces its own rate limits (we transparently retry once on 429).

Output shape. Returns JSON: { "id": "<uuid>", "score": 0..1, "passed": boolean, "rationale": string, "dimensions": {...}, "model": string, "provider": "anthropic"|"openai", "template": string, "input_tokens": number, "output_tokens": number, "cost_usd": number, "latency_ms": number }. dimensions has per-dimension sub-scores (e.g., accuracy template returns {factual_claims, citations, internal_consistency}).
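
For orientation, the documented response fields map onto a shape roughly like the following TypeScript interface (a sketch inferred from the field list above, not the server's published typings):

```typescript
// Sketch of the documented response shape; field names come from the
// description above, but the exact typings shipped by the server may differ.
interface LlmJudgeResult {
  id: string;                          // uuid of the stored eval_result row
  score: number;                       // calibrated 0..1
  passed: boolean;
  rationale: string;
  dimensions: Record<string, number>;  // per-dimension sub-scores, e.g. for the accuracy
                                       // template: factual_claims, citations, internal_consistency
  model: string;
  provider: "anthropic" | "openai";
  template: string;
  input_tokens: number;
  output_tokens: number;
  cost_usd: number;
  latency_ms: number;
}
```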

Use when heuristic rules (via evaluate_output) are too coarse for the quality signal you need — semantic correctness, factual accuracy vs a reference, RAG faithfulness to sources, nuanced safety/helpfulness. Pick the template that matches: accuracy (hallucination detection), helpfulness (does it address the ask), safety (harm potential beyond regex PII), correctness (vs reference answer — pass expected), faithfulness (RAG grounding — pass source_material).
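
To make the template choice concrete, a faithfulness check on a RAG answer might be invoked with arguments along these lines (parameter names follow the input schema below; every value is invented for illustration):

```typescript
// Hypothetical arguments for a faithfulness (RAG grounding) evaluation.
const faithfulnessArgs = {
  template: "faithfulness",
  model: "claude-haiku-4-5",    // pick the model consciously; cost varies ~100x across models
  output: "Our refund window is 30 days, per the 2024 policy update.",
  source_material:
    "Refund policy (2024): purchases may be returned within 30 days of delivery.",
  input: "How long do customers have to request a refund?",
  trace_id: "trace-1234",       // optional, but links the eval to its trace
  max_cost_usd: 0.05,           // tighter cap than the $0.25 default
};
```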

Don't use for simple regex/length/keyword checks (use evaluate_output with heuristic rules — they're free, deterministic, 1000x faster). Don't use without an API key set (IRIS_ANTHROPIC_API_KEY or IRIS_OPENAI_API_KEY). Don't use on very large outputs (>8K tokens) without raising max_cost_usd — the pre-check will refuse the call.

Parameters. model is required (no default; pick consciously since cost varies 100x across models). provider is auto-detected from the model name; override only for ambiguous IDs. expected is REQUIRED when template="correctness" (the reference answer to compare against) and ignored for other templates. source_material is REQUIRED when template="faithfulness" (the RAG sources to ground against) and ignored otherwise. input is optional but improves scoring on the helpfulness/safety templates (it gives the judge the user prompt that produced the output). max_cost_usd defaults to the env var IRIS_LLM_JUDGE_MAX_COST_USD_PER_EVAL or $0.25; the worst-case cost is computed BEFORE the call (input_tokens × prompt_price + max_output_tokens × completion_price) and the call is refused upfront if it would exceed the cap. max_output_tokens caps the judge response (default 512, max 4096); higher means more rationale detail and more cost. temperature defaults to 0 (deterministic). timeout_ms defaults to 60000. trace_id is optional but recommended (it links the eval to its trace in the dashboard).
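
The cost pre-check is plain arithmetic; the sketch below restates the formula above with made-up per-token prices rather than the server's actual pricing table:

```typescript
// Worst-case pre-check as documented:
//   input_tokens × prompt_price + max_output_tokens × completion_price
// The prices here are placeholders, not the values in pricing.ts.
function estimateWorstCaseUsd(
  inputTokens: number,
  maxOutputTokens: number,
  promptPricePerToken: number,
  completionPricePerToken: number,
): number {
  return inputTokens * promptPricePerToken + maxOutputTokens * completionPricePerToken;
}

// Example: 6,000 prompt tokens at $3/M plus 512 output tokens at $15/M
// comes to roughly $0.0257, well under the $0.25 default cap.
const estimate = estimateWorstCaseUsd(6000, 512, 0.000003, 0.000015);
if (estimate > 0.25) {
  throw new Error(`estimated worst-case cost $${estimate.toFixed(4)} exceeds max_cost_usd`);
}
```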

Error modes. Throws when the required API key env var is missing. Throws when the estimated worst-case cost exceeds max_cost_usd (raise the cap or trim prompts). Throws LLMJudgeError on provider errors — kind=auth on 401/403, rate_limit on 429 (auto-retried once), server_error on 5xx, timeout on abort, malformed_response when the judge fails to emit valid JSON on both attempts. Throws "Unknown model" for unsupported model IDs — update src/eval/llm-judge/pricing.ts first.
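
A caller that wants to branch on these error kinds could do so roughly as follows (a sketch: runJudge is a hypothetical stand-in for the actual tool invocation, and the error's kind and message fields are assumed from the list above):

```typescript
// Hypothetical error handling around the documented LLMJudgeError kinds.
declare function runJudge(args: Record<string, unknown>): Promise<{ score: number; rationale: string }>;

async function evaluateSafely(args: Record<string, unknown>): Promise<void> {
  try {
    const result = await runJudge(args);
    console.log(result.score, result.rationale);
  } catch (err: any) {
    switch (err?.kind) {
      case "auth":               // 401/403: check IRIS_ANTHROPIC_API_KEY / IRIS_OPENAI_API_KEY
      case "rate_limit":         // 429: the tool already retried once; back off before calling again
      case "server_error":       // provider 5xx
      case "timeout":            // aborted after timeout_ms
      case "malformed_response": // judge did not emit valid JSON on either attempt
        console.error(`LLM judge failed (${err.kind}): ${err.message}`);
        break;
      default:
        throw err;               // e.g. missing API key, cost-cap refusal, unknown model
    }
  }
}
```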

Input Schema

output (required): The agent output text to evaluate.
template (required): Judge dimension: accuracy (factual correctness), helpfulness (does it address the ask), safety (harm potential), correctness (vs reference answer; requires `expected`), faithfulness (RAG grounding; requires `source_material`).
model (required): Model ID. Supported: anthropic = claude-opus-4-7 | claude-sonnet-4-6 | claude-haiku-4-5 | claude-haiku-4-5-20251001; openai = gpt-4o | gpt-4o-mini | o1-mini.
provider (optional): Auto-detected from the model when omitted.
input (optional): User question / prompt that produced the output (improves accuracy for helpfulness/safety).
expected (optional): Reference answer (required for the correctness template).
source_material (optional): Provided RAG sources (required for the faithfulness template).
trace_id (optional): Link this evaluation to a trace.
max_cost_usd (optional): Cost cap in USD; defaults to IRIS_LLM_JUDGE_MAX_COST_USD_PER_EVAL or 0.25.
max_output_tokens (optional): Judge output token cap; default 512.
temperature (optional): Sampling temperature; default 0 (deterministic).
timeout_ms (optional): Per-request timeout; default 60_000.
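
As a second worked example, a correctness-template call (the one that requires `expected`) might pass arguments like these; the values are again hypothetical:

```typescript
// Hypothetical correctness-template arguments; `expected` is mandatory here.
const correctnessArgs = {
  template: "correctness",
  model: "gpt-4o-mini",
  output: "The Peace of Westphalia was signed in 1648.",
  expected: "The Peace of Westphalia was signed in 1648, ending the Thirty Years' War.",
  trace_id: "trace-5678",   // optional but recommended
};
```
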
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only give minimal flags (readOnlyHint=false, destructiveHint=false, idempotentHint=false, openWorldHint=true). The description compensates fully: costs money, 1-10s latency, non-deterministic at temperature>0, writes to storage, rate-limited to 20 req/min, error modes with kind specifics, and a pre-check that refuses calls exceeding cost cap. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Despite length, the description is well-structured with logical sections: purpose, sibling differentiation, behavior, output shape, usage guidance, parameter details, error modes. Every sentence adds value and the most important information (purpose, when to use) is front-loaded. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 12 parameters, 5 templates, complex error modes, costing, and rate limits, the description covers all critical aspects. It explicitly lists the output fields in JSON format (no output schema provided). The error modes section is thorough. The cost pre-check behavior is clearly explained. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all 12 parameters described), but the description adds substantial meaning: explains dependencies (expected required for correctness, source_material for faithfulness), defaults (temperature=0, max_output_tokens=512), cost implications (max_cost_usd, max_output_tokens), and provider auto-detection logic. This goes well beyond the schema's minLength/enum/description fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a clear verb+resource: 'Score agent output using an LLM as the judge (Anthropic or OpenAI).' It then lists specific outputs (calibrated 0..1 score, rationale, per-dimension breakdown, cost). The sibling tools section explicitly distinguishes it from evaluate_output (heuristic rules), verify_citations (specialized citation judging), and trace I/O tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use conditions: 'Use when heuristic rules (via evaluate_output) are too coarse...' Lists when not to use: 'Don't use for simple regex/length/keyword checks... Don't use without an API key... Don't use on very large outputs...' and gives alternatives (evaluate_output). Also explains which template to pick for each use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
