review_output
Identify errors in AI-generated content through independent adversarial review. Returns structured PASS/FAIL verdicts, quality scores, and categorized severity issues with evidence checklists.
Instructions
Performs an adversarial quality review of any AI-generated output: an independent reviewer assumes the author made mistakes and actively looks for problems. Returns a structured verdict (PASS/FAIL/CONDITIONAL_PASS), a score (0-100), categorized issues with severity, and an evidence-based checklist. Works for any output type: code, content, summaries, translations, data extraction, etc.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| output | Yes | The AI-generated output to review (max 100K chars) | |
| criteria | No | Custom review criteria — what specifically to check for | |
| review_type | No | Review category label (e.g., "code", "content", "factual", "translation") | |
| model | No | Reviewer model ID | claude-sonnet-4-6 |
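The schema above can be illustrated with a minimal sketch of a request payload and a response in the described shape. This is not the tool's actual API; the field names in the response (`verdict`, `score`, `issues`, `checklist`) are assumptions based on the structure described in the instructions, and the values are purely illustrative.

```python
import json

# Hypothetical request payload; keys mirror the input schema table above.
request = {
    "output": "def add(a, b): return a - b  # summing helper",
    "criteria": "Check that the implementation matches the comment",
    "review_type": "code",
    "model": "claude-sonnet-4-6",
}

# Illustrative response in the structured-verdict shape described above.
# Field names are assumed, not taken from a real API specification.
response = json.loads("""
{
  "verdict": "FAIL",
  "score": 35,
  "issues": [
    {
      "severity": "critical",
      "description": "Function subtracts instead of adding, contradicting the comment."
    }
  ],
  "checklist": [
    {"item": "Implementation matches stated intent", "passed": false}
  ]
}
""")

# Basic shape checks a caller might perform on the verdict.
assert response["verdict"] in ("PASS", "FAIL", "CONDITIONAL_PASS")
assert 0 <= response["score"] <= 100
assert all("severity" in issue for issue in response["issues"])
```

A caller would typically branch on `verdict`, surface `issues` sorted by severity, and use `score` as a gating threshold (e.g. require 80+ to auto-accept).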