elenchus_evaluate_convergence
Generates prompts that let an LLM assess convergence quality in adversarial verification debates, enabling systematic evaluation of security, correctness, and performance analyses.
Instructions
Gets an LLM evaluation prompt for convergence quality assessment. Note that this tool does not perform the evaluation itself: it returns a prompt for you to send to an LLM.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | Yes | Session ID to evaluate | — |
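Since the tool only builds a prompt, a typical workflow is to call it with a session ID and then forward the returned prompt to an LLM of your choice. Below is a minimal sketch of an MCP `tools/call` request for this tool; the `sessionId` value is a placeholder and would come from a previously started Elenchus debate session:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "elenchus_evaluate_convergence",
    "arguments": {
      "sessionId": "session-123"
    }
  }
}
```

The response contains the generated evaluation prompt as text content; send that text to an LLM to obtain the actual convergence quality assessment.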