elenchus_evaluate_convergence

Generate prompts for LLMs to assess convergence quality in adversarial verification debates, enabling systematic evaluation of security, correctness, and performance analysis.

Instructions

Gets an LLM evaluation prompt for convergence quality assessment. The tool does not run the evaluation itself; it returns a prompt that you then send to an LLM.

Input Schema

Name        Required   Description              Default
sessionId   Yes        Session ID to evaluate   (none)
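
Below is a minimal sketch of calling this tool from a client using the MCP TypeScript SDK. Only the tool name and the `sessionId` parameter come from the schema above; the launch command (`npx elenchus-mcp`), client name, and the session ID value are assumptions for illustration.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio; "npx elenchus-mcp" is an assumed command.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["elenchus-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Request the convergence-evaluation prompt for an existing debate session.
// "session-123" is a placeholder sessionId.
const result = await client.callTool({
  name: "elenchus_evaluate_convergence",
  arguments: { sessionId: "session-123" },
});

// The returned content is the prompt text to forward to an LLM for evaluation.
console.log(result.content);
```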

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jhlee0409/elenchus-mcp'
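
The same endpoint can also be queried from code. A minimal TypeScript sketch follows; the response shape is not documented here, so the result is simply logged.

```typescript
// Fetch this server's metadata from the Glama MCP directory API.
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/jhlee0409/elenchus-mcp"
);
if (!res.ok) {
  throw new Error(`Directory API request failed: ${res.status}`);
}
const serverInfo = await res.json();
console.log(serverInfo);
```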

If you have feedback or need assistance with the MCP directory API, please join our Discord server.