elenchus_submit_llm_evaluation

Submit LLM evaluation responses for adversarial code verification, enabling systematic analysis of security, correctness, and performance issues through the Socratic method.

Instructions

Submit an LLM evaluation response. Call this tool after receiving an LLM response to an evaluation prompt.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sessionId | Yes | Session ID | |
| evaluationType | Yes | Type of evaluation | |
| llmResponse | Yes | LLM response to the evaluation prompt | |
| targetId | No | Target ID (issue ID for severity/falsePositive evaluations) | |
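A payload for this tool can be checked against the input schema before submission. A minimal client-side sketch in Python, assuming only the field names listed above (`validate_arguments` is a hypothetical helper, not part of the elenchus-mcp server):

```python
# Hypothetical validator for elenchus_submit_llm_evaluation arguments.
# Field names come from the input schema table; everything else is illustrative.

REQUIRED = {"sessionId", "evaluationType", "llmResponse"}
OPTIONAL = {"targetId"}

def validate_arguments(args: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the payload is valid."""
    errors = []
    missing = REQUIRED - args.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    unknown = args.keys() - REQUIRED - OPTIONAL
    if unknown:
        errors.append(f"unknown fields: {sorted(unknown)}")
    return errors

# Example payload for a severity evaluation (values are illustrative).
payload = {
    "sessionId": "sess-123",
    "evaluationType": "severity",
    "llmResponse": "The issue allows SQL injection via unsanitized input...",
    "targetId": "issue-42",
}
print(validate_arguments(payload))  # → []
```

Note that `targetId` is only meaningful for severity and falsePositive evaluations, per the schema description; omitting it is valid.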


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jhlee0409/elenchus-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.