duck_judge

Evaluates and ranks multiple AI model responses using comparative criteria to identify the most effective solution for your query.

Instructions

Have one duck evaluate and rank other ducks' responses. Use after duck_council to get a comparative evaluation.

Input Schema

responses (required): Array of duck responses to evaluate (from duck_council output).
judge (optional): Provider name of the judge duck. Defaults to the first available provider.
criteria (optional): Evaluation criteria. Default: ["accuracy", "completeness", "clarity"].
persona (optional): Judge persona (e.g., "senior engineer", "security expert").
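For illustration, the arguments for a duck_judge call might look like the sketch below. The shape of each item in responses (a provider name plus its response text) is an assumption based on what duck_council would plausibly return, not something confirmed by the server's documentation.

# Illustrative sketch only: the per-item keys "provider" and "response" are
# assumed, not confirmed by the duck_council output schema.
arguments = {
    "responses": [
        {"provider": "openai", "response": "Cache the results and invalidate on write."},
        {"provider": "ollama", "response": "Recompute on every request to stay consistent."},
    ],
    "judge": "openai",                                    # optional; first available provider if omitted
    "criteria": ["accuracy", "completeness", "clarity"],  # these are the defaults
    "persona": "senior engineer",                         # optional
}

These arguments would then be passed as the tool call input when invoking duck_judge through your MCP client.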

MCP directory API

We provide all information about MCP servers via our MCP directory API. For example:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nesquikm/mcp-rubber-duck'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.