
evaluate_llm_response_on_multiple_criteria

Assess LLM responses against multiple evaluation criteria using Atla's models to provide structured scores and critiques for each specified metric.

Instructions

Evaluate an LLM's response to a prompt across multiple evaluation criteria.

This function uses an Atla evaluation model under the hood to return a list of dictionaries, each containing an evaluation score and critique for a given criterion.

Returns: list[dict[str, str]]: A list of dictionaries containing the evaluation score and critique, in the format `{"score": <score>, "critique": <critique>}`. The order of the dictionaries in the list matches the order of the criteria in the `evaluation_criteria_list` argument.
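
For example, a call with two criteria (say, a 1-5 relevance rubric followed by a binary correctness check) returns a two-element list in that same order, along these lines (scores and critiques below are illustrative only, not real model output):

    # Illustrative return value for a two-criterion call; values are made up.
    [
        {"score": "4", "critique": "Relevant to the instruction, but misses a minor requirement."},
        {"score": "Yes", "critique": "All statements are supported by the reference response."},
    ]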

Input Schema

  • evaluation_criteria_list (required): The list of evaluation criteria against which to assess the `llm_response`; see the `AnnotatedEvaluationCriteria` schema annotation below for guidance on writing each criterion.
  • llm_prompt (required): The prompt given to an LLM to generate the `llm_response` to be evaluated.
  • llm_response (required): The output generated by the model in response to the `llm_prompt`, which needs to be evaluated.
  • expected_llm_output (optional): A reference or ideal answer to compare against the `llm_response`. This is useful in cases where a specific output is expected from the model. Defaults to None.
  • llm_context (optional): Additional context or information provided to the model during generation. This is useful in cases where the model was provided with additional information that is not part of the `llm_prompt` or `expected_llm_output` (e.g., a RAG retrieval context). Defaults to None.
  • model_id (optional): The Atla model ID to use for evaluation. `atla-selene` is the flagship Atla model, optimized for the highest all-round performance. `atla-selene-mini` is a compact model that is generally faster and cheaper to run. Defaults to `atla-selene`.
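
As a usage sketch, the tool can be invoked from the official `mcp` Python client over stdio. The launch command (`atla-mcp-server`) and the `ATLA_API_KEY` environment variable below are assumptions about how the server is started, not details taken from this page, and the two criteria strings are shortened illustrations:

    import asyncio
    import os

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    RELEVANCE = (
        "Evaluate how well the response fulfills the requirements of the instruction. "
        "Score 1: completely unrelated. Score 5: perfectly relevant. "
        "Your score should be an integer between 1 and 5."
    )
    CORRECTNESS = (
        "Evaluate whether the response is factually consistent with the reference response. "
        '"No": it contains conflicting statements. "Yes": it is fully supported. '
        'Your score should be either "No" or "Yes".'
    )

    async def main() -> None:
        server = StdioServerParameters(
            command="atla-mcp-server",  # assumed launch command
            env={"ATLA_API_KEY": os.environ["ATLA_API_KEY"]},  # assumed env var
        )
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "evaluate_llm_response_on_multiple_criteria",
                    arguments={
                        "evaluation_criteria_list": [RELEVANCE, CORRECTNESS],
                        "llm_prompt": "Summarise the meeting notes in three bullet points.",
                        "llm_response": "- Budget approved\n- Launch moved to Q3\n- Hiring freeze lifted",
                        "model_id": "atla-selene-mini",
                    },
                )
                # The result arrives as MCP content items; print them for inspection.
                for item in result.content:
                    print(item)

    asyncio.run(main())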

Implementation Reference

  • The main asynchronous handler function for the 'evaluate_llm_response_on_multiple_criteria' tool. It evaluates the LLM response by creating a parallel task for each criterion (via the single-criterion helper) and gathering the results.
    async def evaluate_llm_response_on_multiple_criteria(
        ctx: Context,
        evaluation_criteria_list: list[AnnotatedEvaluationCriteria],
        llm_prompt: AnnotatedLlmPrompt,
        llm_response: AnnotatedLlmResponse,
        expected_llm_output: AnnotatedExpectedLlmOutput = None,
        llm_context: AnnotatedLlmContext = None,
        model_id: AnnotatedModelId = "atla-selene",
    ) -> list[dict[str, str]]:
        """Evaluate an LLM's response to a prompt across *multiple* evaluation criteria.

        This function uses an Atla evaluation model under the hood to return a list of
        dictionaries, each containing an evaluation score and critique for a given
        criteria.

        Returns:
            list[dict[str, str]]: A list of dictionaries containing the evaluation
                score and critique, in the format `{"score": <score>, "critique":
                <critique>}`. The order of the dictionaries in the list will match the
                order of the criteria in the `evaluation_criteria_list` argument.
        """
        tasks = [
            evaluate_llm_response(
                ctx=ctx,
                evaluation_criteria=criterion,
                llm_prompt=llm_prompt,
                llm_response=llm_response,
                expected_llm_output=expected_llm_output,
                llm_context=llm_context,
                model_id=model_id,
            )
            for criterion in evaluation_criteria_list
        ]
        results = await asyncio.gather(*tasks)
        return results
  • Registers the 'evaluate_llm_response_on_multiple_criteria' function as an MCP tool using the FastMCP.tool() decorator (a sketch of the surrounding server wiring follows this list).
    mcp.tool()(evaluate_llm_response_on_multiple_criteria)
  • Helper function that performs evaluation on a single criterion using the Atla client. Called in parallel by the main handler for each criterion.
    async def evaluate_llm_response(
        ctx: Context,
        evaluation_criteria: AnnotatedEvaluationCriteria,
        llm_prompt: AnnotatedLlmPrompt,
        llm_response: AnnotatedLlmResponse,
        expected_llm_output: AnnotatedExpectedLlmOutput = None,
        llm_context: AnnotatedLlmContext = None,
        model_id: AnnotatedModelId = "atla-selene",
    ) -> dict[str, str]:
        """Evaluate an LLM's response to a prompt using a given evaluation criteria.

        This function uses an Atla evaluation model under the hood to return a
        dictionary containing a score for the model's response and a textual critique
        containing feedback on the model's response.

        Returns:
            dict[str, str]: A dictionary containing the evaluation score and critique,
                in the format `{"score": <score>, "critique": <critique>}`.
        """
        state = cast(MCPState, ctx.request_context.lifespan_context)
        result = await state.atla_client.evaluation.create(
            model_id=model_id,
            model_input=llm_prompt,
            model_output=llm_response,
            evaluation_criteria=evaluation_criteria,
            expected_model_output=expected_llm_output,
            model_context=llm_context,
        )
        return {
            "score": result.result.evaluation.score,
            "critique": result.result.evaluation.critique,
        }
  • Pydantic type annotation with JSON schema description and examples for the evaluation criteria parameter, key to the tool's input validation.
    AnnotatedEvaluationCriteria = Annotated[
        str,
        WithJsonSchema(
            {
                "description": dedent(
                    """The specific criteria or instructions on which to evaluate the \
                    model output. A good evaluation criteria should provide the model \
                    with: (1) a description of the evaluation task, (2) a rubric of \
                    possible scores and their corresponding criteria, and (3) a \
                    final sentence clarifying expected score format. A good evaluation \
                    criteria should also be specific and focus on a single aspect of \
                    the model output. To evaluate a model's response on multiple \
                    criteria, use the `evaluate_llm_response_on_multiple_criteria` \
                    function and create individual criteria for each relevant evaluation \
                    task. Typical rubrics score responses either on a Likert scale from \
                    1 to 5 or binary scale with scores of 'Yes' or 'No', depending on \
                    the specific evaluation task."""
                ),
                "examples": [
                    dedent(
                        """Evaluate how well the response fulfills the requirements of the instruction by providing relevant information. This includes responding in accordance with the explicit and implicit purpose of given instruction.

                        Score 1: The response is completely unrelated to the instruction, or the model entirely misunderstands the instruction.
                        Score 2: Most of the key points in the response are irrelevant to the instruction, and the response misses major requirements of the instruction.
                        Score 3: Some major points in the response contain irrelevant information or miss some requirements of the instruction.
                        Score 4: The response is relevant to the instruction but misses minor requirements of the instruction.
                        Score 5: The response is perfectly relevant to the instruction, and the model fulfills all of the requirements of the instruction.

                        Your score should be an integer between 1 and 5."""  # noqa: E501
                    ),
                    dedent(
                        """Evaluate whether the information provided in the response is correct given the reference response. Ignore differences in punctuation and phrasing between the response and reference response. It is okay if the response contains more information than the reference response, as long as it does not contain any conflicting statements.

                        Binary scoring
                        "No": The response is not factually accurate when compared against the reference response or includes conflicting statements.
                        "Yes": The response is supported by the reference response and does not contain conflicting statements.

                        Your score should be either "No" or "Yes".
                        """  # noqa: E501
                    ),
                ],
            }
        ),
    ]
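
The helper above retrieves an Atla client from `ctx.request_context.lifespan_context`, so the tool registration relies on a FastMCP server whose lifespan populates that state. The following is a minimal sketch of that wiring, assuming the `atla` SDK's `AsyncAtla` client and an `ATLA_API_KEY` environment variable (neither is shown on this page), not the server's actual implementation:

    import os
    from collections.abc import AsyncIterator
    from contextlib import asynccontextmanager
    from dataclasses import dataclass

    from atla import AsyncAtla  # assumed Atla SDK client
    from mcp.server.fastmcp import FastMCP

    @dataclass
    class MCPState:
        """State shared with tool handlers for the lifetime of the server."""
        atla_client: AsyncAtla

    @asynccontextmanager
    async def lifespan(server: FastMCP) -> AsyncIterator[MCPState]:
        # Create the Atla client once at startup and expose it as the
        # lifespan context that the tool handlers cast to MCPState.
        async with AsyncAtla(api_key=os.environ["ATLA_API_KEY"]) as client:
            yield MCPState(atla_client=client)

    mcp = FastMCP("atla", lifespan=lifespan)
    mcp.tool()(evaluate_llm_response_on_multiple_criteria)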

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/atla-ai/atla-mcp-server'
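
The same request can be made from Python using only the standard library; a minimal sketch mirroring the curl call above (no authentication is shown, matching that example):

    import json
    from urllib.request import urlopen

    # Fetch this server's metadata from the Glama MCP directory API.
    url = "https://glama.ai/api/mcp/v1/servers/atla-ai/atla-mcp-server"
    with urlopen(url) as response:
        server_info = json.load(response)
    print(json.dumps(server_info, indent=2))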

If you have feedback or need assistance with the MCP directory API, please join our Discord server.