create_evaluation

Evaluate a trained or base model against a dataset with customizable evaluators. Specify dataset and evaluator IDs to run code_execution, similarity, or llm_judge evaluations.

Instructions

Create a new model evaluation. Run your trained model or a base model against a dataset using selected evaluators. Use list_evaluators to see available evaluators (e.g. code_execution, similarity, llm_judge).

Input Schema

  • name (optional): Name for this evaluation run
  • user_model_id (optional): ID of your trained model to evaluate. Either this or base_model is required.
  • base_model (optional): HuggingFace model ID to evaluate (e.g. 'Qwen/Qwen2.5-Coder-7B-Instruct'). Either this or user_model_id is required.
  • dataset_id (required): ID of the evaluation dataset to use. Must be a dataset marked for_evaluation.
  • evaluator_ids (required): List of evaluator IDs to run (use list_evaluators to see options)
  • max_samples (optional): Maximum samples to evaluate (default: all)
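
A minimal invocation sketch, assuming client is an MCP SDK Client already connected to this server; the run name, dataset ID, and evaluator IDs below are hypothetical placeholders:

    // Hypothetical call: IDs and values are illustrative, not real resources.
    const result = await client.callTool({
      name: "create_evaluation",
      arguments: {
        name: "qwen-coder-baseline",                     // optional run label
        base_model: "Qwen/Qwen2.5-Coder-7B-Instruct",    // or user_model_id; one of the two is required
        dataset_id: "ds_abc123",                         // must be a dataset marked for_evaluation
        evaluator_ids: ["code_execution", "similarity"], // discover via list_evaluators
        max_samples: 100,                                // omit to evaluate the whole dataset
      },
    });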

Implementation Reference

  • src/mcp.ts:549-582 (registration)
    Tool registration for 'create_evaluation' in the MCP server's tools list, defining its name, description, and input schema.
    {
      name: "create_evaluation",
      description:
        "Create a new model evaluation. Run your trained model or a base model against a dataset using selected evaluators. " +
        "Use list_evaluators to see available evaluators (e.g. code_execution, similarity, llm_judge).",
      inputSchema: {
        type: "object" as const,
        properties: {
          name: { type: "string", description: "Name for this evaluation run" },
          user_model_id: {
            type: "string",
            description: "ID of your trained model to evaluate. Either this or base_model is required.",
          },
          base_model: {
            type: "string",
            description: "HuggingFace model ID to evaluate (e.g. 'Qwen/Qwen2.5-Coder-7B-Instruct'). Either this or user_model_id is required.",
          },
          dataset_id: {
            type: "string",
            description: "ID of the evaluation dataset to use. Must be a dataset marked for_evaluation.",
          },
          evaluator_ids: {
            type: "array",
            items: { type: "string" },
            description: "List of evaluator IDs to run (use list_evaluators to see options)",
          },
          max_samples: {
            type: "number",
            description: "Maximum samples to evaluate (default: all)",
          },
        },
        required: ["dataset_id", "evaluator_ids"],
      },
    },
  • Handler for the 'create_evaluation' tool call. Validates that either user_model_id or base_model is provided, then calls the client's createEvaluation method.
    case "create_evaluation":
      if (!args?.user_model_id && !args?.base_model) {
        return {
          content: [{ type: "text", text: "Error: either user_model_id or base_model is required" }],
          isError: true,
        };
      }
      result = await getClient().createEvaluation({
        name: args?.name as string | undefined,
        user_model_id: args?.user_model_id as string | undefined,
        base_model: args?.base_model as string | undefined,
        dataset_id: args!.dataset_id as string,
        evaluator_ids: args!.evaluator_ids as string[],
        max_samples: args?.max_samples as number | undefined,
      });
      break;
  • Client-side createEvaluation method that sends a POST request to /api/v1/evaluations with the provided parameters.
    async createEvaluation(params: {
      name?: string;
      user_model_id?: string;
      base_model?: string;
      dataset_id: string;
      evaluator_ids: string[];
      max_samples?: number;
    }): Promise<any> {
      return this.request("POST", "/api/v1/evaluations", params);
    }
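
The request helper used above is not shown on this page. As a minimal sketch, assuming a fetch-based client with a base URL and bearer-token auth (the class name, field names, and error handling here are illustrative assumptions, not confirmed by the source):

    // Illustrative sketch only: the real client's request() implementation
    // and auth scheme are assumptions.
    class EvaluationClient {
      constructor(
        private baseUrl: string,
        private apiKey: string,
      ) {}

      async request(method: string, path: string, body?: unknown): Promise<any> {
        const res = await fetch(`${this.baseUrl}${path}`, {
          method,
          headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${this.apiKey}`, // assumed bearer-token auth
          },
          body: body === undefined ? undefined : JSON.stringify(body),
        });
        if (!res.ok) {
          throw new Error(`${method} ${path} failed with status ${res.status}`);
        }
        return res.json();
      }
    }
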
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It implies a run action but doesn't disclose side effects like cost, asynchronous behavior, or resource usage. The description is too brief for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
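
For context, MCP lets a server attach behavioral annotations to a tool declaration. A sketch of what this tool could declare, with hypothetical values that are not present in the actual server:

    // Hypothetical MCP tool annotations; values are assumptions about this
    // tool's behavior, not taken from the source.
    annotations: {
      title: "Create Evaluation",
      readOnlyHint: false,    // creates a new evaluation run
      destructiveHint: false, // does not modify or delete existing data
      idempotentHint: false,  // repeated calls start separate runs
      openWorldHint: true,    // triggers work on an external evaluation service
    },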

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, with the purpose front-loaded. Every sentence adds meaningful information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Since the input schema covers all parameters and there is no output schema, the description provides an adequate but minimal overview. It could be more complete by noting return values or asynchronous behavior, but it suffices for a straightforward creation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all parameters with 100% coverage. The description adds value by referencing list_evaluators for available evaluators and by stating that either user_model_id or base_model is required, a constraint the schema alone cannot express since both fields are individually optional. This extra context helps the agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
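
As an aside, the either/or model requirement could be encoded in the schema itself rather than left to prose. A sketch using JSON Schema's anyOf, not part of the actual server code:

    // Hypothetical schema variant: anyOf makes validators reject calls
    // that supply neither user_model_id nor base_model.
    inputSchema: {
      type: "object" as const,
      properties: { /* same properties as above */ },
      required: ["dataset_id", "evaluator_ids"],
      anyOf: [
        { required: ["user_model_id"] },
        { required: ["base_model"] },
      ],
    },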

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a new model evaluation and explains the process (running a model against a dataset with evaluators). It distinguishes itself from sibling tools like list_evaluators and show_evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions using list_evaluators to find available evaluators, which provides some guidance. However, it doesn't explicitly state when not to use this tool, such as when to use estimate_evaluation instead, or clarify the choice between user_model_id and base_model.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
