llm_benchmark

Run benchmarks with multiple prompts to evaluate LLM performance metrics, including response time and quality, for model comparison and testing.

Instructions

Ejecuta un benchmark con mĂșltiples prompts para evaluar rendimiento del modelo ("Run a benchmark with multiple prompts to evaluate model performance").

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| baseURL | No | URL of the OpenAI-compatible server (e.g. http://localhost:1234/v1, http://localhost:11434/v1) | |
| apiKey | No | API key (required for OpenAI/Azure, optional for local servers) | |
| prompts | Yes | List of prompts for the benchmark | |
| model | No | Model ID | |
| maxTokens | No | Max tokens per response | 256 |
| temperature | No | Temperature | 0.7 |
| topP | No | Top P for nucleus sampling | |
| runs | No | Runs per prompt | 1 |
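
For illustration, a call might supply arguments like the sketch below. This is a hypothetical example assuming a local OpenAI-compatible server; the baseURL, model ID, and prompt texts are placeholders, not values taken from this repository.

```typescript
// Hypothetical arguments for one llm_benchmark call against a local
// OpenAI-compatible endpoint. Only `prompts` is required; omitted fields
// fall back to the schema defaults (maxTokens 256, temperature 0.7, runs 1).
const benchmarkArgs = {
  baseURL: "http://localhost:11434/v1", // placeholder local endpoint
  model: "llama3.1:8b",                 // placeholder model ID
  prompts: [
    "Summarize the plot of Hamlet in two sentences.",
    "Write a Python function that reverses a string.",
  ],
  maxTokens: 128,
  temperature: 0.2,
  runs: 3, // each prompt is sent 3 times, i.e. 6 requests in total
};
```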

Implementation Reference

  • Main handler function for the llm_benchmark tool. Validates input with BenchmarkSchema, creates an LLMClient instance, executes the benchmark via client.runBenchmark, and formats results into a comprehensive markdown report including summary stats and detailed per-prompt metrics (a sketch of the inferred result and summary shapes follows this list).
    async llm_benchmark(args: z.infer<typeof BenchmarkSchema>) {
      const client = getClient(args);
      const { results, summary } = await client.runBenchmark(args.prompts, {
        model: args.model,
        maxTokens: args.maxTokens,
        temperature: args.temperature,
        runs: args.runs,
      });
    
      let output = `# 📊 Benchmark Results\n\n`;
      output += `## Resumen\n`;
      output += `- **Prompts totales:** ${summary.totalPrompts}\n`;
      output += `- **Latencia promedio:** ${summary.avgLatencyMs.toFixed(2)} ms\n`;
      output += `- **Tokens/segundo promedio:** ${summary.avgTokensPerSecond.toFixed(2)}\n`;
      output += `- **Total tokens generados:** ${summary.totalTokensGenerated}\n\n`;
      output += `## Resultados Detallados\n\n`;
    
      results.forEach((r, i) => {
        output += `### Prompt ${i + 1}\n`;
        output += `> ${r.prompt.substring(0, 100)}${r.prompt.length > 100 ? "..." : ""}\n\n`;
        output += `- Latencia: ${r.latencyMs} ms\n`;
        output += `- Tokens: ${r.completionTokens}\n`;
        output += `- Velocidad: ${r.tokensPerSecond.toFixed(2)} tok/s\n\n`;
      });
    
      return { content: [{ type: "text" as const, text: output }] };
    },
  • Zod schema defining the input parameters for the llm_benchmark tool, including required prompts array and optional model, token limits, temperature, and run count.
    export const BenchmarkSchema = ConnectionConfigSchema.extend({
      prompts: z.array(z.string()).describe("Lista de prompts para el benchmark"),
      model: z.string().optional().describe("ID del modelo a usar"),
      maxTokens: z.number().optional().default(256).describe("MĂĄximo de tokens por respuesta"),
      temperature: z.number().optional().default(0.7).describe("Temperatura"),
      topP: z.number().optional().describe("Top P para nucleus sampling"),
      runs: z.number().optional().default(1).describe("NĂșmero de ejecuciones por prompt"),
    });
  • src/tools.ts:116-136 (registration)
    MCP tool registration entry in the exported tools array, specifying the name, description, and inputSchema for llm_benchmark to be returned by ListToolsRequest.
    {
      name: "llm_benchmark",
      description: "Ejecuta un benchmark con mĂșltiples prompts para evaluar rendimiento del modelo",
      inputSchema: {
        type: "object" as const,
        properties: {
          ...connectionProperties,
          prompts: {
            type: "array",
            items: { type: "string" },
            description: "Lista de prompts para el benchmark",
          },
          model: { type: "string", description: "ID del modelo" },
          maxTokens: { type: "number", description: "Max tokens por respuesta (default: 256)" },
          temperature: { type: "number", description: "Temperatura (default: 0.7)" },
          topP: { type: "number", description: "Top P para nucleus sampling" },
          runs: { type: "number", description: "Ejecuciones por prompt (default: 1)" },
        },
        required: ["prompts"],
      },
    },
  • src/index.ts:64-65 (registration)
    Dispatch logic in the MCP CallToolRequest handler switch statement that routes execution to the llm_benchmark handler function.
    case "llm_benchmark":
      return await toolHandlers.llm_benchmark(args as any);
  • src/index.ts:42-44 (registration)
    MCP ListToolsRequest handler that returns the tools array containing the llm_benchmark tool definition.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return { tools };
    });
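
The excerpts above do not show the return type of client.runBenchmark, but the fields the handler reads imply result and summary shapes roughly like the sketch below. These interfaces are inferred from usage only; the actual definitions in the repository may use different names or carry extra fields.

```typescript
// Shapes inferred from the fields the llm_benchmark handler accesses.
// Not copied from the repository; treat as an approximation.
interface BenchmarkResult {
  prompt: string;            // the prompt that was sent
  latencyMs: number;         // wall-clock time for the completion, in milliseconds
  completionTokens: number;  // tokens generated for this prompt
  tokensPerSecond: number;   // generation speed for this run
}

interface BenchmarkSummary {
  totalPrompts: number;         // number of prompts benchmarked
  avgLatencyMs: number;         // mean latency across all runs
  avgTokensPerSecond: number;   // mean generation speed
  totalTokensGenerated: number; // sum of completion tokens
}
```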
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'ejecuta un benchmark' but doesn't disclose behavioral traits such as whether this is a read-only operation, potential side effects (e.g., resource consumption, rate limits), authentication needs beyond the apiKey parameter, or what the output looks like (e.g., metrics, timing data). This leaves significant gaps for a tool with 8 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
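
As a sketch of what that disclosure could look like: the MCP tool definition format supports optional behavioral annotations (such as readOnlyHint and openWorldHint), and the description itself can spell out the consequences of a call. The wording and annotation choices below are hypothetical, not taken from the repository.

```typescript
// Hypothetical revision of the llm_benchmark registration entry.
// Description text and annotation values are illustrative only.
{
  name: "llm_benchmark",
  description:
    "Runs each prompt against the target model (prompts Ă— runs requests in total) and " +
    "returns a markdown report with latency, token counts, and tokens/second. " +
    "It modifies no data, but it consumes inference time on the target server and " +
    "billable API credits when pointed at OpenAI or Azure.",
  annotations: {
    readOnlyHint: true,  // no state is modified by this tool
    openWorldHint: true, // it calls out to an external LLM server
  },
  inputSchema: { /* unchanged from the registration shown above */ },
},
```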

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Spanish that directly states the tool's purpose without any fluff. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no annotations, no output schema, and multiple sibling tools), the description is incomplete. It lacks details on behavioral transparency, usage guidelines, and output expectations, which are crucial for an AI agent to invoke this tool correctly in context. The high parameter count and evaluation nature suggest more guidance is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: every parameter is documented in the schema itself. The tool description adds no meaning beyond that (for example, it doesn't explain how the prompts are used in the benchmark or the significance of 'runs'). With full schema coverage, the baseline score of 3 is appropriate: the description neither compensates for gaps nor detracts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
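
One way to close that gap without touching the description is to let the schema's .describe() strings carry the intent. A hypothetical variant of three of the existing fields, with illustrative wording only:

```typescript
import { z } from "zod";

// Hypothetical richer descriptions; ConnectionConfigSchema is the repository's
// base schema (baseURL, apiKey) and is assumed to be in scope.
export const BenchmarkSchema = ConnectionConfigSchema.extend({
  prompts: z.array(z.string())
    .describe("Prompts to benchmark; each is sent `runs` times, so total requests = prompts.length * runs"),
  maxTokens: z.number().optional().default(256)
    .describe("Cap on generated tokens per response; lower values shorten the benchmark but may truncate answers"),
  runs: z.number().optional().default(1)
    .describe("Repetitions per prompt; values above 1 average out latency noise at the cost of proportionally more requests"),
});
```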

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('ejecuta un benchmark') and the resource ('mĂșltiples prompts para evaluar rendimiento del modelo'), providing a specific purpose. However, it doesn't explicitly differentiate this benchmarking tool from sibling tools like 'llm_compare_models' or 'llm_evaluate_coherence', which might also involve model evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'llm_compare_models' and 'llm_evaluate_coherence' that might overlap in evaluation tasks, there's no indication of context, prerequisites, or exclusions to help the agent choose appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
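
A single sentence in the description can carry that guidance. A hypothetical example, assuming the sibling tools behave as their names suggest:

```typescript
// Hypothetical "when to use" sentence to append to the tool description.
// Assumes llm_compare_models and llm_evaluate_coherence do what their names imply.
const usageGuidance =
  "Use llm_benchmark to measure latency and throughput of a single model over a prompt set; " +
  "prefer llm_compare_models to contrast several models on the same prompts, and " +
  "llm_evaluate_coherence to judge answer quality rather than speed.";
```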
