
llm_quality_report

Generate comprehensive quality reports for LLM models by evaluating benchmarks, coherence, and capabilities through OpenAI-compatible APIs.

Instructions

Genera un reporte completo de calidad del modelo incluyendo benchmark, coherencia y capacidades (Generates a complete quality report for the model, including benchmark, coherence, and capabilities).

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| baseURL | No | URL of the OpenAI-compatible server (e.g. http://localhost:1234/v1, http://localhost:11434/v1) | |
| apiKey | No | API key (required for OpenAI/Azure, optional for local servers) | |
| model | No | ID of the model to evaluate | |
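
For example, an MCP client might invoke this tool through a tools/call request with arguments like the following; the base URL and model ID are illustrative placeholders, not defaults:

    // Hypothetical tools/call parameters; the URL and model ID are placeholders.
    const callParams = {
      name: "llm_quality_report",
      arguments: {
        baseURL: "http://localhost:11434/v1", // local OpenAI-compatible server
        // apiKey omitted: optional for local servers
        model: "llama3",                      // ID of the model to evaluate
      },
    };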

Implementation Reference

  • The core handler function that implements the llm_quality_report tool. It runs a benchmark, a coherence evaluation, and a capabilities test on the specified model, compiles the metrics, and returns a formatted Markdown report with an overall quality score.
    async llm_quality_report(args: z.infer<typeof CapabilitiesSchema>) {
      const client = getClient(args);
      let output = `# 📋 Reporte de Calidad del Modelo\n\n`;
      output += `*Generando reporte completo...*\n\n`;
    
      // 1. Benchmark básico
      const benchmarkPrompts = [
        "Explica qué es la inteligencia artificial en una oración.",
        "¿Cuánto es 25 * 4?",
        "Traduce 'Hello World' al español.",
      ];
    
      const benchmark = await client.runBenchmark(benchmarkPrompts, {
        model: args.model,
        maxTokens: 100,
      });
    
      output += `## 📊 Benchmark de Rendimiento\n\n`;
      output += `- Latencia promedio: **${benchmark.summary.avgLatencyMs.toFixed(0)} ms**\n`;
      output += `- Velocidad: **${benchmark.summary.avgTokensPerSecond.toFixed(2)} tokens/s**\n`;
      output += `- Tokens generados: ${benchmark.summary.totalTokensGenerated}\n\n`;
    
      // 2. Coherencia
      const coherence = await client.evaluateCoherence(
        "¿Cuál es el sentido de la vida?",
        { model: args.model, runs: 3, temperature: 0.7 }
      );
    
      output += `## 🎯 Coherencia\n\n`;
      output += `- Consistencia: **${(coherence.consistency * 100).toFixed(1)}%**\n`;
      output += `- Longitud promedio de respuesta: ${coherence.avgLength.toFixed(0)} chars\n\n`;
    
      // 3. Capacidades
      const capabilities = await client.testCapabilities({ model: args.model });
    
      output += `## 🧠 Capacidades\n\n`;
      output += `| Área | Latencia | Velocidad |\n`;
      output += `|------|----------|----------|\n`;
      
      const areas = ["reasoning", "coding", "creative", "factual", "instruction"] as const;
      for (const area of areas) {
        const r = capabilities[area];
        output += `| ${area} | ${r.latencyMs}ms | ${r.tokensPerSecond.toFixed(1)} tok/s |\n`;
      }
    
      output += `\n## 📈 Puntuación General\n\n`;
      const avgSpeed = benchmark.summary.avgTokensPerSecond;
      const speedScore = Math.min(100, avgSpeed * 2);
      const coherenceScore = coherence.consistency * 100;
      const overallScore = (speedScore + coherenceScore) / 2;
    
      output += `- Velocidad: ${speedScore.toFixed(0)}/100\n`;
      output += `- Coherencia: ${coherenceScore.toFixed(0)}/100\n`;
      output += `- **Puntuación Total: ${overallScore.toFixed(0)}/100**\n`;
    
      return { content: [{ type: "text" as const, text: output }] };
    },
  • MCP tool schema definition for 'llm_quality_report', including name, description, and input schema (model ID and optional connection properties). Part of the exported 'tools' array.
      {
        name: "llm_quality_report",
        description: "Genera un reporte completo de calidad del modelo incluyendo benchmark, coherencia y capacidades",
        inputSchema: {
          type: "object" as const,
          properties: {
            ...connectionProperties,
            model: { type: "string", description: "ID del modelo a evaluar" },
          },
          required: [],
        },
      },
    ];
  • src/index.ts:42-44 (registration)
    Registers the list tools handler which returns the array of tool definitions including llm_quality_report.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return { tools };
    });
  • src/index.ts:76-77 (registration)
    Dispatch case in the CallToolRequestHandler that routes execution to the llm_quality_report handler.
    case "llm_quality_report":
      return await toolHandlers.llm_quality_report(args as any);
  • Zod schema used for type inference in the llm_quality_report handler arguments, matching the tool's input schema. A hedged sketch of the related ConnectionConfigSchema, connectionProperties, and client interface follows this reference list.
    export const CapabilitiesSchema = ConnectionConfigSchema.extend({
      model: z.string().optional().describe("ID del modelo a usar"),
    });
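
The excerpts above reference getClient and connectionProperties without showing their definitions. As a rough guide only, here is a minimal sketch of how ConnectionConfigSchema, connectionProperties, and the client returned by getClient might be shaped, inferred from the input schema table and from the calls the llm_quality_report handler makes; the field names and method signatures below are assumptions, not the repository's actual code.

    import { z } from "zod";

    // Reconstruction (assumption): connection fields shared by every tool,
    // mirroring the baseURL / apiKey entries in the input schema table above.
    export const ConnectionConfigSchema = z.object({
      baseURL: z
        .string()
        .optional()
        .describe("URL of the OpenAI-compatible server (e.g. http://localhost:1234/v1)"),
      apiKey: z
        .string()
        .optional()
        .describe("API key (required for OpenAI/Azure, optional for local servers)"),
    });

    // Reconstruction (assumption): JSON Schema fragment spread into each tool's
    // inputSchema.properties, as seen in the llm_quality_report tool definition.
    export const connectionProperties = {
      baseURL: { type: "string", description: "URL of the OpenAI-compatible server" },
      apiKey: { type: "string", description: "API key (optional for local servers)" },
    };

    // Reconstruction (assumption): shape of the client returned by getClient(),
    // inferred from how the llm_quality_report handler consumes it.
    export interface LLMClient {
      runBenchmark(
        prompts: string[],
        options: { model?: string; maxTokens?: number }
      ): Promise<{
        summary: {
          avgLatencyMs: number;
          avgTokensPerSecond: number;
          totalTokensGenerated: number;
        };
      }>;

      evaluateCoherence(
        prompt: string,
        options: { model?: string; runs?: number; temperature?: number }
      ): Promise<{ consistency: number; avgLength: number }>;

      testCapabilities(options: { model?: string }): Promise<
        Record<
          "reasoning" | "coding" | "creative" | "factual" | "instruction",
          { latencyMs: number; tokensPerSecond: number }
        >
      >;
    }

    export declare function getClient(
      config: z.infer<typeof ConnectionConfigSchema>
    ): LLMClient;

A real implementation would also need error handling for unreachable servers and missing API keys, which this sketch omits.
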
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool generates a report but doesn't describe what the report contains, its format, whether it's saved or returned, execution time, rate limits, or authentication needs beyond what's implied by the parameters. For a tool with no annotations and potentially complex behavior, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Spanish that directly states the tool's purpose and scope. It's appropriately sized and front-loaded with the main action. However, it could be slightly more structured by explicitly mentioning it's a comprehensive report that aggregates multiple evaluation aspects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of generating a quality report with multiple components (benchmark, coherence, capabilities), no annotations, no output schema, and sibling tools that handle individual aspects, the description is incomplete. It doesn't explain what the output looks like, how it differs from using the specialized tools, or any behavioral details. This leaves significant gaps for an AI agent to understand the tool's full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the schema already documents all three parameters (baseURL, apiKey, model). The tool description adds no meaning about the parameters beyond what the schema provides. Under the scoring rubric, coverage above 80% yields a baseline score of 3 even when the description itself says nothing about parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Genera un reporte completo de calidad del modelo' (Generates a complete quality report for the model). It specifies the verb 'genera' and resource 'reporte de calidad del modelo', and mentions the report includes 'benchmark, coherencia y capacidades' (benchmark, coherence, and capabilities). However, it doesn't explicitly differentiate from sibling tools like llm_benchmark, llm_evaluate_coherence, or llm_test_capabilities, which appear to cover similar aspects individually.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools such as llm_benchmark, llm_evaluate_coherence, or llm_test_capabilities, which seem to handle specific components of the quality report. There's no indication of prerequisites, context, or exclusions for using this comprehensive report tool over the more specialized ones.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ramgeart/llm-mcp-bridge'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.