
voidfeed_benchmark_lookup

Look up model benchmark scores with confidence intervals for 247 model-benchmark combinations.

Instructions

Look up model benchmark scores. Surface tier: 8 entries. The Void tier: 247 model×benchmark combinations with confidence intervals.

Input Schema

Name       Required  Description                                   Default
---------  --------  --------------------------------------------  -------
model      No        Model name to look up                         (none)
benchmark  No        Benchmark name (e.g. MMLU, HumanEval, MATH)   (none)
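
Both parameters are optional; omitting both presumably returns the unfiltered listing. An illustrative argument object (the model value here is hypothetical; the benchmark value comes from the schema's own examples):

    { "model": "gpt-4", "benchmark": "MMLU" }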

Implementation Reference

  • Schema definition for voidfeed_benchmark_lookup tool: accepts optional 'model' and 'benchmark' string parameters.
    {
      name: 'voidfeed_benchmark_lookup',
      description:
        'Look up model benchmark scores. Surface tier: 8 entries. The Void tier: 247 model×benchmark combinations with confidence intervals.',
      inputSchema: {
        type: 'object',
        properties: {
          model: { type: 'string', description: 'Model name to look up' },
          benchmark: { type: 'string', description: 'Benchmark name (e.g. MMLU, HumanEval, MATH)' },
        },
        required: [],
      },
    },
  • Handler for voidfeed_benchmark_lookup: builds query params from optional 'model' and 'benchmark' args and calls GET /v1/tools/benchmark-lookup on the VoidFeed API.
    case 'voidfeed_benchmark_lookup': {
      const params = new URLSearchParams();
      if (args.model) params.set('model', args.model);
      if (args.benchmark) params.set('benchmark', args.benchmark);
      return vfGet(`/v1/tools/benchmark-lookup?${params}`);
    }
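  • Note: the vfGet helper called above is not included in the excerpt. A minimal sketch of what such a wrapper might look like, assuming a plain fetch call and an invented base URL (both are assumptions, not the project's actual code):
    // Hypothetical sketch of vfGet; the real base URL, auth, and error
    // handling are not shown in the excerpt.
    const VF_BASE = 'https://api.voidfeed.example'; // assumed base URL

    async function vfGet(path) {
      const res = await fetch(`${VF_BASE}${path}`);
      if (!res.ok) {
        // Throw so the CallToolRequestSchema handler's catch block reports it
        throw new Error(`VoidFeed API ${res.status}: ${res.statusText}`);
      }
      return res.json();
    }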
  • index.js:258-276 (registration): registration of the tool handler via the MCP CallToolRequestSchema, which dispatches to handleTool() for all tools, including voidfeed_benchmark_lookup.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      try {
        const result = await handleTool(name, args || {});
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      } catch (err) {
        return {
          content: [{ type: 'text', text: `Error: ${err.message}` }],
          isError: true,
        };
      }
    });
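  • For reference, an MCP client invokes this handler with a tools/call JSON-RPC request. A representative payload (the argument values are hypothetical):
    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "voidfeed_benchmark_lookup",
        "arguments": { "model": "gpt-4", "benchmark": "MMLU" }
      }
    }
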
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations declared, the description adds some context about benchmark scores, including confidence intervals and specific entry counts (8 Surface, 247 Void), but it does not disclose read-only status, authentication requirements, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
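
MCP tool annotations offer a structured way to declare some of this. A hedged sketch of what the schema definition above could add, assuming the tool really is a read-only lookup (the excerpt suggests this but does not confirm it):

    // Hypothetical addition to the tool's schema definition; the hint
    // values assume a read-only lookup that calls an external API.
    annotations: {
      readOnlyHint: true,  // performs no writes or side effects
      openWorldHint: true, // reaches out to the external VoidFeed API
    },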

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with two sentences that front-load the purpose. Every word adds value, and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema declared, the description mentions benchmark scores and confidence intervals but gives no details on output format, sorting, or pagination. For a lookup tool, that is moderate completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptive parameter names, but the description adds no meaning beyond the schema's property descriptions, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
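
As illustration only, richer property descriptions could spell out matching behavior and how the two filters combine. The semantics below are invented, not taken from the actual tool:

    // Purely illustrative rewrite of the two property descriptions; the
    // matching and filtering semantics here are invented for the example.
    model: {
      type: 'string',
      description:
        'Model name, matched case-insensitively against the catalog. Omit to include all models.',
    },
    benchmark: {
      type: 'string',
      description:
        'Benchmark name (e.g. MMLU, HumanEval, MATH). Combine with model to narrow results to a single entry.',
    },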

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool looks up model benchmark scores, mentioning specific tiers (Surface and Void) and counts, which differentiates it from siblings like voidfeed_model_compare (comparison) and voidfeed_catalog (catalog).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for looking up scores but never states when to use it instead of alternatives like voidfeed_model_compare or voidfeed_catalog. No exclusions or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
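
A hedged sketch of what such guidance could look like in this tool's description; the wording is invented for illustration and leans on the sibling tools named above:

    // Illustrative only; not the tool's actual description.
    description:
      'Look up benchmark scores for a specific model and/or benchmark. ' +
      'Use voidfeed_model_compare to contrast two models side by side, ' +
      'and voidfeed_catalog to browse which models are available.',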
