
atom-mcp-server

by A7OM-AI

Get AIPI Index Benchmarks

get_index_benchmarks
Read-only · Idempotent

Retrieve AI inference price benchmarks across modalities, channels, and tiers. Access market-wide pricing data including input, output, and cached costs to analyze trends and compare buying options.

Instructions

AIPI (ATOM Inference Price Index) — chained matched-model price benchmarks for AI inference.

Returns 14 benchmark indexes across four categories:

  • Modality (6): Text, Multimodal, Image, Audio, Video, Voice — what does this type of inference cost?

  • Channel (4): Model Developers, Cloud Marketplaces, Inference Platforms, Neoclouds — where should you buy?

  • Tier (3): Frontier, Budget, Reasoning — what's the premium for capability?

  • Special (1): Open-Source — how much cheaper is open-weight inference?

Each index includes input, cached input, and output pricing per period.

These are market-wide benchmarks, not individual vendor prices. Use them to understand where the market is and how it's moving.
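A "chained matched-model" index can be illustrated with a small sketch. The data and the mean-ratio link formula below are illustrative assumptions, not the actual AIPI computation: the point is only that each chain link compares SKUs present in both consecutive periods, so listings and delistings cannot shift the index through composition changes.

```typescript
// Illustrative chained matched-model index (hypothetical data, not real AIPI
// figures). Each period maps SKU -> price.
type Period = Record<string, number>;

// One chain link: ratio of mean prices over SKUs present in BOTH periods,
// so newly listed or delisted SKUs cannot move the index (composition bias).
function link(prev: Period, curr: Period): number {
  const matched = Object.keys(curr).filter((sku) => sku in prev);
  const mean = (p: Period) =>
    matched.reduce((sum, sku) => sum + p[sku], 0) / matched.length;
  return mean(curr) / mean(prev);
}

// Multiply the links together, starting from a base value of 100.
function chainedIndex(periods: Period[], base = 100): number[] {
  const out = [base];
  for (let i = 1; i < periods.length; i++) {
    out.push(out[i - 1] * link(periods[i - 1], periods[i]));
  }
  return out;
}

const periods: Period[] = [
  { "model-a": 10, "model-b": 20 },
  { "model-a": 9, "model-b": 18, "model-c": 50 }, // model-c is new: excluded from this link
  { "model-b": 18, "model-c": 40 },               // model-a delisted: excluded here
];
console.log(chainedIndex(periods)); // index falls 100 -> 90 -> ~76.8
```

Note how the expensive new "model-c" listing in period 2 does not spike the index: it only starts contributing once it has two consecutive observations.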

Fully public — available to all tiers.

Examples:

  • "What's the current benchmark for text inference?" → index_category="Modality"

  • "Show me all AIPI indexes" → (no params)

  • "Neocloud pricing benchmark" → index_code="AIPI NCL GLB"

  • "Channel pricing comparison" → index_category="Channel"

  • "Open-source vs market pricing" → index_code="AIPI OSS GLB"
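For instance, the "Channel pricing comparison" query above maps to a standard MCP `tools/call` request (JSON-RPC framing shown for illustration; the argument values are from the examples above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_index_benchmarks",
    "arguments": {
      "index_category": "Channel",
      "limit": 10
    }
  }
}
```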

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| index_code | No | Filter by specific AIPI index code, e.g. 'AIPI TXT GLB', 'AIPI DEV GLB', 'AIPI OSS GLB'. Omit to see all indexes. | — |
| index_category | No | Filter by index category: 'Modality', 'Channel', 'Tier', 'Special' | — |
| limit | No | Maximum results to return | 25 |
| _atom_api_key | No | Your ATOM API key for full access. Omit for free tier (redacted data). | — |

Implementation Reference

  • The main handler function that executes the get_index_benchmarks tool logic. It filters index benchmarks by index_code and index_category, queries the database, and formats the response with benchmark data, summary statistics, and methodology information.
    export async function handleGetIndexBenchmarks(
      params: z.infer<z.ZodObject<typeof getIndexBenchmarksSchema>>,
      tier: Tier
    ) {
      // Index benchmarks are fully public — no tier gating
      const filters: string[] = [];
      if (params.index_code && params.index_code.trim() !== "")
        filters.push(`index_code=eq.${params.index_code.trim()}`);
      if (
        params.index_category &&
        params.index_category.trim() !== "" &&
        params.index_category !== "(any)"
      )
        filters.push(`index_category=ilike.*${params.index_category}*`);
    
      const rows = await queryTable<IndexValues>("index_values", filters, {
        order: "date.desc,index_code.asc",
        limit: params.limit,
      });
    
      if (rows.length === 0) {
        return {
          content: [
            {
              type: "text" as const,
              text: JSON.stringify({
                tool: "get_index_benchmarks",
                error: params.index_code
                  ? `No index found for '${params.index_code}'. Omit index_code to see all available indexes.`
                  : "No index data available.",
              }),
            },
          ],
        };
      }
    
      // Extract unique index codes for summary
      const indexCodes = [...new Set(rows.map((r) => r.index_code))];
      const dates = [...new Set(rows.map((r) => r.date))].sort().reverse();
    
      // Group by date for structured output
      const byDate: Record<string, IndexValues[]> = {};
      for (const row of rows) {
        if (!byDate[row.date]) byDate[row.date] = [];
        byDate[row.date].push(row);
      }
    
      // Format each entry
      const formatted = rows.map((r) => ({
        index_code: r.index_code,
        index_category: r.index_category,
        description: r.index_description,
        date: r.date,
        unit: r.unit,
        input_price: r.input_price,
        cached_price: r.cached_price,
        output_price: r.output_price,
        sku_count: r.sku_count,
      }));
    
      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify(
              {
                tool: "get_index_benchmarks",
                tier,
                description:
                  "AIPI (ATOM Inference Price Index) — chained matched-model price benchmarks for AI inference.",
                summary: {
                  total_indexes: indexCodes.length,
                  indexes_available: indexCodes,
                  date_range: {
                    latest: dates[0],
                    earliest: dates[dates.length - 1],
                    total_periods: dates.length,
                  },
                },
                benchmarks: formatted,
                methodology:
                  "Chained matched-model index. Only SKUs present in consecutive periods are compared, eliminating composition bias. See https://a7om.com/methodology",
                source: "https://a7om.com",
              },
              null,
              2
            ),
          },
        ],
      };
    }
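The filter strings above use PostgREST operator syntax (`eq.` for exact match, `ilike.*…*` for case-insensitive substring match). The `queryTable` helper is not shown in this excerpt, so the URL shape below is an assumption; this self-contained sketch only shows how such filters might be assembled into a query string (URL-encoding is added here for safety and is not present in the handler above):

```typescript
// Hypothetical sketch: turn tool params into a PostgREST-style query string,
// mirroring the filter logic in handleGetIndexBenchmarks. The path and
// parameter names follow the handler; the final URL shape is an assumption.
function buildQuery(params: {
  index_code?: string;
  index_category?: string;
  limit?: number;
}): string {
  const parts: string[] = [];
  if (params.index_code?.trim())
    parts.push(`index_code=eq.${encodeURIComponent(params.index_code.trim())}`);
  if (params.index_category?.trim() && params.index_category !== "(any)")
    parts.push(`index_category=ilike.*${encodeURIComponent(params.index_category)}*`);
  parts.push("order=date.desc,index_code.asc"); // newest period first
  parts.push(`limit=${params.limit ?? 25}`);
  return `/index_values?${parts.join("&")}`;
}

console.log(buildQuery({ index_category: "Channel", limit: 10 }));
// → /index_values?index_category=ilike.*Channel*&order=date.desc,index_code.asc&limit=10
```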
  • Zod schema defining the input parameters for get_index_benchmarks tool: optional index_code filter, optional index_category filter, and limit parameter with default of 25.
    export const getIndexBenchmarksSchema = {
      index_code: z
        .string()
        .optional()
        .describe(
          "Filter by specific AIPI index code, e.g. 'AIPI TXT GLB', 'AIPI DEV GLB', 'AIPI OSS GLB'. Omit to see all indexes."
        ),
      index_category: z
        .string()
        .optional()
        .describe(
          "Filter by index category: 'Modality', 'Channel', 'Tier', 'Special'"
        ),
      limit: z
        .coerce.number()
        .int()
        .min(1)
        .max(100)
        .default(25)
        .describe("Maximum results to return (default 25)"),
    };
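Because `limit` uses `z.coerce.number()`, string inputs such as `"10"` are converted to numbers before validation, while values outside 1-100 are rejected (zod does not clamp them). A dependency-free sketch of the equivalent behavior, for readers without zod at hand:

```typescript
// Dependency-free sketch mirroring the limit schema above:
// coerce to number, require an integer in [1, 100], default to 25.
function parseLimit(raw: unknown): number {
  if (raw === undefined) return 25;      // .default(25)
  const n = Number(raw);                 // z.coerce.number()
  if (!Number.isInteger(n)) throw new Error("limit must be an integer");
  if (n < 1 || n > 100) throw new Error("limit must be between 1 and 100");
  return n;
}

console.log(parseLimit("10"));      // → 10 (coerced from string)
console.log(parseLimit(undefined)); // → 25 (default applied)
```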
  • src/server.ts:186-222 (registration)
    Tool registration where get_index_benchmarks is registered with the MCP server. Includes title, detailed description with examples, input schema composition, annotations (readOnly, idempotent), and the wrapper handler that resolves tier and calls handleGetIndexBenchmarks.
      server.registerTool(
        "get_index_benchmarks",
        {
          title: "Get AIPI Index Benchmarks",
          description: `AIPI (ATOM Inference Price Index) — chained matched-model price benchmarks for AI inference.
    
    Returns 14 benchmark indexes across four categories:
    - Modality (6): Text, Multimodal, Image, Audio, Video, Voice — what does this type of inference cost?
    - Channel (4): Model Developers, Cloud Marketplaces, Inference Platforms, Neoclouds — where should you buy?
    - Tier (3): Frontier, Budget, Reasoning — what's the premium for capability?
    - Special (1): Open-Source — how much cheaper is open-weight inference?
    
    Each index includes input, cached input, and output pricing per period.
    
    These are market-wide benchmarks, not individual vendor prices. Use them to understand where the market is and how it's moving.
    
    Fully public — available to all tiers.
    
    Examples:
      - "What's the current benchmark for text inference?" → index_category="Modality"
      - "Show me all AIPI indexes" → (no params)
      - "Neocloud pricing benchmark" → index_code="AIPI NCL GLB"
      - "Channel pricing comparison" → index_category="Channel"
      - "Open-source vs market pricing" → index_code="AIPI OSS GLB"`,
          inputSchema: { ...getIndexBenchmarksSchema, ...apiKeyField },
          annotations: {
            readOnlyHint: true,
            destructiveHint: false,
            idempotentHint: true,
            openWorldHint: false,
          },
        },
        async (params) => {
          const tier = await resolveTier(params._atom_api_key);
          return handleGetIndexBenchmarks(params, tier);
        }
      );
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true and destructiveHint=false, the description adds substantial behavioral context: the return structure (14 indexes across 4 categories), specific pricing components included (input, cached input, output), scope limitations (market-wide vs. individual), and availability ('Fully public — available to all tiers'). Does not mention rate limits or pagination beyond the limit parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured and front-loaded: opens with the AIPI definition, immediately states the 14-index return structure, breaks down the four categories with their analytical purposes, clarifies scope and availability, then provides actionable examples. Every sentence conveys unique information; no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively details the return values (14 benchmarks across 4 categories with specific pricing components). Given the moderate complexity (4 optional parameters, domain-specific concepts) and presence of clear annotations, the description provides sufficient context for an agent to understand the full tool contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value by explaining the taxonomy of index_category (detailing that Modality has 6 types, Channel has 4, etc.) and providing concrete examples for index_code in the examples section ('AIPI TXT GLB', 'AIPI OSS GLB'). This semantic context helps agents understand valid values beyond the schema's type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool as returning 'AIPI (ATOM Inference Price Index) — chained matched-model price benchmarks for AI inference.' It distinguishes from siblings by explicitly stating these are 'market-wide benchmarks, not individual vendor prices,' contrasting with tools like compare_prices or get_vendor_catalog that likely return specific vendor data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance through five concrete examples mapping natural language queries to specific parameter combinations. Also clarifies when NOT to use by stating these are not individual vendor prices, implicitly directing users to sibling tools for specific pricing queries. Includes clear intent: 'Use them to understand where the market is and how it's moving.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
