
tokencost-mcp-server

by ankit-aglawe

tokencost_compare_models

Compare pricing across multiple LLM models side by side to analyze input/output costs, context windows, and relative cost differences.

Instructions

Compare pricing across multiple LLM models side by side.

Args:

  • models (string[]): Array of model IDs or names to compare (2-10 models)

Returns: Side-by-side comparison table with input/output costs, context windows, and relative cost differences.

Examples:

  • ["gpt-5", "claude-sonnet-4.6"] → Compare OpenAI vs Anthropic pricing

  • ["gpt-5-mini", "gemini-3-flash", "claude-haiku-4.5"] → Compare budget models
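
Over MCP, a client invokes this tool with a `tools/call` JSON-RPC request. A sketch of that payload, using the model names from the examples above (the request shape follows the MCP specification; the `id` value is arbitrary):

```typescript
// Sketch of the MCP tools/call request a client would send
// to invoke tokencost_compare_models.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "tokencost_compare_models",
    arguments: { models: ["gpt-5", "claude-sonnet-4.6"] },
  },
};
```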

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| models | Yes | Model IDs or names to compare | (none) |

Implementation Reference

  • The handler for the tokencost_compare_models tool. It takes an array of model queries, resolves each one with findModel(), computes the cheapest input and output costs, builds a markdown comparison table with relative pricing multipliers, and returns both text and structured content with the model pricing data.
    async ({ models: modelQueries }) => {
      const results: { query: string; model: ModelPricing | undefined }[] = modelQueries.map(q => ({
        query: q,
        model: findModel(q),
      }));
    
      const notFound = results.filter(r => !r.model);
      const found = results.filter(r => r.model).map(r => r.model!);
    
      if (found.length < 2) {
        return {
          content: [{
            type: "text",
            text: `Need at least 2 valid models to compare. Not found: ${notFound.map(r => r.query).join(", ")}. Use tokencost_list_models to see available models.`,
          }],
        };
      }
    
      const cheapestInput = Math.min(...found.map(m => m.inputPer1M));
      const cheapestOutput = Math.min(...found.map(m => m.outputPer1M));
    
      const lines = ["# Model Pricing Comparison", ""];
      lines.push("| Model | Provider | Input/1M | Output/1M | Context | Max Output |");
      lines.push("|-------|----------|----------|-----------|---------|------------|");
      for (const m of found) {
        const inputMult = m.inputPer1M / cheapestInput;
        const outputMult = m.outputPer1M / cheapestOutput;
        const inputNote = inputMult > 1 ? ` (${inputMult.toFixed(1)}x)` : " (cheapest)";
        const outputNote = outputMult > 1 ? ` (${outputMult.toFixed(1)}x)` : " (cheapest)";
        lines.push(`| ${m.name} | ${m.provider} | $${m.inputPer1M}${inputNote} | $${m.outputPer1M}${outputNote} | ${formatContext(m.contextWindow)} | ${formatContext(m.maxOutput)} |`);
      }
    
      if (notFound.length > 0) {
        lines.push("", `*Models not found: ${notFound.map(r => r.query).join(", ")}*`);
      }
      lines.push("", `*Data from [TokenCost](https://tokencost.app)*`);
    
      const structured = {
        models: found.map(modelToJSON),
        cheapest_input: found.reduce((a, b) => a.inputPer1M < b.inputPer1M ? a : b).id,
        cheapest_output: found.reduce((a, b) => a.outputPer1M < b.outputPer1M ? a : b).id,
        not_found: notFound.map(r => r.query),
      };
    
      return {
        content: [{ type: "text", text: lines.join("\n") }],
        structuredContent: structured,
      };
    }
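
    The relative-pricing note in the table is a plain ratio against the cheapest model. A standalone sketch of just that step, using hypothetical per-1M-token prices rather than the server's data:

```typescript
// Minimal sketch of the handler's relative-pricing step.
// Prices are hypothetical $/1M-token figures, not TokenCost data.
interface PricePoint { name: string; inputPer1M: number }

function priceNotes(models: PricePoint[]): string[] {
  const cheapest = Math.min(...models.map(m => m.inputPer1M));
  return models.map(m => {
    const mult = m.inputPer1M / cheapest;
    // Matches the handler: anything above 1x gets a multiplier,
    // the floor price is labeled "(cheapest)".
    return mult > 1
      ? `$${m.inputPer1M} (${mult.toFixed(1)}x)`
      : `$${m.inputPer1M} (cheapest)`;
  });
}
```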
  • src/index.ts:108-135 (registration)
    Tool registration for tokencost_compare_models using server.registerTool(). Defines the tool name, title, description, inputSchema with Zod validation (array of 2-10 model strings), and annotations for read-only/idempotent behavior.
    server.registerTool(
      "tokencost_compare_models",
      {
        title: "Compare Model Pricing",
        description: `Compare pricing across multiple LLM models side by side.
    
    Args:
      - models (string[]): Array of model IDs or names to compare (2-10 models)
    
    Returns:
      Side-by-side comparison table with input/output costs, context windows, and relative cost differences.
    
    Examples:
      - ["gpt-5", "claude-sonnet-4.6"] → Compare OpenAI vs Anthropic pricing
      - ["gpt-5-mini", "gemini-3-flash", "claude-haiku-4.5"] → Compare budget models`,
        inputSchema: {
          models: z.array(z.string().min(1))
            .min(2, "Provide at least 2 models to compare")
            .max(10, "Maximum 10 models per comparison")
            .describe("Model IDs or names to compare"),
        },
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false,
        },
      },
      // handler (shown above) is passed as the third argument
    );
  • Input schema definition using Zod for the models parameter. Validates that models is an array of non-empty strings with minimum 2 and maximum 10 models per comparison request.
    inputSchema: {
      models: z.array(z.string().min(1))
        .min(2, "Provide at least 2 models to compare")
        .max(10, "Maximum 10 models per comparison")
        .describe("Model IDs or names to compare"),
    },
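
    The same constraints can be expressed without Zod for illustration. A sketch mirroring the schema above (not the server's code), returning the matching error message or null when valid:

```typescript
// Mirrors the Zod schema: an array of non-empty strings, 2-10 entries.
function validateModels(models: unknown): string | null {
  if (!Array.isArray(models) || models.some(m => typeof m !== "string" || m.length < 1))
    return "models must be an array of non-empty strings";
  if (models.length < 2) return "Provide at least 2 models to compare";
  if (models.length > 10) return "Maximum 10 models per comparison";
  return null; // valid
}
```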
  • Type definition for ModelPricing interface that defines the data structure for model pricing data including id, name, provider, inputPer1M, outputPer1M, contextWindow, maxOutput, and optional notes fields.
    export interface ModelPricing {
      id: string;
      name: string;
      provider: string;
      inputPer1M: number;
      outputPer1M: number;
      contextWindow: number;
      maxOutput: number;
      notes?: string;
    }
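
    The handler also calls modelToJSON, which is not shown on this page. A plausible sketch, assuming it simply serializes the interface fields to snake_case keys for structuredContent (the key names here are an assumption, not the server's actual output):

```typescript
interface ModelPricing {
  id: string; name: string; provider: string;
  inputPer1M: number; outputPer1M: number;
  contextWindow: number; maxOutput: number;
  notes?: string;
}

// Hypothetical sketch of modelToJSON (not shown on this page):
// maps the interface fields to snake_case keys.
function modelToJSON(m: ModelPricing) {
  return {
    id: m.id, name: m.name, provider: m.provider,
    input_per_1m: m.inputPer1M, output_per_1m: m.outputPer1M,
    context_window: m.contextWindow, max_output: m.maxOutput,
  };
}
```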
  • Helper functions findModel() and findModels() for resolving model queries. findModel(), used by the handler, returns the first match, preferring an exact id/name match over a substring match; findModels() returns all matching models (by id, name, or provider) and can be used for suggestions.
    export function findModel(query: string): ModelPricing | undefined {
      const q = query.toLowerCase();
      return models.find(m => m.id === q || m.name.toLowerCase() === q) ??
        models.find(m => m.id.includes(q) || m.name.toLowerCase().includes(q));
    }
    
    export function findModels(query: string): ModelPricing[] {
      const q = query.toLowerCase();
      return models.filter(m =>
        m.id.includes(q) || m.name.toLowerCase().includes(q) || m.provider.toLowerCase().includes(q)
      );
    }
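
    The two-stage lookup (exact match first, substring fallback) can be seen with a small mock catalog. The catalog entries here are illustrative, not the server's model list:

```typescript
interface Model { id: string; name: string }

// Mirrors findModel(): prefer an exact id/name match,
// fall back to a substring match.
function lookup(models: Model[], query: string): Model | undefined {
  const q = query.toLowerCase();
  return models.find(m => m.id === q || m.name.toLowerCase() === q) ??
    models.find(m => m.id.includes(q) || m.name.toLowerCase().includes(q));
}

const catalog: Model[] = [
  { id: "gpt-5", name: "GPT-5" },
  { id: "gpt-5-mini", name: "GPT-5 Mini" },
];
```

Note the exact-match pass is what keeps the query "gpt-5" from resolving to "gpt-5-mini", which would also match on substring.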

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ankit-aglawe/tokencost-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.