tokencost-mcp-server

by ankit-aglawe
tokencost_get_model_pricing

Retrieve token pricing details for specific AI models including input/output costs per million tokens, context window, and maximum output limits.

Instructions

Get pricing details for a specific LLM model.

Args:

  • model (string): Model ID or name to look up (e.g., "gpt-5", "claude-sonnet-4.6", "gemini-3-pro")

Returns: Model pricing details including input/output costs per 1M tokens, context window, and max output. Returns an error message if the model is not found, with suggestions for similar models.

Examples:

  • "gpt-5" → GPT-5 pricing from OpenAI

  • "claude-opus-4.6" → Claude Opus 4.6 pricing from Anthropic

  • "gemini" → First matching Gemini model
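Under MCP, a client invokes this tool with a `tools/call` JSON-RPC request. A minimal sketch of the payload, built as a TypeScript object literal (the request `id` is arbitrary; only the tool name and `model` argument come from the schema documented here):

```typescript
// Sketch of the JSON-RPC payload an MCP client would send to invoke the tool.
// "gpt-5" corresponds to the first example above.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "tokencost_get_model_pricing",
    arguments: { model: "gpt-5" },
  },
};

console.log(JSON.stringify(request, null, 2));
```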

Input Schema

| Name | Required | Description | Default |
| ----- | -------- | ----------- | ------- |
| model | Yes | Model ID or name (e.g., 'gpt-5', 'claude-sonnet-4.6') | |

Implementation Reference

  • The async handler function for tokencost_get_model_pricing that executes the tool logic. It uses findModel() to look up a model by ID/name, handles not-found cases with suggestions via findModels(), and returns model pricing in both markdown and JSON formats.
    ```typescript
    async ({ model }) => {
      const found = findModel(model);
      if (!found) {
        const similar = findModels(model);
        const suggestion = similar.length > 0
          ? `\n\nDid you mean: ${similar.slice(0, 5).map(m => m.id).join(", ")}?`
          : `\n\nAvailable providers: ${providers.join(", ")}. Use tokencost_list_models to see all models.`;
        return {
          content: [{ type: "text", text: `Model "${model}" not found.${suggestion}` }],
        };
      }
      return {
        content: [{ type: "text", text: modelToMarkdown(found) }],
        structuredContent: modelToJSON(found),
      };
    }
    ```
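The not-found branch assembles its suggestion string from the first five similar ids. The same ternary can be exercised in isolation (the `similar` array and `providers` list below are hypothetical stand-ins for the handler's real inputs):

```typescript
// Stand-ins for the handler's inputs (hypothetical values).
const similar = [{ id: "gpt-5" }, { id: "gpt-5-mini" }];
const providers = ["OpenAI", "Anthropic", "Google"];

// Same ternary as in the handler above.
const suggestion = similar.length > 0
  ? `\n\nDid you mean: ${similar.slice(0, 5).map(m => m.id).join(", ")}?`
  : `\n\nAvailable providers: ${providers.join(", ")}. Use tokencost_list_models to see all models.`;

console.log(suggestion.trim()); // Did you mean: gpt-5, gpt-5-mini?
```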
  • src/index.ts:63-106 (registration)
    The server.registerTool() call that registers the tokencost_get_model_pricing tool with MCP server. Includes tool name, title, description, inputSchema using Zod, annotations, and the handler function.
    ```typescript
    server.registerTool(
      "tokencost_get_model_pricing",
      {
        title: "Get Model Pricing",
        description: `Get pricing details for a specific LLM model.

    Args:
      - model (string): Model ID or name to look up (e.g., "gpt-5", "claude-sonnet-4.6", "gemini-3-pro")

    Returns:
      Model pricing details including input/output costs per 1M tokens, context window, and max output.
      Returns an error message if the model is not found, with suggestions for similar models.

    Examples:
      - "gpt-5" → GPT-5 pricing from OpenAI
      - "claude-opus-4.6" → Claude Opus 4.6 pricing from Anthropic
      - "gemini" → First matching Gemini model`,
        inputSchema: {
          model: z.string().min(1).describe("Model ID or name (e.g., 'gpt-5', 'claude-sonnet-4.6')"),
        },
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false,
        },
      },
      async ({ model }) => {
        const found = findModel(model);
        if (!found) {
          const similar = findModels(model);
          const suggestion = similar.length > 0
            ? `\n\nDid you mean: ${similar.slice(0, 5).map(m => m.id).join(", ")}?`
            : `\n\nAvailable providers: ${providers.join(", ")}. Use tokencost_list_models to see all models.`;
          return {
            content: [{ type: "text", text: `Model "${model}" not found.${suggestion}` }],
          };
        }
        return {
          content: [{ type: "text", text: modelToMarkdown(found) }],
          structuredContent: modelToJSON(found),
        };
      }
    );
    ```
  • The Zod input schema for tokencost_get_model_pricing, defining a single 'model' parameter as a non-empty string with a description.
    ```typescript
    inputSchema: {
      model: z.string().min(1).describe("Model ID or name (e.g., 'gpt-5', 'claude-sonnet-4.6')"),
    },
    ```
  • The ModelPricing TypeScript interface that defines the structure of model pricing data, including id, name, provider, inputPer1M, outputPer1M, contextWindow, maxOutput, and optional notes.
    ```typescript
    export interface ModelPricing {
      id: string;
      name: string;
      provider: string;
      inputPer1M: number;
      outputPer1M: number;
      contextWindow: number;
      maxOutput: number;
      notes?: string;
    }
    ```
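For concreteness, a record conforming to this interface might look like the following sketch (the interface is repeated so the snippet stands alone; all figures are illustrative placeholders, not quoted vendor pricing):

```typescript
// Interface repeated from above so this snippet is self-contained.
interface ModelPricing {
  id: string;
  name: string;
  provider: string;
  inputPer1M: number;   // USD per 1M input tokens
  outputPer1M: number;  // USD per 1M output tokens
  contextWindow: number;
  maxOutput: number;
  notes?: string;
}

// Hypothetical entry; the numbers are placeholders, not real rates.
const example: ModelPricing = {
  id: "gpt-5",
  name: "GPT-5",
  provider: "OpenAI",
  inputPer1M: 1.25,
  outputPer1M: 10.0,
  contextWindow: 400_000,
  maxOutput: 128_000,
};
```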
  • The findModel() and findModels() helper functions used by the handler to search for models by ID/name. findModel returns the first match, findModels returns all matching models.
    ```typescript
    export function findModel(query: string): ModelPricing | undefined {
      const q = query.toLowerCase();
      return models.find(m => m.id === q || m.name.toLowerCase() === q) ??
        models.find(m => m.id.includes(q) || m.name.toLowerCase().includes(q));
    }

    export function findModels(query: string): ModelPricing[] {
      const q = query.toLowerCase();
      return models.filter(m =>
        m.id.includes(q) || m.name.toLowerCase().includes(q) || m.provider.toLowerCase().includes(q)
      );
    }
    ```
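The exact-match-before-substring ordering in findModel can be seen with a small in-memory table (the two entries below are made up for illustration; pricing fields are dummy values):

```typescript
interface ModelPricing {
  id: string; name: string; provider: string;
  inputPer1M: number; outputPer1M: number;
  contextWindow: number; maxOutput: number; notes?: string;
}

// Tiny stand-in for the real pricing table (values are placeholders).
const models: ModelPricing[] = [
  { id: "gpt-5",      name: "GPT-5",      provider: "OpenAI", inputPer1M: 1, outputPer1M: 2, contextWindow: 1, maxOutput: 1 },
  { id: "gpt-5-mini", name: "GPT-5 Mini", provider: "OpenAI", inputPer1M: 1, outputPer1M: 2, contextWindow: 1, maxOutput: 1 },
];

// Same lookup logic as the helpers above.
function findModel(query: string): ModelPricing | undefined {
  const q = query.toLowerCase();
  return models.find(m => m.id === q || m.name.toLowerCase() === q) ??
    models.find(m => m.id.includes(q) || m.name.toLowerCase().includes(q));
}

function findModels(query: string): ModelPricing[] {
  const q = query.toLowerCase();
  return models.filter(m =>
    m.id.includes(q) || m.name.toLowerCase().includes(q) || m.provider.toLowerCase().includes(q)
  );
}

// "gpt-5" is a substring of both ids, but the exact-match pass wins:
const exact = findModel("gpt-5");   // → entry with id "gpt-5", not "gpt-5-mini"
// A partial query falls through to the substring pass:
const partial = findModel("mini");  // → entry with id "gpt-5-mini"
// findModels returns every id/name/provider substring match:
const all = findModels("gpt");      // → both entries
```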
