# tokencost_find_cheapest
Compare LLM models and identify the most cost-effective ones by filtering on provider and minimum context window, then sorting by input, output, or combined pricing.
## Instructions
Find the cheapest LLM models, optionally filtered by provider or minimum context window.
**Args:**
- `provider` (string, optional): Filter by provider (e.g., "OpenAI", "Anthropic", "Google")
- `min_context` (number, optional): Minimum context window size in tokens
- `sort_by` (string, optional): Sort by "input", "output", or "combined" cost (default: "combined")
- `limit` (number, optional): Number of results to return (default: 10, max: 30)

**Returns:** Ranked list of cheapest models with pricing details.

**Examples:**
- `{}` → Top 10 cheapest models overall
- `{ provider: "OpenAI" }` → Cheapest OpenAI models
- `{ min_context: 200000, sort_by: "input" }` → Cheapest 200K+ context models by input price
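The filter-then-sort behavior behind these examples can be sketched standalone. The model names and prices below are made-up sample data for illustration, not TokenCost's live dataset:

```typescript
// Minimal sketch of the tool's ranking logic on hypothetical sample data.
interface ModelPricing {
  name: string;
  provider: string;
  inputPer1M: number;   // USD per 1M input tokens
  outputPer1M: number;  // USD per 1M output tokens
  contextWindow: number;
}

const sample: ModelPricing[] = [
  { name: "model-a", provider: "OpenAI",    inputPer1M: 0.15, outputPer1M: 0.60, contextWindow: 128000 },
  { name: "model-b", provider: "Anthropic", inputPer1M: 0.25, outputPer1M: 1.25, contextWindow: 200000 },
  { name: "model-c", provider: "Google",    inputPer1M: 0.10, outputPer1M: 0.40, contextWindow: 1000000 },
];

// { min_context: 200000, sort_by: "input" }:
// keep models with a 200K+ context window, then rank by input price.
const cheapest = sample
  .filter(m => m.contextWindow >= 200000)
  .sort((a, b) => a.inputPer1M - b.inputPer1M);

console.log(cheapest.map(m => m.name)); // [ 'model-c', 'model-b' ]
```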
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| provider | No | Filter by provider name (e.g., 'OpenAI', 'Anthropic') | |
| min_context | No | Minimum context window size in tokens | |
| sort_by | No | Sort by input, output, or combined cost | combined |
| limit | No | Number of results to return | 10 |
## Implementation Reference
- **src/index.ts:295-348 (handler)**: The handler for `tokencost_find_cheapest`. It filters models by `provider` and `min_context`, sorts by input, output, or combined cost, and returns a ranked list of the cheapest models as both a markdown table and structured JSON.
```typescript
async ({ provider, min_context, sort_by, limit }) => {
  let filtered = [...models];

  if (provider) {
    const p = provider.toLowerCase();
    filtered = filtered.filter(m => m.provider.toLowerCase() === p);
    if (filtered.length === 0) {
      return {
        content: [{ type: "text", text: `No models found for provider "${provider}". Available: ${providers.join(", ")}` }],
      };
    }
  }

  if (min_context) {
    filtered = filtered.filter(m => m.contextWindow >= min_context);
    if (filtered.length === 0) {
      return {
        content: [{ type: "text", text: `No models with context window >= ${min_context.toLocaleString()} tokens.` }],
      };
    }
  }

  const sortFn =
    sort_by === "input"
      ? (a: ModelPricing, b: ModelPricing) => a.inputPer1M - b.inputPer1M
      : sort_by === "output"
        ? (a: ModelPricing, b: ModelPricing) => a.outputPer1M - b.outputPer1M
        : (a: ModelPricing, b: ModelPricing) =>
            (a.inputPer1M + a.outputPer1M) - (b.inputPer1M + b.outputPer1M);
  filtered.sort(sortFn);
  const top = filtered.slice(0, limit);

  const lines = [
    `# Cheapest Models${provider ? ` (${provider})` : ""}${min_context ? ` — ${formatContext(min_context)}+ context` : ""}`,
    `*Sorted by ${sort_by} cost*`,
    "",
    "| # | Model | Provider | Input/1M | Output/1M | Context |",
    "|---|-------|----------|----------|-----------|---------|",
  ];
  top.forEach((m, i) => {
    lines.push(`| ${i + 1} | ${m.name} | ${m.provider} | $${m.inputPer1M} | $${m.outputPer1M} | ${formatContext(m.contextWindow)} |`);
  });
  lines.push("", `*${filtered.length} total models matched. Data from [TokenCost](https://tokencost.app)*`);

  return {
    content: [{ type: "text", text: lines.join("\n") }],
    structuredContent: {
      models: top.map(modelToJSON),
      total_matched: filtered.length,
      filters: { provider: provider ?? null, min_context: min_context ?? null, sort_by },
    },
  };
}
```

- **src/index.ts:263-294 (schema)**: Tool registration and schema definition for `tokencost_find_cheapest`: title, description, and an `inputSchema` with `provider` (optional string), `min_context` (optional non-negative integer), `sort_by` (enum: input/output/combined, default "combined"), and `limit` (1-30, default 10).
server.registerTool( "tokencost_find_cheapest", { title: "Find Cheapest Models", description: `Find the cheapest LLM models, optionally filtered by provider or minimum context window. Args: - provider (string, optional): Filter by provider (e.g., "OpenAI", "Anthropic", "Google") - min_context (number, optional): Minimum context window size in tokens - sort_by (string, optional): Sort by "input", "output", or "combined" cost (default: "combined") - limit (number, optional): Number of results to return (default: 10, max: 30) Returns: Ranked list of cheapest models with pricing details. Examples: - {} → Top 10 cheapest models overall - { provider: "OpenAI" } → Cheapest OpenAI models - { min_context: 200000, sort_by: "input" } → Cheapest 200K+ context models by input price`, inputSchema: { provider: z.string().optional().describe("Filter by provider name (e.g., 'OpenAI', 'Anthropic')"), min_context: z.number().int().min(0).optional().describe("Minimum context window size in tokens"), sort_by: z.enum(["input", "output", "combined"]).default("combined").describe("Sort by input, output, or combined cost"), limit: z.number().int().min(1).max(30).default(10).describe("Number of results to return"), }, annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false, }, }, - src/index.ts:30-33 (helper)formatContext helper function used by the handler to format context window sizes in human-readable format (K/M tokens).
```typescript
// e.g. formatContext(200000) → "200K", formatContext(1000000) → "1.0M"
function formatContext(n: number): string {
  if (n >= 1000000) return `${(n / 1000000).toFixed(1)}M`;
  return `${(n / 1000).toFixed(0)}K`;
}
```

- **src/index.ts:35-46 (helper)**: `modelToJSON`, used by the handler to convert `ModelPricing` objects to snake_case JSON for structured output.
```typescript
function modelToJSON(m: ModelPricing) {
  return {
    id: m.id,
    name: m.name,
    provider: m.provider,
    input_per_1m_tokens: m.inputPer1M,
    output_per_1m_tokens: m.outputPer1M,
    context_window: m.contextWindow,
    max_output: m.maxOutput,
    ...(m.notes ? { notes: m.notes } : {}),
  };
}
```
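To see the shape this helper produces, here is a hypothetical model run through the same conversion (the sample values are made up). Note that the conditional spread omits `notes` entirely when the field is undefined, rather than emitting `notes: undefined`:

```typescript
// Self-contained copy of modelToJSON with a made-up sample model.
interface ModelPricing {
  id: string;
  name: string;
  provider: string;
  inputPer1M: number;
  outputPer1M: number;
  contextWindow: number;
  maxOutput: number;
  notes?: string;
}

function modelToJSON(m: ModelPricing) {
  return {
    id: m.id,
    name: m.name,
    provider: m.provider,
    input_per_1m_tokens: m.inputPer1M,
    output_per_1m_tokens: m.outputPer1M,
    context_window: m.contextWindow,
    max_output: m.maxOutput,
    ...(m.notes ? { notes: m.notes } : {}), // key dropped when notes is absent
  };
}

const json = modelToJSON({
  id: "model-a", name: "Model A", provider: "OpenAI",
  inputPer1M: 0.15, outputPer1M: 0.6, contextWindow: 128000, maxOutput: 16384,
});
console.log("notes" in json); // false
```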