
count_tokens

Count tokens in text across multiple LLM models to check context window usage and compare costs before sending to an LLM.

Instructions

Count tokens for any text across multiple LLM models and get per-model cost estimates. Use before sending text to an LLM to check context window usage or compare costs across models. Supports GPT-4o, GPT-4, GPT-3.5-turbo, Claude 3.5 Sonnet, Claude 3 Opus, and 10+ more.

Input Schema

| Name   | Required | Description                  | Default                             |
| ------ | -------- | ---------------------------- | ----------------------------------- |
| text   | Yes      | The text to count tokens for | (none)                              |
| models | No       | Models to count tokens for   | `["gpt-4o", "claude-3-5-sonnet"]`   |
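
A client invokes this tool with a standard MCP `tools/call` JSON-RPC request whose `arguments` follow the schema above. A minimal sketch of the request body (the `text` value is illustrative):

```typescript
// Sketch of an MCP `tools/call` request for count_tokens.
// The text and id values are illustrative, not from the source.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "count_tokens",
    arguments: {
      text: "Hello, world!",
      // Omitting `models` falls back to the schema default shown above.
      models: ["gpt-4o", "claude-3-5-sonnet"],
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```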

Implementation Reference

  • The "count_tokens" tool is registered here. It uses the helper function `callToolApi` to forward the request to the `token-counter` endpoint of the Agent Toolbelt API.
    // ----- Tool: Token Counter -----
    server.registerTool(
      "count_tokens",
      {
        title: "Token Counter",
        description:
          "Count tokens for any text across multiple LLM models and get per-model cost estimates. " +
          "Use before sending text to an LLM to check context window usage or compare costs across models. " +
          "Supports GPT-4o, GPT-4, GPT-3.5-turbo, Claude 3.5 Sonnet, Claude 3 Opus, and 10+ more.",
        inputSchema: {
          text: z.string().describe("The text to count tokens for"),
          models: z
            .array(z.string())
            .default(["gpt-4o", "claude-3-5-sonnet"])
            .describe("Models to count tokens for"),
        },
      },
      async ({ text, models }) => {
        const result = await callToolApi("token-counter", { text, models });
        const data = result as any;
        const r = data.result;
    
        const lines = [
          `**Characters:** ${r.characterCount.toLocaleString()}`,
          `**Words:** ${r.wordCount.toLocaleString()}`,
          "",
          "**Token counts:**",
        ];
    
        for (const [model, info] of Object.entries(r.results) as any) {
          const approx = info.approximate ? " (approx)" : "";
          const cost = info.estimatedCost
            ? ` | input ~$${info.estimatedCost.input} / output ~$${info.estimatedCost.output}`
            : "";
          lines.push(`  ${model}: **${info.tokens.toLocaleString()} tokens**${approx}${cost}`);
        }
    
        return { content: [{ type: "text" as const, text: lines.join("\n") }] };
      }
    );
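
The exact response shape of the `token-counter` endpoint is not documented here, but the handler's field accesses imply roughly the structure below. This is a self-contained sketch of that assumed shape and of the same formatting loop, with hypothetical sample values for illustration:

```typescript
// Assumed response shape, inferred from the handler's field accesses
// (characterCount, wordCount, results[model].tokens, etc.).
interface ModelResult {
  tokens: number;
  approximate?: boolean;
  estimatedCost?: { input: number; output: number };
}

interface TokenCounterResult {
  characterCount: number;
  wordCount: number;
  results: Record<string, ModelResult>;
}

// Mirrors the formatting loop in the registered handler.
function formatTokenReport(r: TokenCounterResult): string {
  const lines = [
    `**Characters:** ${r.characterCount.toLocaleString()}`,
    `**Words:** ${r.wordCount.toLocaleString()}`,
    "",
    "**Token counts:**",
  ];
  for (const [model, info] of Object.entries(r.results)) {
    const approx = info.approximate ? " (approx)" : "";
    const cost = info.estimatedCost
      ? ` | input ~$${info.estimatedCost.input} / output ~$${info.estimatedCost.output}`
      : "";
    lines.push(`  ${model}: **${info.tokens.toLocaleString()} tokens**${approx}${cost}`);
  }
  return lines.join("\n");
}

// Hypothetical sample response; the numbers are illustrative only.
const sample: TokenCounterResult = {
  characterCount: 1234,
  wordCount: 210,
  results: {
    "gpt-4o": { tokens: 280, estimatedCost: { input: 0.0007, output: 0.0028 } },
    "claude-3-5-sonnet": { tokens: 295, approximate: true },
  },
};

console.log(formatTokenReport(sample));
```

Typing the response this way would also let the handler drop its `as any` casts in favor of a checked shape.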


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marras0914/agent-toolbelt'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.