Token Counter
`count_tokens` — count tokens in text across multiple LLM models to check context-window usage and compare costs before sending to an LLM.
Instructions
Counts tokens for any text across multiple LLM models and returns per-model cost estimates. Use it before sending text to an LLM to check context-window usage or to compare costs across models. Supports GPT-4o, GPT-4, GPT-3.5-turbo, Claude 3.5 Sonnet, Claude 3 Opus, and 10+ more.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to count tokens for | |
| models | No | Models to count tokens for | |
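The workflow the tool supports can be sketched without any model-specific tokenizer: exact counts require each model's own tokenizer (e.g. tiktoken for GPT models, Anthropic's token-counting API for Claude), but a ~4-characters-per-token rule of thumb for English text is enough to illustrate the count-then-estimate-cost flow. The model names and per-token prices below are illustrative placeholders, not real rates.

```python
# Hypothetical per-million-input-token USD rates, for illustration only.
PRICE_PER_1M_INPUT_TOKENS = {
    "gpt-4o": 2.50,
    "claude-3-5-sonnet": 3.00,
}

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def estimate_costs(text: str) -> dict:
    """Per-model input-cost estimates (USD) for the given text."""
    n = estimate_tokens(text)
    return {model: n * price / 1_000_000
            for model, price in PRICE_PER_1M_INPUT_TOKENS.items()}

text = "Compare context-window usage and cost before sending this to an LLM."
print(estimate_tokens(text), "tokens (approx.)")
print(estimate_costs(text))
```

In a real implementation each model would map to its actual tokenizer and current pricing; the heuristic here only stands in so the comparison logic is visible.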