# tokentoll
Catch LLM cost changes in code review. Infracost for LLM spend.
A CLI tool and GitHub Action that statically analyzes your code for LLM API calls, estimates their cost, and shows you the cost impact of every change in your terminal or as a PR comment. Zero runtime dependencies.
## The Problem
- A single model swap from `gpt-4o-mini` to `gpt-4o` increases costs 15x.
- A new API call in a hot path can add $10,000/month to your bill.
- These changes hide in normal code review.
tokentoll finds LLM API calls in your code, estimates their cost, and shows you the cost impact of every change before it hits production.
## Quick Start
```bash
pip install tokentoll

# Scan current directory for LLM API calls and their costs
tokentoll scan .

# Show cost impact of your last commit
tokentoll diff HEAD~1

# Compare two branches
tokentoll diff main..feature-branch
```

## GitHub Action
```yaml
name: LLM Cost Diff

on:
  pull_request:
    paths:
      - "**.py"

permissions:
  pull-requests: write

jobs:
  cost-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: Jwrede/tokentoll@v0.6.1
```

## What It Detects
| SDK | Patterns | Status |
| --- | --- | --- |
| OpenAI | `chat.completions.create`, `responses.create` | Supported |
| Anthropic | `messages.create` | Supported |
| Google GenAI | `models.generate_content` | Supported |
| LiteLLM | `completion` | Supported |
| LangChain | `ChatOpenAI`, `ChatAnthropic`, `init_chat_model` | Supported |
| Zhipu AI | `chat.completions.create` | Supported |
| JS/TS SDKs | | Planned |
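As a rough illustration, these are the call shapes the detectors target. The snippets below are generic SDK usage written for this README, not tokentoll's own fixtures, and the model names are placeholders:

```python
# Illustrative call shapes only; the detectors may match more variants.
from openai import OpenAI
from anthropic import Anthropic
import litellm

messages = [{"role": "user", "content": "Summarize this document."}]

# OpenAI: chat.completions.create
OpenAI().chat.completions.create(model="gpt-4o-mini", messages=messages)

# Anthropic: messages.create
Anthropic().messages.create(
    model="claude-3-5-haiku-latest", max_tokens=1024, messages=messages
)

# LiteLLM: completion
litellm.completion(model="gpt-4o-mini", messages=messages)
```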
## Example Output
### `tokentoll scan`

```
LLM API Calls Detected
============================================================
File: src/agents/summarizer.py
Line 42: openai client.chat.completions.create
Model: gpt-4o | Max tokens: 4096
Est. cost/call: $0.03 | Monthly (1000 calls/month per call site): $26.50
Line 78: openai client.chat.completions.create
Model: gpt-4o-mini | Max tokens: 1000
Est. cost/call: $0.000301 | Monthly (1000 calls/month per call site): $0.30
--
Total estimated monthly cost: $26.80
1000 calls/month per call site
```

### `tokentoll diff`
```
LLM Cost Diff: main..feature-branch
============================================================
+ ADDED src/agents/rewriter.py:35
  openai | Model: gpt-4o
  Est. cost/call: $0.03 | Monthly: +$26.50
~ MODIFIED src/agents/summarizer.py:42
  openai | Model: gpt-4o -> gpt-4o-mini
  Est. cost/call: $0.03 -> $0.000301 | Monthly: -$26.20
--
Monthly cost impact: +$0.30
Added: 1 | Changed: 1 | Removed: 0
1000 calls/month per call site
```

## How It Works
```
Source Code (.py files)
        |
        v
+-------------+      +------------------+
| AST Scanner |----->| SDK Detectors    |
| (ast.parse) |      | OpenAI, Anthropic|
+-------------+      | Google, LiteLLM  |
                     | LangChain        |
                     +------------------+
                              |
                              v
                     +------------------+
                     | Pricing Engine   |
                     |  2200+ models    |
                     |  Auto-cached     |
                     +------------------+
                              |
                  +-----------+-----------+
                  |                       |
                  v                       v
           +------------+         +--------------+
           | Scan Report|         | Diff Engine  |
           | (costs)    |         | (old vs new) |
           +------------+         +--------------+
                  |                       |
                  v                       v
           +------------+         +--------------+
           | Table/JSON |         | Table/JSON/  |
           |            |         | PR Comment   |
           +------------+         +--------------+
```

- Parses Python files using the `ast` module to find LLM API calls
- Multi-pass constant propagation resolves model names through variables, `os.getenv()` fallbacks, class attributes, constructor args, dict contents, and `**kwargs` unpacking
- Looks up pricing from a local cache (sourced from LiteLLM, 2200+ models)
- For diff mode: compares calls between two git refs and computes the cost delta
- Outputs a cost report as a table, JSON, or GitHub PR comment
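For a feel of the first stage: a toy scanner that finds `chat.completions.create` calls and reads a literal `model` kwarg can be written in a few lines of `ast`. This is a sketch of the approach, not tokentoll's actual detector:

```python
import ast

def find_openai_calls(source: str):
    """Yield (line, model) for literal chat.completions.create calls."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Match <anything>.chat.completions.create(...)
        func = node.func
        if (
            isinstance(func, ast.Attribute)
            and func.attr == "create"
            and isinstance(func.value, ast.Attribute)
            and func.value.attr == "completions"
            and isinstance(func.value.value, ast.Attribute)
            and func.value.value.attr == "chat"
        ):
            model = next(
                (kw.value.value for kw in node.keywords
                 if kw.arg == "model" and isinstance(kw.value, ast.Constant)),
                None,  # dynamic model: tokentoll would fall back to a default
            )
            yield node.lineno, model

src = 'client.chat.completions.create(model="gpt-4o", messages=[])'
print(list(find_openai_calls(src)))  # [(1, 'gpt-4o')]
```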
## CLI Reference
```
tokentoll scan [PATH...] [--format table|json|markdown] [--calls-per-month N] [--config PATH]
tokentoll diff [REF] [--base REF] [--head REF] [--format table|json|markdown|github-comment] [--config PATH]
tokentoll update   # Update bundled pricing data
```

## MCP Server
tokentoll includes an MCP (Model Context Protocol) server that lets Claude Code and other MCP hosts check the cost impact of LLM code changes directly from an agent conversation.
### Install
```bash
pip install tokentoll[mcp]
```

### Register with Claude Code
```bash
claude mcp add --transport stdio tokentoll -- tokentoll-mcp
```

### Tools
| Tool | Description |
| --- | --- |
| `scan` | Find LLM API calls in a directory and estimate monthly costs. Accepts a path and an optional calls-per-month assumption. |
| `diff` | Compare LLM costs between two git refs. Accepts base and head refs. |
Both tools return JSON output.
### Example use case
Claude Code can check the cost impact of its own changes before committing. For example, after swapping a model from `gpt-4o` to `gpt-4o-mini`, the agent can call the diff tool against `HEAD` to verify the cost reduction before creating the commit.
## Pricing Data
Pricing is bundled and works offline. To update to the latest prices:

```bash
tokentoll update
```

Pricing data is sourced from LiteLLM's `model_prices_and_context_window.json` and covers 2200+ models across OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more.
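The per-call arithmetic behind the estimates is simple: token counts times per-token prices on each side of the call. A sketch with assumed numbers (the prices below are placeholders, not tokentoll's bundled data):

```python
# Assumed per-million-token prices; real values come from the bundled
# LiteLLM-sourced pricing data, not from this dict.
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}  # USD per 1M tokens

def cost_per_call(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. ~2,000 prompt tokens and a 2,048-token completion:
per_call = cost_per_call("gpt-4o", 2_000, 2_048)
print(f"${per_call:.4f}/call, ${per_call * 1000:.2f}/month at 1000 calls")
```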
## Dynamic Model Defaults
When tokentoll encounters a call where the model name is a variable it cannot resolve, it applies a sensible per-SDK default so you still get cost estimates:
| SDK | Default Model |
| --- | --- |
| OpenAI | `gpt-4o` |
| Anthropic | |
| Google GenAI | |
| LiteLLM | |
| LangChain | |
| Zhipu AI | |
These defaults are shown as `gpt-4o (default)` in scan output. You can override them per-project or per-path using a `.tokentoll.yml` config file (see below).
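For example, a model name that only exists at runtime is invisible to static analysis, so the default applies (a sketch; `config.json` is a hypothetical file):

```python
import json
from openai import OpenAI

client = OpenAI()

# The model name lives in an external file, so static analysis cannot see it.
model = json.load(open("config.json"))["model"]
client.chat.completions.create(model=model, messages=[])
# tokentoll would report this call as "gpt-4o (default)" unless a
# .tokentoll.yml override or skip_dynamic_models applies.
```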
## Configuration
Create a `.tokentoll.yml` in your project root to customize behavior. tokentoll automatically finds this file by walking up from the scanned directory.
```yaml
# Default model for all dynamic (unresolved) calls
default_model: gpt-4o

# Per-SDK defaults (override the built-in defaults above)
default_models:
  openai: gpt-4o-mini
  anthropic: claude-haiku-3-20240307

# Assumed calls per month per call site
calls_per_month: 5000

# Skip cost estimation entirely for dynamic (unresolved) models. When true,
# calls whose model name cannot be resolved statically are reported with no
# cost rather than priced against a default. Useful for projects that prefer
# silence over a guess.
skip_dynamic_models: false

# Exclude paths from scanning (prefix match or glob pattern)
exclude:
  - tests/
  - examples/
  - docs/
  - "*_test.py"

# Per-path overrides (longest prefix match)
overrides:
  - path: src/agents/
    default_model: gpt-4o
    calls_per_month: 10000
  - path: src/azure/
    skip_dynamic_models: true
```

Resolution order for dynamic model defaults: per-SDK config (`default_models`) > generic config (`default_model`) > built-in SDK defaults.
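For example, with the config above an unresolved OpenAI call is priced as `gpt-4o-mini` (its `default_models` entry), while an unresolved LiteLLM call falls back to the generic `default_model`, `gpt-4o`.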
You can also pass `--config path/to/.tokentoll.yml` to use a specific config file.
## Token Estimation
By default, tokentoll estimates token counts using a characters/4 heuristic. For more accurate estimates, install tiktoken:
```bash
pip install tiktoken
```

When tiktoken is available, tokentoll uses the correct tokenizer encoding for each model. Unknown models fall back to `cl100k_base`. Tiktoken is lazy-loaded and encoders are cached, so there is no startup penalty if you don't need it.
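Sketched side by side, the two estimation paths look roughly like this (a simplification of the behavior described above, not tokentoll's internals):

```python
def estimate_tokens(text: str, model: str = "gpt-4o") -> int:
    try:
        import tiktoken  # optional; lazy import mirrors the lazy-loading above
        try:
            enc = tiktoken.encoding_for_model(model)
        except KeyError:
            enc = tiktoken.get_encoding("cl100k_base")  # unknown-model fallback
        return len(enc.encode(text))
    except ImportError:
        return max(1, len(text) // 4)  # characters/4 heuristic

print(estimate_tokens("Summarize the attached quarterly report."))
```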
## Smart Variable Resolution
Real codebases rarely pass model names as string literals. tokentoll's multi-pass constant propagation engine follows chains like this:
```python
import os

DEFAULT_MODEL = os.getenv("MODEL", "gpt-4o")

class Config:
    model: str = DEFAULT_MODEL

config = Config()
kwargs = {"model": config.model, "max_tokens": 2000}
client.chat.completions.create(**kwargs)
# tokentoll resolves: model="gpt-4o", max_tokens=2000
```

It resolves:

- Variable assignments (`MODEL = "gpt-4o"`)
- `os.getenv()` / `os.environ.get()` fallback values
- Function default parameters
- Class attribute defaults
- Constructor argument propagation
- Dict literal and subscript contents
- `**kwargs` unpacking
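A minimal flavor of such a propagation pass, sweeping the module until no new constants resolve (a drastically simplified sketch, nothing like tokentoll's full engine):

```python
import ast

def propagate_constants(source: str) -> dict:
    """Map simple variable names to string constants, resolving chains."""
    tree = ast.parse(source)
    env: dict[str, str] = {}
    changed = True
    while changed:  # multi-pass: later passes resolve names bound to earlier names
        changed = False
        for node in ast.walk(tree):
            if isinstance(node, ast.Assign) and len(node.targets) == 1:
                target = node.targets[0]
                if not isinstance(target, ast.Name) or target.id in env:
                    continue
                value = node.value
                if isinstance(value, ast.Constant) and isinstance(value.value, str):
                    env[target.id] = value.value
                    changed = True
                elif isinstance(value, ast.Name) and value.id in env:
                    env[target.id] = env[value.id]
                    changed = True
    return env

src = "BASE = 'gpt-4o'\nMODEL = BASE\nactive = MODEL"
print(propagate_constants(src))
# {'BASE': 'gpt-4o', 'MODEL': 'gpt-4o', 'active': 'gpt-4o'}
```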
## Roadmap
- **Context-aware call frequency** (planned): infer calls/month from surrounding code (FastAPI route handlers = high traffic, scripts = low, loops = multiplied) instead of assuming uniform volume across all call sites.
- **JS/TS support** (planned): detect LLM calls in JavaScript and TypeScript files.
- **Cost alerts**: configurable thresholds that fail CI when a PR exceeds a cost delta.
## Limitations
- Cannot resolve models loaded from external config files or databases at runtime. These calls use per-SDK defaults (configurable via `.tokentoll.yml`).
- Token estimates use a characters/4 heuristic unless tiktoken is installed.
- Monthly estimates assume uniform call volume per call site (configurable via `--calls-per-month`, `.tokentoll.yml`, or per-path overrides). Use the `exclude` option to skip test and example files.
- Python only for now (JS/TS support planned).
## License
MIT