# openclaw-cost-tracker-mcp
Token-cost telemetry for OpenClaw, queryable from any Claude or MCP-aware agent. Per-agent + per-provider attribution, anomaly detection on cost spikes, model-routing recommendations, 30-day forecast. Companion to silentwatch-mcp and openclaw-health-mcp.
## What it does
Production AI deployments rack up token cost across multiple providers, dozens of agents, hundreds of skill calls. The bill arrives at month-end — and by then, the answer to "where did our money go?" is buried in 100k log lines. This MCP server surfaces that data live, queryable in plain English from inside Claude:
```
> claude: where did our LLM spend go this week?

[MCP tools: cost_overview + top_cost_drivers]

Total spend last 7 days: $42.18

By provider:
  Anthropic  $30.40  (72%)  — claude-sonnet-4 dominates
  OpenAI      $9.20  (22%)  — chat-bot agent
  Gemini      $1.86   (4%)  — cron-summarizer (cheap-route working)
  Ollama      $0.00   (local, free)

Top 3 cost drivers:
  data-extraction-agent  $28.50  (68%)
  chat-bot                $9.20  (22%)
  cron-summarizer         $1.86   (4%)

1 anomaly flagged — see find_cost_anomalies for details.
```

```
> claude: any cheaper-routing opportunities?

[MCP tool: model_routing_recommendations]

Recommendation: data-extraction-agent currently runs claude-sonnet-4
with avg 400-token completions — extraction-style work that
gemini-2.5-flash usually handles at ~95% quality for ~5% the cost.
Estimated savings: $27.10/30d if migrated. Test on a 10% slice first.
```

## Why openclaw-cost-tracker-mcp
Three things existing tools (provider billing dashboards, generic FinOps tools, custom scripts) don't do:

1. **Per-agent attribution, not just per-provider totals.** Provider dashboards show "$X spent on Anthropic" — they can't tell you which of your six agents drove 78% of that. Cost tracker reads OpenClaw's per-request cost-log JSONL and aggregates with the `agent_id` intact.
2. **Cost-spike anomaly detection per agent.** A single 120k-token paste into chat costs more than a week of normal traffic. The default 3x-median-per-agent threshold flags those before they show up in the month-end bill.
3. **Routing recommendations grounded in actual usage.** Generic "use cheaper models" advice is useless. This server identifies specific agents whose volume + completion-length pattern suggests a cheaper provider would deliver the same outcome, with concrete 30-day savings estimates.
Built for the operator running OpenClaw in production with real spend that matters.
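The 3x-median rule is simple enough to sketch. The snippet below is an illustrative re-implementation, not the server's actual code — the function name merely mirrors the `find_cost_anomalies` tool, and it operates on records shaped like the cost-log JSONL entries:

```python
from collections import defaultdict
from statistics import median

def find_cost_anomalies(entries, threshold=3.0):
    """Flag requests costing more than `threshold` x their agent's median.

    `entries` is a list of dicts with at least `agent_id` and `cost_usd`,
    the same fields OpenClaw's cost-log JSONL records carry.
    """
    by_agent = defaultdict(list)
    for e in entries:
        by_agent[e["agent_id"]].append(e["cost_usd"])
    medians = {agent: median(costs) for agent, costs in by_agent.items()}
    return [
        e for e in entries
        if e["cost_usd"] > threshold * medians[e["agent_id"]]
    ]

entries = [
    {"agent_id": "chat-bot", "cost_usd": 0.01},
    {"agent_id": "chat-bot", "cost_usd": 0.012},
    {"agent_id": "chat-bot", "cost_usd": 0.09},  # big paste: ~8x the median
]
print(find_cost_anomalies(entries))
# → [{'agent_id': 'chat-bot', 'cost_usd': 0.09}]
```

A per-agent median (rather than a global one) is what keeps a naturally expensive extraction agent from drowning out a spike in a normally cheap chat agent.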
## Tool surface
| Tool | What it returns |
|---|---|
| `cost_overview` | Total spend + by-provider + top agents + top models + anomaly count for a window |
| | Per-agent breakdown with avg-cost-per-request + share of total |
| | Per-provider breakdown with token counts |
| `find_cost_anomalies` | Requests flagged as 3x+ above their agent's median cost |
| `top_cost_drivers` | Top N spending agents + models, no other noise |
| `model_routing_recommendations` | Specific cheaper-model suggestions with 30d savings estimates |
| | Projects 30-day total + per-provider with confidence note |
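For a rough sense of where a 30-day savings estimate can come from, here is a deliberately simplified sketch — `estimate_routing_savings` is a hypothetical helper, not the server's actual heuristic:

```python
def estimate_routing_savings(spend_30d_usd, cheap_cost_ratio):
    """Estimated 30-day savings if an agent's traffic moves to a model
    priced at `cheap_cost_ratio` (e.g. 0.05 = ~5%) of the current one."""
    return round(spend_30d_usd * (1.0 - cheap_cost_ratio), 2)

# An agent spending $28.50 per 30 days, re-routed to a model at ~5% the cost:
print(estimate_routing_savings(28.50, 0.05))  # roughly $27/30d saved
```

The real recommendation logic also has to gate on usage patterns (volume, completion length, task type) before this arithmetic is meaningful.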
Resources:

- `cost://overview` — 7-day snapshot
- `cost://forecast` — 30-day projection
- `cost://anomalies` — recent flagged anomalies
Prompts:

- `diagnose-cost-spike` — walk a recent spike to its root cause + corrective action
- `weekly-cost-digest` — 200-word weekly cost digest
## Quickstart

### Install

```bash
pip install openclaw-cost-tracker-mcp
```

### Configure for Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "openclaw-cost": {
      "command": "python",
      "args": ["-m", "openclaw_cost_tracker_mcp"],
      "env": {
        "OPENCLAW_COST_BACKEND": "mock"
      }
    }
  }
}
```

## Backends
| Backend | Status | Description |
|---|---|---|
| `mock` | ✅ v1.0 | Sample data with deliberate anomalies + routing opportunities for protocol verification |
| `openclaw-jsonl` | ✅ v1.0 | Parses OpenClaw's native cost-log JSONL files (default …) |
| | ⏳ v1.1 | Reads Anthropic + OpenAI billing APIs directly (no log shim required) |
## JSONL log format

Each line is one JSON record:

```json
{"request_id": "req-abc123", "timestamp": "2026-05-04T12:34:56Z", "provider": "anthropic", "model": "claude-sonnet-4", "agent_id": "data-extraction-agent", "skill_id": "extract-structured-data", "prompt_tokens": 8500, "completion_tokens": 600, "total_tokens": 9100, "cost_usd": 0.0345, "duration_ms": 4823}
```

If your OpenClaw deployment doesn't emit this format, wrap your provider calls with a small logging shim — sample shim in `examples/`.
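A minimal shim along these lines might look like the following. This is illustrative only — `log_cost` and the call-site details are assumptions, and the shipped sample in `examples/` may differ:

```python
import json
import time
import uuid
from datetime import datetime, timezone

def log_cost(log_path, provider, model, agent_id, skill_id,
             prompt_tokens, completion_tokens, cost_usd, duration_ms):
    """Append one cost record, in the JSONL format above, to `log_path`."""
    record = {
        "request_id": f"req-{uuid.uuid4().hex[:8]}",
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "provider": provider,
        "model": model,
        "agent_id": agent_id,
        "skill_id": skill_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "cost_usd": cost_usd,
        "duration_ms": duration_ms,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Wrap a provider call: time it, then log the usage your SDK reports.
start = time.monotonic()
# response = client.messages.create(...)   # your actual provider call here
duration_ms = int((time.monotonic() - start) * 1000)
log_cost("cost-log.jsonl", "anthropic", "claude-sonnet-4",
         "data-extraction-agent", "extract-structured-data",
         prompt_tokens=8500, completion_tokens=600,
         cost_usd=0.0345, duration_ms=duration_ms)
```

Append-only JSONL keeps the shim crash-safe: a partially written last line is the worst case, and a reader can simply skip it.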
## Roadmap

| Version | Scope | Status |
|---|---|---|
| v1.0 | `mock` + `openclaw-jsonl` backends; 7 tools / 3 resources / 2 prompts; anomaly detection + routing + forecast; GitHub Actions CI matrix; PyPI Trusted Publishing | ✅ |
| v1.1 | Direct billing-API backend (Anthropic + OpenAI) | ⏳ |
| v1.2 | Backend federation; budget alerts + threshold breach detection | ⏳ |
| v1.x | Per-channel cost attribution; webhook emitter for budget alerts | ⏳ |
## Need this adapted to your stack?
If your AI deployment doesn't use OpenClaw's cost-log format — different agent harness, custom logging, AWS Bedrock metering, vendor billing API — and you want the same attribution + anomaly + routing visibility, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for your existing cost-data source | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom anomaly rules + integration with your alerting | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend federation + budget enforcement + custom routing logic | $25,000–$35,000 | 4–8 weeks |
To engage:

1. Email temur@pixelette.tech with subject `Custom MCP Build inquiry`
2. Include a 1-paragraph description of your stack + which tier you're considering
3. You'll get a reply within 2 business days with a 30-min discovery call slot
This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection) and openclaw-health-mcp (deployment health). Install all three for full operational visibility.
## Production AI audits
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (cost overruns being one of the most common), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
## Contributing

PRs welcome. Backends are pluggable — see `src/openclaw_cost_tracker_mcp/backends/` for the contract.
To add a new backend:

1. Subclass `CostBackend` in `backends/<your_backend>.py`
2. Implement `get_entries()`
3. Register in `backends/__init__.py`
4. Add tests in `tests/test_backend_<your_backend>.py`
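A hypothetical sketch of those steps, assuming `CostBackend` exposes little more than `get_entries()` and that entries mirror the JSONL record fields — check `backends/` for the real contract before copying this:

```python
import csv

class CostBackend:
    """Assumed shape of the base class; the real one lives in backends/."""
    def get_entries(self):
        raise NotImplementedError

class CSVBackend(CostBackend):
    """Illustrative backend reading cost records from a CSV export."""
    def __init__(self, path):
        self.path = path

    def get_entries(self):
        # Yield one dict per row, coercing cost to a float so downstream
        # aggregation (medians, totals) can do arithmetic on it.
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):
                row["cost_usd"] = float(row["cost_usd"])
                yield row
```

A generator keeps memory flat even when the underlying cost log has hundreds of thousands of rows.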
Bug reports + feature requests: open a GitHub issue.
## License
MIT — see LICENSE.
## Related
- silentwatch-mcp — cron silent-failure detection
- openclaw-health-mcp — deployment health
- AI Production Discipline Framework — Notion template, $29 — the methodology these MCP tools implement (cost overruns are pattern P3.x in the catalog)
- SPEC.md — full server design
Built by Temur Khan — independent practitioner focused on production AI systems. Contact: temur@pixelette.tech