openclaw-cost-tracker-mcp

by temurkhan13

Token-cost telemetry for OpenClaw, queryable from any Claude or MCP-aware agent. Per-agent + per-provider attribution, anomaly detection on cost spikes, model-routing recommendations, 30-day forecast. Companion to silentwatch-mcp and openclaw-health-mcp.

Status: v1.0.0 · License: MIT · MCP · PyPI


What it does

Production AI deployments rack up token costs across multiple providers, dozens of agents, and hundreds of skill calls. The bill arrives at month-end, and by then the answer to "where did our money go?" is buried in 100k log lines. This MCP server surfaces that data live, queryable in plain English from inside Claude:

> claude: where did our LLM spend go this week?
[MCP tools: cost_overview + top_cost_drivers]

Total spend last 7 days: $42.18
By provider:
  Anthropic    $30.40 (72%) — claude-sonnet-4 dominates
  OpenAI       $9.20  (22%) — chat-bot agent
  Gemini       $1.86  (4%)  — cron-summarizer (cheap-route working)
  Ollama       $0.00  (local, free)

Top 3 cost drivers:
  data-extraction-agent      $28.50 (68%)
  chat-bot                   $9.20  (22%)
  cron-summarizer            $1.86  (4%)

1 anomaly flagged — see find_cost_anomalies for details.
> claude: any cheaper-routing opportunities?
[MCP tool: model_routing_recommendations]

Recommendation: data-extraction-agent currently runs claude-sonnet-4
with avg 400-token completions — extraction-style work that
gemini-2.5-flash usually handles at ~95% quality for ~5% the cost.
Estimated savings: $27.10/30d if migrated. Test on a 10% slice first.

Why openclaw-cost-tracker-mcp

Three things existing tools (provider billing dashboards, generic FinOps tools, custom scripts) don't do:

  1. Per-agent attribution, not just per-provider totals. Provider dashboards show "$X spent on Anthropic" — they can't tell you which of your six agents drove 78% of that. This server reads OpenClaw's per-request cost-log JSONL and aggregates with the agent_id intact.

  2. Cost-spike anomaly detection per agent. A single 120k-token paste into chat costs more than a week of normal traffic. The default 3x-median-per-agent threshold flags those before they show up in the month-end bill.

  3. Routing recommendations grounded in actual usage. Generic "use cheaper models" advice is useless. This server identifies specific agents whose volume + completion-length pattern suggests a cheaper provider would deliver the same outcome, with concrete 30-day savings estimates.
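The 3x-median-per-agent anomaly rule described above can be sketched in a few lines. This is an illustrative reimplementation, not the server's actual code; the field names follow the JSONL log format documented below, and the `multiplier` default mirrors the 3x threshold:

```python
# Sketch of the per-agent 3x-median anomaly rule (illustrative, not the
# server's actual implementation). Entries are dicts shaped like the
# JSONL log records: at minimum "agent_id" and "cost_usd".
from collections import defaultdict
from statistics import median

def find_cost_anomalies(entries, multiplier=3.0):
    """Flag requests costing more than `multiplier` x their agent's median."""
    costs_by_agent = defaultdict(list)
    for e in entries:
        costs_by_agent[e["agent_id"]].append(e["cost_usd"])

    anomalies = []
    for e in entries:
        agent_median = median(costs_by_agent[e["agent_id"]])
        if agent_median > 0 and e["cost_usd"] > multiplier * agent_median:
            anomalies.append({**e, "agent_median": agent_median})
    return anomalies
```

A single 120k-token paste shows up immediately under this rule: one $0.50 request among $0.01–$0.02 requests is far above 3x that agent's median.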

Built for the operator running OpenClaw in production, with real spend that matters.


Tool surface

| Tool | What it returns |
| --- | --- |
| `cost_overview` | Total spend + by-provider + top agents + top models + anomaly count for a window |
| `costs_by_agent` | Per-agent breakdown with avg cost per request + share of total |
| `costs_by_provider` | Per-provider breakdown with token counts |
| `find_cost_anomalies` | Requests flagged at 3x+ their agent's median cost |
| `top_cost_drivers` | Top N spending agents + models, no other noise |
| `model_routing_recommendations` | Specific cheaper-model suggestions with 30-day savings estimates |
| `forecast_monthly_cost` | Projected 30-day total + per-provider breakdown with a confidence note |
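The shape of `forecast_monthly_cost`'s projection can be sketched as a naive extrapolation from recent daily spend. The actual tool's method and confidence heuristic are not specified here; this is an assumption-level illustration:

```python
# Naive 30-day cost projection (a sketch; the real tool's forecasting
# method and confidence note are not documented here).
def forecast_monthly_cost(daily_costs_usd, horizon_days=30):
    """Extrapolate total spend from the average of recent daily costs."""
    if not daily_costs_usd:
        return 0.0
    daily_avg = sum(daily_costs_usd) / len(daily_costs_usd)
    return round(daily_avg * horizon_days, 2)
```

With a week of daily totals as input, this yields the kind of 30-day figure quoted in the routing-recommendation example above.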

Resources:

  • cost://overview — 7-day snapshot

  • cost://forecast — 30-day projection

  • cost://anomalies — recent flagged anomalies

Prompts:

  • diagnose-cost-spike — walk a recent spike to its root cause + corrective action

  • weekly-cost-digest — ~200-word summary of the week's spend


Quickstart

Install

pip install openclaw-cost-tracker-mcp

Configure for Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "openclaw-cost": {
      "command": "python",
      "args": ["-m", "openclaw_cost_tracker_mcp"],
      "env": {
        "OPENCLAW_COST_BACKEND": "mock"
      }
    }
  }
}

Backends

| Backend | Status | Description |
| --- | --- | --- |
| `mock` | ✅ v1.0 | Sample data with deliberate anomalies + routing opportunities for protocol verification |
| `openclaw-jsonl` | ✅ v1.0 | Parses OpenClaw's native cost-log JSONL files (default `~/.openclaw/cost-logs/`, configurable via `OPENCLAW_COST_LOGS`) |
| `provider-direct` | ⏳ v1.1 | Reads Anthropic + OpenAI billing APIs directly (no log shim required) |

JSONL log format

Each line is one JSON record:

{"request_id":"req-abc123","timestamp":"2026-05-04T12:34:56Z","provider":"anthropic","model":"claude-sonnet-4","agent_id":"data-extraction-agent","skill_id":"extract-structured-data","prompt_tokens":8500,"completion_tokens":600,"total_tokens":9100,"cost_usd":0.0345,"duration_ms":4823}

If your OpenClaw deployment doesn't emit this format, wrap your provider calls with a small logging shim — sample shim in examples/.
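A minimal shim might look like the sketch below. It matches the record format above; the per-day file naming and the `log_dir` default are assumptions (the repository's `examples/` directory holds the canonical version):

```python
# Hypothetical logging shim producing records in the JSONL format above.
# The per-day file naming and default directory are assumptions; see the
# sample shim in examples/ for the canonical version.
import json
import time
import uuid
from datetime import datetime, timezone
from pathlib import Path

DEFAULT_LOG_DIR = Path.home() / ".openclaw" / "cost-logs"

def log_request(provider, model, agent_id, skill_id,
                prompt_tokens, completion_tokens, cost_usd, duration_ms,
                log_dir=DEFAULT_LOG_DIR):
    """Append one cost record to today's JSONL file and return it."""
    log_dir = Path(log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "request_id": f"req-{uuid.uuid4().hex[:8]}",
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "provider": provider,
        "model": model,
        "agent_id": agent_id,
        "skill_id": skill_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "cost_usd": cost_usd,
        "duration_ms": duration_ms,
    }
    with open(log_dir / f"{time.strftime('%Y-%m-%d')}.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Call `log_request(...)` right after each provider response, passing the token counts and cost the provider reports.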


Roadmap

| Version | Scope | Status |
| --- | --- | --- |
| v1.0 | mock + openclaw-jsonl backends, 7 tools / 3 resources / 2 prompts, anomaly detection + routing + forecast, GitHub Actions CI matrix, PyPI Trusted Publishing | ✅ shipped |
| v1.1 | provider-direct backend (Anthropic + OpenAI billing API integrations) | ⏳ planned |
| v1.2 | Backend federation; budget alerts + threshold breach detection | ⏳ planned |
| v1.x | Per-channel cost attribution; webhook emitter for budget alerts | ⏳ planned |


Need this adapted to your stack?

If your AI deployment doesn't use OpenClaw's cost-log format — different agent harness, custom logging, AWS Bedrock metering, vendor billing API — and you want the same attribution + anomaly + routing visibility, that's a Custom MCP Build engagement.

| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Simple | Single backend adapter for your existing cost-data source | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom anomaly rules + integration with your alerting | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend federation + budget enforcement + custom routing logic | $25,000–$35,000 | 4–8 weeks |

To engage:

  1. Email temur@pixelette.tech with subject Custom MCP Build inquiry

  2. Include: a 1-paragraph description of your stack + which tier you're considering

  3. Expect a reply within 2 business days with a 30-minute discovery-call slot

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection) and openclaw-health-mcp (deployment health). Install all three for full operational visibility.


Production AI audits

If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (cost overruns being one of the most common), and write the corrective-action plan:

| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |

Same email channel: temur@pixelette.tech with subject AI audit inquiry.


Contributing

PRs welcome. Backends are pluggable — see src/openclaw_cost_tracker_mcp/backends/ for the contract.

To add a new backend:

  1. Subclass CostBackend in backends/<your_backend>.py

  2. Implement get_entries()

  3. Register in backends/__init__.py

  4. Add tests in tests/test_backend_<your_backend>.py
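The four steps above can be sketched as follows. The `CostBackend` base class and `get_entries()` contract come from the README; the CSV source, field mapping, and the inline stand-in base class are illustrative assumptions — check `src/openclaw_cost_tracker_mcp/backends/` for the real contract:

```python
# Illustrative new backend following the contributing steps above.
# The real CostBackend lives in src/openclaw_cost_tracker_mcp/backends/;
# this stand-in and the CSV source are assumptions for the sketch.
import csv

class CostBackend:
    """Stand-in for the pluggable backend base class."""
    def get_entries(self):
        raise NotImplementedError

class CsvCostBackend(CostBackend):
    """Reads cost records from a CSV export instead of JSONL."""
    def __init__(self, path):
        self.path = path

    def get_entries(self):
        # Yield dicts shaped like the JSONL log records (subset shown)
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):
                yield {
                    "agent_id": row["agent_id"],
                    "provider": row["provider"],
                    "model": row["model"],
                    "cost_usd": float(row["cost_usd"]),
                }
```

Keeping `get_entries()` a generator means large cost logs never need to fit in memory at once.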

Bug reports + feature requests: open a GitHub issue.


License

MIT — see LICENSE.



Built by Temur Khan — independent practitioner on production AI systems. Contact: temur@pixelette.tech
