The Metrx MCP Server connects MCP-compatible AI agents to Metrx, an AI Agent Cost Intelligence Platform, so they can track, optimize, and govern LLM costs and ROI across the agent fleet.

**Cost Visibility & Dashboards**
- Get comprehensive cost summaries (total spend, call counts, error rates) for up to 90 days
- List all agents with status, category, and cost metrics
- Drill into per-agent details including model, framework, and performance history

**Cost Optimization**
- Receive AI-powered recommendations (model switching, token guardrails, provider arbitrage, batch processing)
- Apply one-click optimization fixes to individual agents
- Get model routing recommendations based on task complexity
- Compare LLM pricing and capabilities across providers (OpenAI, Anthropic, Google, etc.)

**Budget Management**
- Monitor budget status and spending vs. limits across your fleet
- Create or update budgets with daily/monthly periods and enforcement modes (alert-only, soft block, hard block)
- Pause, resume, or change enforcement modes on existing budgets

**Alerts & Predictions**
- Retrieve active alerts for cost spikes, error rates, and budget warnings (filterable by severity)
- Acknowledge or dismiss alerts
- Get predictive failure analysis that identifies at-risk agents before issues occur
- Configure automated alert thresholds that trigger email, webhook, or auto-pause actions

**A/B Model Experiments**
- Run A/B tests comparing two LLM models with configurable traffic splitting
- Monitor results including statistical significance, cost delta, and current winner
- Stop experiments and optionally promote the winning model

**Cost Leak Detection**
- Run a 7-check audit that identifies inefficiencies: idle agents, model overprovisioning, missing caching, high error rates, context bloat, missing budgets, and cross-provider arbitrage opportunities
- Output as human-readable Markdown or machine-readable JSON for CI/CD pipelines

**ROI & Revenue Attribution**
- Link agent actions to business outcomes (revenue, cost savings, efficiency, quality) from sources like Stripe, HubSpot, and Zendesk
- Calculate per-agent ROI and generate multi-source attribution reports with confidence scores
- Produce board-ready ROI audit reports for the full fleet

**Upgrade Justification**
- Generate ROI reports demonstrating potential savings from upgrading to higher service tiers
# Metrx MCP Server

**Your AI agents are wasting money. Metrx finds out how much, and fixes it.**
The official MCP server for Metrx — the AI Agent Cost Intelligence Platform. Give any MCP-compatible agent (Claude, GPT, Gemini, Cursor, Windsurf) the ability to track its own costs, detect waste, optimize model selection, and prove ROI.
## Why Metrx?
| Problem | What Metrx Does |
| --- | --- |
| No visibility into agent spend | Real-time cost dashboards per agent, model, and provider |
| Overpaying for LLM calls | Provider arbitrage finds cheaper models for the same task |
| Runaway costs | Budget enforcement with auto-pause when limits are hit |
| Wasted tokens | Cost leak scanner detects retry storms, context bloat, model mismatch |
| Can't prove AI ROI | Revenue attribution links agent actions to business outcomes |
## Quick Start

### Try it now — no signup required
```shell
npx @metrxbot/mcp-server --demo
```

This starts the server with sample data so you can explore all 23 tools instantly.
### Connect your real data

**Option A — Interactive login (recommended):**
```shell
npx @metrxbot/mcp-server --auth
```

Opens your browser to get an API key, validates it, and saves it to `~/.metrxrc` so you never need to set env vars.
**Option B — Environment variable:**
```shell
METRX_API_KEY=sk_live_your_key_here npx @metrxbot/mcp-server --test
```

Get your free API key at app.metrxbot.com/sign-up.
### Add to your MCP client (Claude Desktop, Cursor, Windsurf)

If you used `--auth`, no env block is needed — the key is read from `~/.metrxrc` automatically:
```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}
```

Or pass the key explicitly via environment:
```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"],
      "env": {
        "METRX_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```

### Remote HTTP endpoint
For remote agents (no local install needed):
```http
POST https://metrxbot.com/api/mcp
Authorization: Bearer sk_live_your_key_here
Content-Type: application/json
```

### From npm
```shell
npm install @metrxbot/mcp-server
```

## 23 Tools Across 10 Domains
### Dashboard (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_get_cost_summary` | Comprehensive cost summary — total spend, call counts, error rates, and optimization opportunities |
| | List all agents with status, category, cost metrics, and health indicators |
| | Detailed agent info including model, framework, cost breakdown, and performance history |
### Optimization (4 tools)

| Tool | Description |
| --- | --- |
| | AI-powered cost optimization recommendations per agent or fleet-wide |
| | One-click apply an optimization recommendation to an agent |
| | Model routing recommendation for a specific task based on complexity |
| `metrx_compare_models` | Compare LLM model pricing and capabilities across providers |
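Working from per-1M-token prices like those the comparison tool returns, the per-call arithmetic is straightforward. A minimal TypeScript sketch, using the illustrative prices from the example output in this README (not live rates):

```typescript
// Per-1M-token prices, taken from the illustrative figures in this README
// (not live rates).
interface ModelPrice {
  inputPerM: number;
  outputPerM: number;
}

const PRICES: Record<string, ModelPrice> = {
  "gpt-4o": { inputPerM: 2.5, outputPerM: 10.0 },
  "claude-3-5-sonnet": { inputPerM: 3.0, outputPerM: 15.0 },
  "gemini-1.5-pro": { inputPerM: 3.5, outputPerM: 10.5 },
};

// Cost of a single call given its token counts.
function callCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  return (inputTokens / 1_000_000) * p.inputPerM + (outputTokens / 1_000_000) * p.outputPerM;
}
```

At these rates, a call with 1,000 input and 500 output tokens on gpt-4o works out to $0.0075.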
### Budgets (3 tools)

| Tool | Description |
| --- | --- |
| | Current status of all budget configurations with spend vs. limits |
| | Create or update a budget with hard, soft, or monitor enforcement |
| | Change enforcement mode of an existing budget, or pause/resume it |
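To make the enforcement modes concrete, here is a hedged sketch of how a budget decision could work; the names and the soft-block carve-out for critical calls are illustrative assumptions, not Metrx's actual logic:

```typescript
type EnforcementMode = "alert-only" | "soft-block" | "hard-block";
type BudgetDecision = "allow" | "alert" | "block";

// Decide what happens to the next call once spend is measured against the limit.
// The soft-block carve-out for "critical" calls is an assumption for illustration.
function enforceBudget(
  spend: number,
  limit: number,
  mode: EnforcementMode,
  critical = false,
): BudgetDecision {
  if (spend < limit) return "allow";
  if (mode === "alert-only") return "alert"; // notify, keep running
  if (mode === "soft-block") return critical ? "alert" : "block";
  return "block"; // hard-block: nothing runs until the budget is raised
}
```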
### Alerts (3 tools)

| Tool | Description |
| --- | --- |
| | Active alerts and notifications for your agent fleet |
| | Mark one or more alerts as read/acknowledged |
| | Predictive failure analysis — identify agents likely to fail before it happens |
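As a rough picture of how configured thresholds turn into alerts, consider the following sketch; the types and field names here are illustrative assumptions, not the server's schema:

```typescript
type Severity = "info" | "warning" | "critical";
type AlertAction = "email" | "webhook" | "auto-pause";

interface AlertThreshold {
  metric: string; // e.g. "error_rate" or "daily_spend" (hypothetical names)
  limit: number;
  severity: Severity;
  action: AlertAction;
}

interface FiredAlert {
  metric: string;
  value: number;
  severity: Severity;
  action: AlertAction;
}

// Compare observed metrics against configured thresholds and emit an
// alert for every breach.
function evaluateThresholds(
  metrics: Record<string, number>,
  thresholds: AlertThreshold[],
): FiredAlert[] {
  return thresholds
    .filter((t) => (metrics[t.metric] ?? 0) > t.limit)
    .map((t) => ({
      metric: t.metric,
      value: metrics[t.metric],
      severity: t.severity,
      action: t.action,
    }));
}
```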
### Experiments (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_create_model_experiment` | Start an A/B test comparing two LLM models with traffic splitting |
| | Statistical significance, cost delta, and recommended action |
| | Stop a running model routing experiment and lock in the winner |
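One common way to implement a traffic split is a stable hash of the request id, so each request lands deterministically on one side. A sketch under that assumption; Metrx's real routing logic isn't documented here:

```typescript
// Deterministically route a request to model A or B from a stable hash of
// its id, so the same request always lands on the same side of the split.
// Illustrative only; not Metrx's actual implementation.
function routeModel(
  requestId: string,
  modelA: string,
  modelB: string,
  percentToB: number,
): string {
  let h = 0;
  for (const ch of requestId) {
    h = (h * 31 + ch.codePointAt(0)!) >>> 0;
  }
  return h % 100 < percentToB ? modelB : modelA;
}
```

A deterministic split also makes experiments reproducible: replaying the same request ids reproduces the same assignment.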
### Cost Leak Detector (1 tool)

| Tool | Description |
| --- | --- |
| | Comprehensive 7-check cost leak audit across your entire agent fleet |
### Attribution (3 tools)

| Tool | Description |
| --- | --- |
| | Link agent actions to business outcomes for ROI tracking |
| | Calculate return on investment for an agent — costs vs. attributed outcomes |
| | Multi-source attribution report with confidence scores and top contributors |
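The per-agent ROI calculation can be pictured with the standard formula; this sketch ignores the confidence weighting the attribution reports apply:

```typescript
// ROI as (attributed value - cost) / cost, expressed as a percentage.
// A simplification: real attribution also weights outcomes by confidence.
function roiPercent(attributedValue: number, cost: number): number {
  if (cost <= 0) throw new Error("cost must be positive");
  return ((attributedValue - cost) / cost) * 100;
}
```

For example, $500 of attributed revenue on $100 of agent spend is a 400% ROI.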
### Alert Configuration (1 tool)

| Tool | Description |
| --- | --- |
| | Set cost or operational alert thresholds with email, webhook, or auto-pause |
### ROI Audit (1 tool)

| Tool | Description |
| --- | --- |
| | Board-ready ROI audit report for your AI agent fleet |
### Upgrade Justification (1 tool)

| Tool | Description |
| --- | --- |
| | ROI report for tier upgrades based on current usage patterns |
## Prompts

Pre-built prompt templates for common workflows:

| Prompt | Description |
| --- | --- |
| | Comprehensive cost overview — spend breakdown, top agents, optimization opportunities |
| | Discover optimization opportunities — model downgrades, caching, routing |
| | Scan for waste patterns — retry storms, oversized contexts, model mismatch |
## Examples

### "How much am I spending?"
```
User: What was my AI cost this week?
→ metrx_get_cost_summary(period_days=7)

Total Spend: $234.56 | Calls: 2,450 | Error Rate: 0.2%
├── customer-support: $156.23 (1,800 calls)
└── code-generator: $78.33 (650 calls)

💡 Switch customer-support from GPT-4 to Claude Sonnet: Save $42/week
```

### "Find me savings"
```
User: Am I overpaying for my agents?
→ metrx_compare_models(models=["gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"])

Model Comparison (per 1M tokens):
├── gpt-4o: $2.50 in / $10.00 out
├── claude-3-5-sonnet: $3.00 in / $15.00 out
└── gemini-1.5-pro: $3.50 in / $10.50 out
```

### "Test a cheaper model"
```
User: Test Claude 3.5 Sonnet against my GPT-4 setup
→ metrx_create_model_experiment(agent_id="agent_123",
    model_a="gpt-4o", model_b="claude-3-5-sonnet-20241022", traffic_split=10)

Experiment started: 90% GPT-4o, 10% Claude 3.5 Sonnet
Check back in 14 days for statistical significance.
```

## Companion Tool: Cost Leak Detector
This repo also includes @metrxbot/cost-leak-detector — a free, offline CLI that scans your LLM API logs for wasted spend. No signup, no cloud, no data leaves your machine.
```shell
npx @metrxbot/cost-leak-detector demo
```

It runs 7 checks (idle agents, premium model overuse, missing caching, high error rates, context overflow, no budgets, arbitrage opportunities) and gives you a scored report in seconds. See the full docs.
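To give a flavor of what one of the seven checks does, idle-agent detection might look like the following; the data shape and the 7-day cutoff are illustrative assumptions, not the detector's real code:

```typescript
interface AgentActivity {
  agentId: string;
  lastCallAt: Date;
  monthlyCost: number;
}

// Flag agents that still accrue cost but haven't made a call in `idleDays`.
function findIdleAgents(
  agents: AgentActivity[],
  now: Date,
  idleDays = 7,
): AgentActivity[] {
  const cutoff = now.getTime() - idleDays * 24 * 60 * 60 * 1000;
  return agents.filter((a) => a.lastCallAt.getTime() < cutoff && a.monthlyCost > 0);
}
```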
## Configuration

### API Key (required)
The server looks for your API key in this order:
1. `METRX_API_KEY` environment variable
2. `~/.metrxrc` file (created by `--auth`)
Run npx @metrxbot/mcp-server --auth to save your key, or set the env var directly.
| Variable | Required | Description |
| --- | --- | --- |
| `METRX_API_KEY` | Yes* | Your Metrx API key (get one free) |
| | No | Override the API base URL |

\*Not required if you've run `--auth` — the key is read from `~/.metrxrc` automatically.
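The lookup order above can be sketched in a few lines; note that the plain-text read of `~/.metrxrc` is an assumption, since the file format written by `--auth` isn't specified here:

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Resolve the API key in the documented order: env var first, then ~/.metrxrc.
// Assumption: the file stores the key as plain text.
function resolveApiKey(
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  if (env.METRX_API_KEY) return env.METRX_API_KEY;
  try {
    return readFileSync(join(homedir(), ".metrxrc"), "utf8").trim() || undefined;
  } catch {
    return undefined; // no saved key
  }
}
```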
### CLI Flags

| Flag | Description |
| --- | --- |
| `--demo` | Start with sample data — no API key or signup needed |
| `--auth` | Interactive login — opens browser, validates key, saves to `~/.metrxrc` |
| `--test` | Verify your API key and connection |
## Rate Limiting
60 requests per minute per tool. For higher limits, contact support@metrxbot.com.
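Client-side, spacing calls about a second apart keeps a single tool under that limit. A minimal throttle sketch (illustrative; the server enforces the real limit):

```typescript
// Space outgoing calls at least `minIntervalMs` apart; 1,000 ms keeps a
// single tool safely under the 60 requests/minute limit.
function createThrottle(minIntervalMs = 1000) {
  let nextSlot = 0;
  return async function throttled<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + minIntervalMs;
    if (wait > 0) await new Promise((r) => setTimeout(r, wait));
    return fn();
  };
}
```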
## Development

```shell
git clone https://github.com/metrxbots/mcp-server.git
cd mcp-server
npm install
npm run typecheck
npm test
```

## Contributing
See CONTRIBUTING.md for guidelines.
## Links

- Website: metrxbot.com
- Docs: docs.metrxbot.com
- npm: `@metrxbot/mcp-server`
- Smithery: metrxbot/mcp-server
- Support: support@metrxbot.com
## A Note on Naming
The product is Metrx (metrxbot.com). The npm scope is @metrxbot and the Smithery listing is metrxbot/mcp-server. The GitHub organization is metrxbots (with an s) because metrxbot was already taken on GitHub. If you see metrxbot vs metrxbots across platforms, they're the same project — just a GitHub namespace constraint.
## License
MIT — see LICENSE.
## 💬 Feedback
Did Metrx work for you? We'd love to hear it — good or bad.
- GitHub Discussions: start a thread — questions, ideas, what you're building
- Bug reports: open an issue
- Quick feedback: drop a comment on our Product Hunt listing
If you installed but hit a snag, tell us what happened — we read every report.