Metrx MCP Server

by metrxbots

Server Configuration

Environment variables used to configure and run the server.

Name           Required  Description                                                          Default
METRX_API_KEY  Yes       Your Metrx API key (get one free at metrxbot.com/settings/security)  (none)
METRX_API_URL  No        Override the API base URL                                            https://metrxbot.com/api/v1
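As a sketch, the documented resolution of these two variables might look like the following (the `resolve_config` helper and its mapping-based interface are illustrative, not part of the server's actual code):

```python
import os

def resolve_config(env=os.environ):
    """Resolve Metrx MCP server settings from an environment mapping (illustrative helper)."""
    if "METRX_API_KEY" not in env:
        raise RuntimeError("METRX_API_KEY is required")
    return {
        "api_key": env["METRX_API_KEY"],
        # Optional override; falls back to the documented default base URL.
        "api_url": env.get("METRX_API_URL", "https://metrxbot.com/api/v1"),
    }

cfg = resolve_config({"METRX_API_KEY": "mk-example"})
```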

Capabilities

Features and capabilities supported by this server

Capability  Details
tools       { "listChanged": true }
prompts     { "listChanged": true }

Tools

Functions the LLM can call to take actions

metrx_get_cost_summary

Get a comprehensive cost summary for your AI agent fleet. Returns total spend, call counts, error rates, agent breakdown, revenue attribution (if available), and optimization opportunities. Use this as the starting point for understanding your agent economics. Do NOT use for real-time per-request cost checking — use OpenTelemetry spans for that.

metrx_list_agents

List all AI agents in your organization with their status, category, and cost. Optionally filter by status or category. Returns agent IDs needed for other tools. Do NOT use for detailed per-agent analysis — use get_agent_detail for that.

metrx_get_agent_detail

Get detailed information about a specific agent including its model, framework, category, outcome configuration, and failure risk score. Do NOT use for fleet-wide overviews — use get_cost_summary instead.

metrx_get_optimization_recommendations

Get AI-powered cost optimization recommendations for a specific agent or your entire fleet. Returns actionable suggestions including model switching, token guardrails, provider arbitrage, batch processing opportunities, and revenue intelligence insights. Each suggestion includes estimated monthly savings and confidence level. Do NOT use for implementing fixes — use apply_optimization for one-click fixes or create_model_experiment to validate first.

metrx_apply_optimization

Apply a one-click optimization recommendation to an agent. Only works for suggestions marked as "one_click: true". Common optimizations include setting max_tokens limits and switching models. Do NOT use for unvalidated changes — run create_model_experiment first if unsure about impact.

metrx_route_model

Get a model routing recommendation for a specific task based on complexity. Uses the agent's historical performance data and cost analysis to suggest the optimal model for each task complexity level. Helps reduce costs by routing simple tasks to cheaper models while keeping complex tasks on premium models. Do NOT use for comparing all models at once — use compare_models for static pricing.
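The routing idea reduces to a complexity-to-model lookup; the tier names and model names below are placeholders for illustration, not values the tool actually returns:

```python
# Hypothetical complexity tiers mapped to hypothetical model names.
ROUTING_TABLE = {
    "simple": "cheap-model",
    "moderate": "mid-tier-model",
    "complex": "premium-model",
}

def route_model(task_complexity: str) -> str:
    """Pick a model for a task; default to the premium model when the tier is unknown."""
    return ROUTING_TABLE.get(task_complexity, "premium-model")
```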

metrx_compare_models

Compare LLM model pricing and capabilities across providers. Returns pricing per 1M tokens, context window sizes, batch/cache support, and cost savings estimates for switching from a current model to alternatives. Works without any usage data (Day 0 value). Do NOT use for agent-specific recommendations — use get_optimization_recommendations which factors in actual usage patterns.
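The savings estimate behind a model switch is per-1M-token arithmetic; the prices and token volumes below are illustrative placeholders, not real provider pricing:

```python
def monthly_cost(price_in_per_m: float, price_out_per_m: float,
                 input_tokens_m: float, output_tokens_m: float) -> float:
    """Monthly spend in dollars, given $-per-1M-token prices and monthly token volumes."""
    return price_in_per_m * input_tokens_m + price_out_per_m * output_tokens_m

# Illustrative numbers: 100M input / 20M output tokens per month.
current = monthly_cost(10.0, 30.0, 100, 20)     # $1600
alternative = monthly_cost(1.0, 3.0, 100, 20)   # $160
savings = current - alternative                 # $1440
```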

metrx_get_budget_status

Get the current status of all budget configurations. Shows spending vs limits, warning/exceeded counts, and enforcement modes. Use this to monitor spending governance across your agent fleet. Do NOT use for creating/changing budgets — use set_budget or update_budget_mode.

metrx_set_budget

Create or update a budget configuration for an agent or the entire organization. Budgets enforce spending limits with configurable enforcement modes: "alert_only" (notify but don't block), "soft_block" (block with override), or "hard_block" (strict enforcement). Specify limits in dollars. Do NOT use just to change enforcement mode — use update_budget_mode for that.
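The three enforcement modes can be summarized with a small decision sketch; the function and its override flag are illustrative, not the server's API:

```python
def budget_decision(mode: str, spend_usd: float, limit_usd: float,
                    override: bool = False) -> tuple:
    """Return (request_allowed, alert_raised) per the documented enforcement modes."""
    if spend_usd <= limit_usd:
        return True, False
    if mode == "alert_only":      # notify but don't block
        return True, True
    if mode == "soft_block":      # block unless explicitly overridden
        return override, True
    return False, True            # hard_block: strict enforcement
```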

metrx_update_budget_mode

Change the enforcement mode of an existing budget or pause/resume it. Use "alert_only" for monitoring, "soft_block" for overridable limits, or "hard_block" for strict enforcement. Do NOT use to create new budgets — use set_budget for that.

metrx_get_alerts

Get active alerts and notifications for your agent fleet. Includes cost spikes, error rate increases, budget warnings, and system health notifications. Optionally filter by severity. Do NOT use for configuring alert triggers — use configure_alert_threshold for that.

metrx_acknowledge_alert

Mark one or more alerts as read/acknowledged. This removes them from the unread alerts list but preserves them in history. Do NOT use for resolving the underlying issue — take action on the alert first.

metrx_get_failure_predictions

Get predictive failure analysis for your agents. Shows upcoming risk of error rate breaches, latency degradation, cost overruns, rate limit risks, and budget exhaustion. Each prediction includes confidence level and recommended actions. Do NOT use for current/past failures — use get_alerts for active issues.

metrx_create_model_experiment

Start an A/B test comparing two LLM models for a specific agent. Routes a percentage of traffic to the treatment model and tracks cost, latency, error rate, and quality metrics. The experiment runs until statistical significance is reached or the max duration expires. Do NOT use for one-off model comparisons — use compare_models for static pricing data.

metrx_get_experiment_results

Get the current results of a model routing experiment. Shows sample counts, metric comparisons, statistical significance, and the current winner (if determined). Do NOT use for starting experiments — use create_model_experiment.
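Significance for a metric like error rate is typically assessed with a two-proportion test; the z-test below is a standard formulation offered as a sketch of the idea, not necessarily the method the server uses:

```python
import math

def two_proportion_z(errors_a: int, n_a: int, errors_b: int, n_b: int) -> float:
    """z-statistic for the difference between two error rates (pooled standard error)."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# |z| > 1.96 corresponds to significance at roughly the 95% level.
z = two_proportion_z(40, 1000, 20, 1000)
```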

metrx_stop_experiment

Stop a running model routing experiment. The experiment results are preserved. If the treatment model won, you can optionally promote it as the new default. Do NOT use for pausing experiments temporarily — stopping is permanent.

metrx_run_cost_leak_scan

Run a comprehensive cost leak audit across your entire agent fleet. Identifies 7 types of cost inefficiencies: idle agents, model overprovisioning, missing caching, high error rates, context bloat, missing budgets, and cross-provider arbitrage opportunities (covers anthropic, cohere, google, mistral, openai). Returns a scored report with fix recommendations and estimated monthly savings. Supports output_format="json" for machine-readable output in CI/CD pipelines. Do NOT use as a continuous monitoring loop — use configure_alert_threshold for ongoing monitoring. Do NOT use for fixing leaks — use apply_optimization for one-click fixes.

metrx_attribute_task

Link an agent task/event to a business outcome for ROI tracking. This creates a mapping between agent actions and measurable business results. Do NOT use for reading attribution data — use get_attribution_report or get_task_roi.

metrx_get_task_roi

Calculate return on investment for an agent. Shows total costs (LLM API calls), total outcomes (attributed business value), ROI multiplier, and breakdown by model and outcome type. Useful for identifying which agents generate the most value per dollar spent. Do NOT use for fleet-wide ROI — use generate_roi_audit for that.
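The ROI multiplier is attributed business value divided by spend; a minimal sketch, with the function name assumed for illustration:

```python
def roi_multiplier(outcome_value_usd: float, cost_usd: float) -> float:
    """Attributed business value generated per dollar of LLM spend."""
    if cost_usd == 0:
        return float("inf")
    return outcome_value_usd / cost_usd

# An agent that cost $250 in API calls and drove $1,000 of attributed value has 4x ROI.
```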

metrx_get_attribution_report

Get attribution report showing which agent actions led to business outcomes. Shows outcome counts, total values, confidence scores, and top contributing agents. Do NOT use for board-level reporting — use generate_roi_audit for formal audit reports.

metrx_get_upgrade_justification

Generate an ROI report explaining why an upgrade from Free to Lite/Pro tier makes sense. Analyzes current usage patterns, calculates optimization potential at higher tiers, and provides a structured upgrade recommendation with projected monthly savings. Do NOT use if already on Lite or Pro tier — not relevant for paid users.

metrx_configure_alert_threshold

Set up cost or operational alert thresholds for a specific agent or org-wide. Alerts can trigger email notifications, webhooks, or automatically pause the agent. Use for real-time cost governance and operational safety. Thresholds run server-side automatically. Do NOT use for viewing current alerts — use get_alerts instead.

metrx_generate_roi_audit

Generate a comprehensive ROI audit report for your AI agent fleet. Includes per-agent cost/revenue breakdown, attribution confidence scores, optimization opportunities, and risk flags. Suitable for board reporting and compliance. Do NOT use for quick per-agent ROI checks — use get_task_roi for individual agents.

Prompts

Interactive templates invoked by user choice

analyze-costs

Get a comprehensive overview of your AI agent costs including spend breakdown, top-spending agents, error rates, and optimization opportunities.

find-savings

Discover optimization opportunities across your AI agent fleet. Identifies model downgrades, caching opportunities, and routing improvements.

cost-leak-scan

Scan for waste patterns in your AI agent operations — retry storms, oversized contexts, model mismatch, and missing caching.

Resources

Contextual data attached and managed by the client

No resources
