
Metrx MCP Server

by metrxbots

Generate ROI Audit Report

metrx_generate_roi_audit
Read-only · Idempotent

Generate comprehensive ROI audit reports for AI agent fleets with cost/revenue breakdowns, optimization opportunities, and compliance-ready documentation.

Instructions

Generate a comprehensive ROI audit report for your AI agent fleet. Includes per-agent cost/revenue breakdown, attribution confidence scores, optimization opportunities, and risk flags. Suitable for board reporting and compliance. Do NOT use for quick per-agent ROI checks — use get_task_roi for individual agents.

Input Schema

Name                 Required  Default  Description
period_days          No        30       Analysis period in days (7-365)
include_methodology  No        true     Include methodology notes and caveats for auditors
agent_ids            No        (none)   Specific agent IDs to include. Omit for full fleet audit.
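A call to this tool might pass arguments like the following. The UUID is a placeholder, not a real agent ID, and the range-check helper is ours, included only to illustrate the schema's constraint:

```typescript
// Hypothetical example arguments for metrx_generate_roi_audit.
const args = {
  period_days: 90,                 // must be an integer in 7-365 (default: 30)
  include_methodology: true,       // default: true
  agent_ids: ['3f2b1c9e-8d4a-4e6f-9a1b-2c3d4e5f6a7b'], // omit for a full-fleet audit
};

// Client-side sanity check mirroring the schema's 7-365 integer constraint.
function isValidPeriod(days: number): boolean {
  return Number.isInteger(days) && days >= 7 && days <= 365;
}
```

Omitting `agent_ids` entirely audits the whole fleet, as the schema description notes.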

Implementation Reference

  • Main tool handler function that executes the ROI audit logic. Fetches dashboard data, processes agent summaries, calculates fleet-wide metrics, and formats the audit report for output.
    async ({ period_days, include_methodology, agent_ids }) => {
      // Fetch dashboard data with optimization info
      const dashResult = await client.get<{
        cost?: { total_cost_cents: number; total_calls: number };
        attribution?: {
          total_revenue_cents: number;
          total_outcomes: number;
          roi_multiplier: number | null;
          avg_confidence: number;
        };
        agents_list?: Array<{
          id: string;
          name: string;
          monthly_cost_cents: number;
          roi_multiplier: number | null;
          status: string;
        }>;
      }>('/dashboard', { days: String(period_days) });
    
      if (dashResult.error) {
        return {
          content: [{ type: 'text', text: `Error generating audit: ${dashResult.error}` }],
          isError: true,
        };
      }
    
      const dash = dashResult.data;
      const agents = dash?.agents_list ?? [];
      const filteredAgents = agent_ids ? agents.filter((a) => agent_ids.includes(a.id)) : agents;
    
      // Build per-agent summaries
      const agentSummaries: AgentROISummary[] = filteredAgents.map((a) => {
        const riskFlags: string[] = [];
        if (a.status === 'error') riskFlags.push('Agent in error state');
        if (a.roi_multiplier !== null && a.roi_multiplier < 1)
          riskFlags.push('ROI below break-even');
        if (a.roi_multiplier === null) riskFlags.push('No attribution data'); // strict null check: a 0x ROI is real data, not missing
    
        return {
          agent_id: a.id,
          agent_name: a.name,
          total_cost_cents: a.monthly_cost_cents ?? 0,
          total_revenue_cents: 0, // per-agent revenue requires separate fetch
          roi_multiplier: a.roi_multiplier,
          attribution_confidence: 0,
          outcome_count: 0,
          optimization_savings_cents: 0,
          risk_flags: riskFlags,
        };
      });
    
      const totalCost = dash?.cost?.total_cost_cents ?? 0;
      const totalRevenue = dash?.attribution?.total_revenue_cents ?? 0;
      const netROI = totalCost > 0 ? totalRevenue / totalCost : null;
    
      const report: ROIAuditReport = {
        generated_at: new Date().toISOString(),
        period_days,
        fleet_summary: {
          total_agents: filteredAgents.length,
          total_cost_cents: totalCost,
          total_revenue_cents: totalRevenue,
          net_roi_multiplier: netROI,
          avg_attribution_confidence: dash?.attribution?.avg_confidence ?? 0,
          total_optimization_savings_cents: 0,
        },
        agents: agentSummaries,
        methodology: include_methodology
          ? 'Revenue attribution uses a dual-confidence model combining cost confidence (data volume, recency) ' +
            'with quality confidence (outcome verification, source reliability). ' +
            'ROI = (Attributed Revenue - Cost) / Cost. ' +
            'Optimization savings are projected based on 30-day usage patterns and model pricing data. ' +
            'All monetary values are in cents.'
          : '',
        caveats: [
          'Attribution confidence reflects data quality, not guaranteed accuracy.',
          'Revenue figures include both confirmed and inferred outcomes (weighted by confidence).',
          'Optimization savings are estimates based on current pricing and may vary.',
          'Industry default values are used when user-configured outcomes are not available.',
        ],
      };
    
      // Format as readable text
      const lines: string[] = [
        `# ROI Audit Report`,
        `Generated: ${new Date(report.generated_at).toLocaleDateString()}`,
        `Period: Last ${period_days} days`,
        '',
        `## Fleet Summary`,
        `- Total Agents: ${report.fleet_summary.total_agents}`,
        `- Total Cost: $${(report.fleet_summary.total_cost_cents / 100).toFixed(2)}`,
        `- Total Revenue: $${(report.fleet_summary.total_revenue_cents / 100).toFixed(2)}`,
        `- Net ROI: ${netROI !== null ? `${netROI.toFixed(2)}x` : 'N/A (no attribution data)'}`,
        `- Avg Attribution Confidence: ${(
          report.fleet_summary.avg_attribution_confidence * 100
        ).toFixed(0)}%`,
        '',
        `## Per-Agent Breakdown`,
      ];
    
      for (const agent of agentSummaries) {
        lines.push(`\n### ${agent.agent_name}`);
        lines.push(`- Cost: $${(agent.total_cost_cents / 100).toFixed(2)}/mo`);
        lines.push(
          `- ROI: ${agent.roi_multiplier !== null ? `${agent.roi_multiplier.toFixed(2)}x` : 'N/A'}`
        );
        if (agent.risk_flags.length > 0) {
          lines.push(`- Risks: ${agent.risk_flags.join(', ')}`);
        }
      }
    
      if (include_methodology && report.methodology) {
        lines.push('', '## Methodology', report.methodology);
      }
    
      if (report.caveats.length > 0) {
        lines.push('', '## Caveats');
        for (const caveat of report.caveats) {
          lines.push(`- ${caveat}`);
        }
      }
    
      return {
        content: [{ type: 'text', text: lines.join('\n') }],
      };
    }
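The fleet summary math above guards against division by zero and formats all monetary values from cents. A standalone sketch of those two calculations (the helper names are ours, not from the source):

```typescript
// Net ROI multiplier: revenue/cost, or null when there is no cost data.
// Mirrors the guard in the handler above; both inputs are in cents.
function computeNetROI(totalCostCents: number, totalRevenueCents: number): number | null {
  return totalCostCents > 0 ? totalRevenueCents / totalCostCents : null;
}

// Formatting used throughout the report: cents to a dollar string.
function centsToDollars(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```

Returning `null` rather than `0` for the no-cost case lets the report render "N/A (no attribution data)" instead of a misleading zero multiplier.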
  • Type definitions for ROI audit data structures including AgentROISummary and ROIAuditReport interfaces that define the input/output shapes.
    interface AgentROISummary {
      agent_id: string;
      agent_name: string;
      total_cost_cents: number;
      total_revenue_cents: number;
      roi_multiplier: number | null;
      attribution_confidence: number;
      outcome_count: number;
      optimization_savings_cents: number;
      risk_flags: string[];
    }
    
    interface ROIAuditReport {
      generated_at: string;
      period_days: number;
      fleet_summary: {
        total_agents: number;
        total_cost_cents: number;
        total_revenue_cents: number;
        net_roi_multiplier: number | null;
        avg_attribution_confidence: number;
        total_optimization_savings_cents: number;
      };
      agents: AgentROISummary[];
      methodology: string;
      caveats: string[];
    }
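The `risk_flags` field of `AgentROISummary` is derived from agent status and ROI. A standalone sketch of that derivation (the function and interface names are ours); note the explicit null check, which avoids misclassifying a legitimate 0x ROI as missing attribution data:

```typescript
interface AgentLike {
  status: string;
  roi_multiplier: number | null;
}

function deriveRiskFlags(a: AgentLike): string[] {
  const flags: string[] = [];
  if (a.status === 'error') flags.push('Agent in error state');
  if (a.roi_multiplier !== null && a.roi_multiplier < 1) flags.push('ROI below break-even');
  if (a.roi_multiplier === null) flags.push('No attribution data');
  return flags;
}
```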
  • Zod input schema validation for the tool parameters: period_days, include_methodology, and agent_ids.
    inputSchema: {
      period_days: z
        .number()
        .int()
        .min(7)
        .max(365)
        .default(30)
        .describe('Analysis period in days (7-365)'),
      include_methodology: z
        .boolean()
        .default(true)
        .describe('Include methodology notes and caveats for auditors'),
      agent_ids: z
        .array(z.string().uuid())
        .optional()
        .describe('Specific agent IDs to include. Omit for full fleet audit.'),
    },
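Zod applies the `.default()` values above at parse time, so the handler never sees `period_days` or `include_methodology` as undefined. A hand-rolled equivalent of that default-filling step (the names here are ours, for illustration only):

```typescript
interface AuditInput {
  period_days?: number;
  include_methodology?: boolean;
  agent_ids?: string[];
}

// Mirrors the schema defaults: period_days=30, include_methodology=true.
// agent_ids has no default; undefined means a full-fleet audit.
function withDefaults(input: AuditInput) {
  return {
    period_days: input.period_days ?? 30,
    include_methodology: input.include_methodology ?? true,
    agent_ids: input.agent_ids,
  };
}
```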
  • Tool registration function that registers 'generate_roi_audit' with the MCP server, including title, description, and configuration.
    export function registerROIAuditTools(server: McpServer, client: MetrxApiClient): void {
      server.registerTool(
        'generate_roi_audit',
        {
          title: 'Generate ROI Audit Report',
          description:
            'Generate a comprehensive ROI audit report for your AI agent fleet. ' +
            'Includes per-agent cost/revenue breakdown, attribution confidence scores, ' +
            'optimization opportunities, and risk flags. Suitable for board reporting and compliance. ' +
            'Do NOT use for quick per-agent ROI checks — use get_task_roi for individual agents.',
          inputSchema: {
            period_days: z
              .number()
              .int()
              .min(7)
              .max(365)
              .default(30)
              .describe('Analysis period in days (7-365)'),
            include_methodology: z
              .boolean()
              .default(true)
              .describe('Include methodology notes and caveats for auditors'),
            agent_ids: z
              .array(z.string().uuid())
              .optional()
              .describe('Specific agent IDs to include. Omit for full fleet audit.'),
          },
          annotations: {
            readOnlyHint: true,
            destructiveHint: false,
            idempotentHint: true,
            openWorldHint: false,
          },
        },
        // Handler body shown in the main tool handler excerpt above.
        async (args) => { /* ... */ }
      );
    }
  • src/index.ts:74-115 (registration)
    Server initialization and tool registration that applies the 'metrx_' prefix to all tools and calls registerROIAuditTools to register the ROI audit tool.
    // ── Rate limiting middleware + metrx_ namespace prefix ──
    // All tools are registered exclusively as metrx_{name}.
    // The metrx_ prefix namespaces our tools to avoid collisions when
    // multiple MCP servers are used together.
    const METRX_PREFIX = 'metrx_';
    const originalRegisterTool = server.registerTool.bind(server);
    (server as any).registerTool = function (
      name: string,
      config: any,
      handler: (...handlerArgs: any[]) => Promise<any>
    ) {
      const wrappedHandler = async (...handlerArgs: any[]) => {
        if (!rateLimiter.isAllowed(name)) {
          return {
            content: [
              {
                type: 'text' as const,
                text: `Rate limit exceeded for tool '${name}'. Maximum 60 requests per minute allowed.`,
              },
            ],
            isError: true,
          };
        }
        return handler(...handlerArgs);
      };
    
      // Register with metrx_ prefix (only — no deprecated aliases)
      const prefixedName = name.startsWith(METRX_PREFIX) ? name : `${METRX_PREFIX}${name}`;
      originalRegisterTool(prefixedName, config, wrappedHandler);
    };
    
    // ── Register all tool domains ──
    registerDashboardTools(server, apiClient);
    registerOptimizationTools(server, apiClient);
    registerBudgetTools(server, apiClient);
    registerAlertTools(server, apiClient);
    registerExperimentTools(server, apiClient);
    registerCostLeakDetectorTools(server, apiClient);
    registerAttributionTools(server, apiClient);
    registerUpgradeJustificationTools(server, apiClient);
    registerAlertConfigTools(server, apiClient);
    registerROIAuditTools(server, apiClient);
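The `rateLimiter.isAllowed` call above is not shown in the excerpt. A minimal sliding-window sketch consistent with the "60 requests per minute" error message (the class internals and the injectable clock are our assumptions, not the server's actual implementation):

```typescript
// Per-tool sliding-window rate limiter: at most `limit` calls in any
// `windowMs` span. The clock is injectable so the behavior is testable.
class RateLimiter {
  private calls = new Map<string, number[]>();

  constructor(
    private limit = 60,
    private windowMs = 60_000,
    private now: () => number = Date.now
  ) {}

  isAllowed(toolName: string): boolean {
    const t = this.now();
    // Keep only timestamps still inside the window.
    const recent = (this.calls.get(toolName) ?? []).filter((ts) => t - ts < this.windowMs);
    if (recent.length >= this.limit) {
      this.calls.set(toolName, recent);
      return false;
    }
    recent.push(t);
    this.calls.set(toolName, recent);
    return true;
  }
}
```

Keying the map by tool name means each tool gets its own 60/minute budget rather than sharing one global counter, which matches the per-tool wording of the error message.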
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating safe, non-destructive, repeatable operations. The description adds context about the report's comprehensiveness and suitability for board reporting/compliance, which is valuable behavioral insight beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key features and explicit usage guidelines. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (generating comprehensive reports) and lack of output schema, the description provides good context about report contents and use cases. However, it doesn't detail the report format (e.g., PDF, JSON) or delivery method, which could be helpful for an agent invoking it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, but it implies the report includes per-agent breakdowns, attribution scores, optimizations, and risk flags, which contextualizes the parameters' purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate', 'audit') and resources ('ROI audit report', 'AI agent fleet'). It distinguishes from sibling tools by contrasting with 'get_task_roi' for individual agent checks, making the scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('comprehensive ROI audit', 'board reporting and compliance') and when not to use it ('Do NOT use for quick per-agent ROI checks'), with a clear alternative named ('use get_task_roi for individual agents').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
