Glama

Metrx MCP Server

by metrxbots

Get Agent ROI

metrx_get_task_roi
Read-only · Idempotent

Calculates ROI for an individual agent by comparing costs from LLM API calls against attributed business-value outcomes, helping identify high-value performers.

Instructions

Calculate return on investment for an agent. Shows total costs (LLM API calls), total outcomes (attributed business value), ROI multiplier, and breakdown by model and outcome type. Useful for identifying which agents generate the most value per dollar spent. Do NOT use for fleet-wide ROI — use generate_roi_audit for that.

Input Schema

Name     | Required | Description                         | Default
agent_id | Yes      | The agent UUID to calculate ROI for | (none)
days     | No       | Number of days to analyze           | 30
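A sketch of the arguments an agent might pass, mirroring the constraints the schema implies. The `GetTaskRoiArgs` type, the `normalizeDays` helper, and the placeholder UUID are illustrative, not part of the server:

```typescript
// Illustrative only: this type and helper mirror the schema's constraints
// (days is an integer in 1-365, defaulting to 30); the UUID is a placeholder.
interface GetTaskRoiArgs {
  agent_id: string; // agent UUID
  days?: number;    // 1-365; defaults to 30
}

function normalizeDays(days?: number): number {
  const d = days ?? 30;
  if (!Number.isInteger(d) || d < 1 || d > 365) {
    throw new RangeError("days must be an integer between 1 and 365");
  }
  return d;
}

const args: GetTaskRoiArgs = { agent_id: "00000000-0000-4000-8000-000000000000" };
console.log(normalizeDays(args.days)); // 30
```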

Implementation Reference

  • The main handler implementation for the get_task_roi tool (registered as metrx_get_task_roi). The async handler fetches costs and outcomes from the API endpoint `/agents/{agent_id}/roi`, processes the data, and returns a formatted Markdown report with a cost breakdown, outcomes, and the ROI multiplier.
    server.registerTool(
      'get_task_roi',
      {
        title: 'Get Agent ROI',
        description:
          'Calculate return on investment for an agent. Shows total costs (LLM API calls), ' +
          'total outcomes (attributed business value), ROI multiplier, and breakdown by model and outcome type. ' +
          'Useful for identifying which agents generate the most value per dollar spent. ' +
          'Do NOT use for fleet-wide ROI — use generate_roi_audit for that.',
        inputSchema: {
          agent_id: z.string().uuid().describe('The agent UUID to calculate ROI for'),
          days: z
            .number()
            .int()
            .min(1)
            .max(365)
            .default(30)
            .describe('Number of days to analyze (default: 30)'),
        },
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false,
        },
      },
      async ({ agent_id, days }) => {
        const periodDays = days ?? 30;
        const endDate = new Date().toISOString();
        const startDate = new Date(Date.now() - periodDays * 24 * 60 * 60 * 1000).toISOString();
    
        const result = await client.get<{
          costs: {
            total_microcents: number;
            by_model: Record<string, number>;
            avg_per_request: number;
          };
          outcomes: {
            count: number;
            total_value_cents: number;
            by_type: Record<string, number>;
          };
          roi_multiplier: number;
          weighted_avg_confidence: number;
        }>(`/agents/${agent_id}/roi`, {
          start_date: startDate,
          end_date: endDate,
        });
    
        if (result.error) {
          return {
            content: [{ type: 'text', text: `Error calculating ROI: ${result.error}` }],
            isError: true,
          };
        }
    
        const data = result.data!;
        const totalCostDollars = (data.costs.total_microcents / 1_000_000).toFixed(2);
        const totalOutcomeDollars = (data.outcomes.total_value_cents / 100).toFixed(2);
        const avgCostDollars = (data.costs.avg_per_request / 100).toFixed(4);
    
        const lines: string[] = [
          `## Agent ROI Analysis (Last ${periodDays} days)`,
          '',
          `### Costs: $${totalCostDollars}`,
          `- Average per request: $${avgCostDollars}`,
        ];
    
        // Cost by model breakdown
        const modelEntries = Object.entries(data.costs.by_model);
        if (modelEntries.length > 0) {
          lines.push('- By model:');
          for (const [model, microcents] of modelEntries) {
            lines.push(`  - ${model}: $${(microcents / 1_000_000).toFixed(2)}`);
          }
        }
    
        lines.push('', `### Outcomes: $${totalOutcomeDollars} (${data.outcomes.count} total)`);
    
        // Outcome by type breakdown
        const typeEntries = Object.entries(data.outcomes.by_type);
        if (typeEntries.length > 0) {
          lines.push('- By type:');
          for (const [type, cents] of typeEntries) {
            lines.push(`  - ${type}: $${(cents / 100).toFixed(2)}`);
          }
        }
    
        lines.push('', '### ROI');
        lines.push(`- **ROI Multiplier**: ${data.roi_multiplier.toFixed(2)}x`);
        lines.push(
          `- **Avg Attribution Confidence**: ${(data.weighted_avg_confidence * 100).toFixed(0)}%`
        );
    
        if (data.roi_multiplier >= 1) {
          lines.push(
            '',
            `> ✅ This agent generates $${data.roi_multiplier.toFixed(
              2
            )} in value for every $1 spent.`
          );
        } else if (data.roi_multiplier > 0) {
          lines.push(
            '',
            `> ⚠️ This agent returns ${(data.roi_multiplier * 100).toFixed(
              0
            )}¢ per $1 spent. Consider optimizing costs or improving outcome attribution.`
          );
        } else {
          lines.push('', `> 📊 No attributed outcomes yet. Connect outcomes to start measuring ROI.`);
        }
    
        return {
          content: [{ type: 'text', text: lines.join('\n') }],
        };
      }
    );
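Note the two cost units the handler above juggles: per-model costs arrive in microcents, while outcome values arrive in cents. A minimal sketch of the conversions it performs:

```typescript
// Conversions used by the handler: microcents -> dollars, cents -> dollars.
const microcentsToDollars = (mc: number): string => (mc / 1_000_000).toFixed(2);
const centsToDollars = (c: number): string => (c / 100).toFixed(2);

console.log(microcentsToDollars(12_345_678)); // "12.35"
console.log(centsToDollars(2599));            // "25.99"
```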
  • The registerAttributionTools function registers the get_task_roi tool along with the other attribution tools. The tool is registered at line 112 under the name 'get_task_roi', which is prefixed with 'metrx_' during main server initialization.
    export function registerAttributionTools(server: McpServer, client: MetrxApiClient): void {
      // ── attribute_task ──
      server.registerTool(
        'attribute_task',
        {
          title: 'Attribute Task to Outcome',
          description:
            'Link an agent task/event to a business outcome for ROI tracking. ' +
            'This creates a mapping between agent actions and measurable business results. ' +
            'Do NOT use for reading attribution data — use get_attribution_report or get_task_roi.',
          inputSchema: {
            agent_id: z.string().uuid().describe('The agent UUID to attribute'),
            event_id: z.string().optional().describe('Optional: specific event/task ID to attribute'),
            outcome_type: z
              .enum(['revenue', 'cost_saving', 'efficiency', 'quality'])
              .describe('Type of outcome'),
            outcome_source: z
              .enum(['stripe', 'calendly', 'hubspot', 'zendesk', 'webhook', 'manual'])
              .describe('Source of the outcome data'),
            value_cents: z.number().int().optional().describe('Outcome value in cents'),
            description: z.string().optional().describe('Optional description of the outcome'),
          },
          annotations: {
            readOnlyHint: false,
            destructiveHint: false,
            idempotentHint: false,
            openWorldHint: false,
          },
        },
        async ({ agent_id, event_id, outcome_type, outcome_source, value_cents, description }) => {
          const body: Record<string, unknown> = {
            agent_id,
            outcome_type,
            outcome_source,
          };
    
          if (event_id) body.event_id = event_id;
          if (value_cents !== undefined) body.value_cents = value_cents;
          if (description) body.description = description;
    
          const result = await client.post<AttributionResponse>('/outcomes', body);
    
          if (result.error) {
            return {
              content: [{ type: 'text', text: `Error attributing task: ${result.error}` }],
              isError: true,
            };
          }
    
          const outcome = result.data!;
          const lines: string[] = ['## Task Attributed Successfully', ''];
          lines.push(`- **Outcome Type**: ${outcome.outcome_type}`);
          lines.push(`- **Source**: ${outcome.outcome_source}`);
          if (outcome.value_cents) {
            const formatted = (outcome.value_cents / 100).toFixed(2);
            lines.push(`- **Value**: $${formatted}`);
          }
          if (outcome.description) {
            lines.push(`- **Description**: ${outcome.description}`);
          }
          lines.push(`- **Created**: ${new Date(outcome.created_at).toLocaleString()}`);
    
          return {
            content: [{ type: 'text', text: lines.join('\n') }],
          };
        }
      );
    
      // ── get_task_roi ── (identical to the registration shown in the first
      // Implementation Reference bullet above)
    }
  • src/index.ts:74-103 (registration)
    Server initialization code that adds the 'metrx_' prefix to all tool names. The wrapper intercepts server.registerTool calls, automatically prefixes tool names (e.g., 'get_task_roi' becomes 'metrx_get_task_roi'), and applies rate-limiting middleware.
    // ── Rate limiting middleware + metrx_ namespace prefix ──
    // All tools are registered exclusively as metrx_{name}.
    // The metrx_ prefix namespaces our tools to avoid collisions when
    // multiple MCP servers are used together.
    const METRX_PREFIX = 'metrx_';
    const originalRegisterTool = server.registerTool.bind(server);
    (server as any).registerTool = function (
      name: string,
      config: any,
      handler: (...handlerArgs: any[]) => Promise<any>
    ) {
      const wrappedHandler = async (...handlerArgs: any[]) => {
        if (!rateLimiter.isAllowed(name)) {
          return {
            content: [
              {
                type: 'text' as const,
                text: `Rate limit exceeded for tool '${name}'. Maximum 60 requests per minute allowed.`,
              },
            ],
            isError: true,
          };
        }
        return handler(...handlerArgs);
      };
    
      // Register with metrx_ prefix (only — no deprecated aliases)
      const prefixedName = name.startsWith(METRX_PREFIX) ? name : `${METRX_PREFIX}${name}`;
      originalRegisterTool(prefixedName, config, wrappedHandler);
    };
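The prefixing rule itself is small enough to isolate. A standalone sketch (the `prefixed` helper is illustrative; the wrapper above inlines this logic):

```typescript
// Names already carrying the prefix are left untouched; everything else
// gets "metrx_" prepended, so double-prefixing cannot occur.
const METRX_PREFIX = "metrx_";
const prefixed = (name: string): string =>
  name.startsWith(METRX_PREFIX) ? name : `${METRX_PREFIX}${name}`;

console.log(prefixed("get_task_roi"));       // "metrx_get_task_roi"
console.log(prefixed("metrx_get_task_roi")); // "metrx_get_task_roi"
```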
  • The MetrxApiClient.get method used by the handler to make authenticated GET requests to the Metrx API. Includes URL parameter construction, authorization headers, retry logic with exponential backoff, and error parsing.
    async get<T>(
      path: string,
      params?: Record<string, string | number | boolean>
    ): Promise<ApiResponse<T>> {
      const url = new URL(path, this.baseUrl);
      if (params) {
        for (const [key, value] of Object.entries(params)) {
          if (value !== undefined && value !== null) {
            url.searchParams.set(key, String(value));
          }
        }
      }
    
      try {
        const response = await this.fetchWithRetry(url.toString(), {
          method: 'GET',
          headers: {
            Authorization: `Bearer ${this.apiKey}`,
            'Content-Type': 'application/json',
            'X-MCP-Client': 'metrx-mcp-server/0.1.0',
          },
        });
    
        if (!response.ok) {
          const errorBody = await response.text().catch(() => '');
          const friendlyMessage = this.parseApiError(response.status, errorBody);
          return {
            error: friendlyMessage,
          };
        }
    
        const data = (await response.json()) as T | ApiResponse<T>;
    
        // API may return { data: T } or T directly
        if (data && typeof data === 'object' && 'data' in data) {
          return data as ApiResponse<T>;
        }
    
        return { data: data as T };
      } catch (err) {
        return {
          error: `Network error: ${err instanceof Error ? err.message : String(err)}. See ${API_DOCS_URL} for help`,
        };
      }
    }
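`fetchWithRetry` is referenced above but not shown. A minimal sketch of what such a helper might look like; the attempt count, backoff base, and retry policy here are assumptions, not the server's actual values:

```typescript
// Hypothetical retry helper with exponential backoff (250ms, 500ms, 1000ms, ...).
// Written generically over any async operation so it can run without a network.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 250
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !shouldRetry(err)) throw err;
      // Double the delay after each failed attempt before retrying.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

In the real client, `shouldRetry` would presumably return true for network errors and retryable HTTP statuses such as 429 or 5xx.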
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare this as read-only, non-destructive, and idempotent, so the agent knows it's a safe query operation. The description adds useful context about what the calculation includes (costs from LLM API calls, attributed business value) and its purpose (identifying which agents generate most value per dollar), which goes beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first explains what the tool calculates, second states its usefulness, third provides explicit usage guidance. Every sentence adds value with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only query tool with good annotations and complete schema coverage, the description provides sufficient context about what the tool does and when to use it. The main gap is the lack of an output schema, so the agent doesn't know the exact return format, but the description gives a good overview of what information will be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters (agent_id as UUID, days with range and default). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates return on investment for an agent, specifying what it returns (total costs, total outcomes, ROI multiplier, breakdown by model and outcome type). It distinguishes from the sibling tool generate_roi_audit by explicitly stating this is for individual agents, not fleet-wide ROI.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for agent-specific ROI) and when not to use it (for fleet-wide ROI, directing to generate_roi_audit instead). This helps the agent choose correctly between sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

