
Metrx MCP Server

by metrxbots

Get Attribution Report

metrx_get_attribution_report
Read-only · Idempotent

Analyze which agent actions drive business outcomes by generating attribution reports with outcome counts, values, confidence scores, and top contributors.

Instructions

Get attribution report showing which agent actions led to business outcomes. Shows outcome counts, total values, confidence scores, and top contributing agents. Do NOT use for board-level reporting — use generate_roi_audit for formal audit reports.
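A call to this tool arrives as a standard MCP `tools/call` request. A minimal sketch of such a request; the argument values below are illustrative, not defaults:

```typescript
// Illustrative tools/call request for metrx_get_attribution_report.
// Omitted arguments fall back to the schema defaults (days: 30, model: 'direct').
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "metrx_get_attribution_report",
    arguments: { days: 7, model: "last_touch" },
  },
};

console.log(JSON.stringify(request.params.arguments)); // {"days":7,"model":"last_touch"}
```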

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| agent_id | No | Filter to a specific agent (omit for fleet-wide) | (none) |
| days | No | Number of days to include (1-365) | 30 |
| model | No | Attribution model to use (`direct`, `last_touch`, `first_touch`) | direct |
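The handler (shown under Implementation Reference below) resolves omitted parameters to these defaults before querying the API. A minimal sketch of that resolution logic:

```typescript
// Mirrors the handler's defaulting: days falls back to 30, model to
// 'direct', and agent_id is included only when provided.
type Args = {
  agent_id?: string;
  days?: number;
  model?: "direct" | "last_touch" | "first_touch";
};

function resolveParams({ agent_id, days, model }: Args): Record<string, string | number> {
  const params: Record<string, string | number> = {
    days: days ?? 30,
    model: model ?? "direct",
  };
  if (agent_id) params.agent_id = agent_id;
  return params;
}

console.log(resolveParams({})); // { days: 30, model: 'direct' }
```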

Implementation Reference

  • Tool registration for get_attribution_report with title, description, input schema, annotations, and handler function. The tool is registered without prefix here but gets 'metrx_' added in index.ts.
    // ── get_attribution_report ──
    server.registerTool(
      'get_attribution_report',
      {
        title: 'Get Attribution Report',
        description:
          'Get attribution report showing which agent actions led to business outcomes. ' +
          'Shows outcome counts, total values, confidence scores, and top contributing agents. ' +
          'Do NOT use for board-level reporting — use generate_roi_audit for formal audit reports.',
        inputSchema: {
          agent_id: z
            .string()
            .uuid()
            .optional()
            .describe('Optional: filter to specific agent (omit for fleet-wide)'),
          days: z
            .number()
            .int()
            .min(1)
            .max(365)
            .default(30)
            .describe('Number of days to include (default: 30)'),
          model: z
            .enum(['direct', 'last_touch', 'first_touch'])
            .default('direct')
            .describe('Attribution model to use (default: direct)'),
        },
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false,
        },
      },
      async ({ agent_id, days, model }) => {
        const params: Record<string, string | number> = {
          days: days ?? 30,
          model: model ?? 'direct',
        };
        if (agent_id) params.agent_id = agent_id;
    
        const result = await client.get<AttributionReportResponse>('/outcomes', params);
    
        if (result.error) {
          return {
            content: [{ type: 'text', text: `Error fetching attribution report: ${result.error}` }],
            isError: true,
          };
        }
    
        const data = result.data!;
        const lines: string[] = [
          '## Attribution Report',
          `### Period: Last ${data.period_days} days | Model: ${data.model}`,
          '',
        ];
    
        if (data.agent_id) {
          lines.push(`**Agent**: ${data.agent_id}`, '');
        } else {
          lines.push('**Scope**: Fleet-wide (all agents)', '');
        }
    
        lines.push(`**Total Outcomes**: ${data.total_outcomes}`, '');
        const totalRevenue = (data.total_value_cents / 100).toFixed(2);
        lines.push(`**Total Value**: $${totalRevenue}`, '');
    
        if (data.outcomes.length === 0) {
          lines.push('No outcomes recorded in this period.');
          return {
            content: [{ type: 'text', text: lines.join('\n') }],
          };
        }
    
        lines.push('', '### Outcome Breakdown');
        for (const outcome of data.outcomes) {
          lines.push('', `#### ${outcome.outcome_type}`);
          lines.push(`- **Count**: ${outcome.count}`);
          const value = (outcome.value_cents / 100).toFixed(2);
          lines.push(`- **Total Value**: $${value}`);
          lines.push(`- **Confidence**: ${(outcome.confidence * 100).toFixed(0)}%`);
    
          if (outcome.top_attributions && outcome.top_attributions.length > 0) {
            lines.push('- **Top Agents**:');
            for (const attr of outcome.top_attributions) {
              const attrValue = (attr.contribution_value_cents / 100).toFixed(2);
              const attrConf = (attr.confidence * 100).toFixed(0);
              lines.push(`  - ${attr.agent_name} [${attr.agent_id}]: $${attrValue} (${attrConf}%)`);
            }
          }
        }
    
        return {
          content: [{ type: 'text', text: lines.join('\n') }],
        };
      }
    );
  • Type definition for AttributionReportResponse which defines the structure of the API response including agent_id, period_days, model, total_outcomes, total_value_cents, and outcomes array with breakdown by type.
    interface AttributionReportResponse {
      agent_id?: string;
      period_days: number;
      model: string;
      total_outcomes: number;
      total_value_cents: number;
      outcomes: Array<{
        outcome_type: string;
        count: number;
        value_cents: number;
        confidence: number;
        top_attributions: Array<{
          agent_id: string;
          agent_name: string;
          contribution_value_cents: number;
          confidence: number;
        }>;
      }>;
    }
  • src/index.ts:74-103 (registration)
    The wrapper function that adds the 'metrx_' prefix to all registered tools. This transforms 'get_attribution_report' into 'metrx_get_attribution_report' and also wraps handlers with rate limiting.
    // ── Rate limiting middleware + metrx_ namespace prefix ──
    // All tools are registered exclusively as metrx_{name}.
    // The metrx_ prefix namespaces our tools to avoid collisions when
    // multiple MCP servers are used together.
    const METRX_PREFIX = 'metrx_';
    const originalRegisterTool = server.registerTool.bind(server);
    (server as any).registerTool = function (
      name: string,
      config: any,
      handler: (...handlerArgs: any[]) => Promise<any>
    ) {
      const wrappedHandler = async (...handlerArgs: any[]) => {
        if (!rateLimiter.isAllowed(name)) {
          return {
            content: [
              {
                type: 'text' as const,
                text: `Rate limit exceeded for tool '${name}'. Maximum 60 requests per minute allowed.`,
              },
            ],
            isError: true,
          };
        }
        return handler(...handlerArgs);
      };
    
      // Register with metrx_ prefix (only — no deprecated aliases)
      const prefixedName = name.startsWith(METRX_PREFIX) ? name : `${METRX_PREFIX}${name}`;
      originalRegisterTool(prefixedName, config, wrappedHandler);
    };
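The `rateLimiter` referenced in the wrapper above is not included in this snippet. A minimal per-tool sliding-window limiter consistent with the "60 requests per minute" error message might look like the following; the class name and constructor parameters are assumptions for illustration:

```typescript
// Hypothetical sliding-window rate limiter: allows up to `limit` calls
// per tool within a rolling window of `windowMs` milliseconds.
class SlidingWindowRateLimiter {
  private calls = new Map<string, number[]>();

  constructor(private limit = 60, private windowMs = 60_000) {}

  isAllowed(tool: string, now: number = Date.now()): boolean {
    // Keep only timestamps still inside the rolling window.
    const recent = (this.calls.get(tool) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.calls.set(tool, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.calls.set(tool, recent);
    return true;
  }
}

// Tiny window for demonstration: 2 calls per second.
const limiter = new SlidingWindowRateLimiter(2, 1000);
console.log(limiter.isAllowed("get_attribution_report", 0));    // true
console.log(limiter.isAllowed("get_attribution_report", 10));   // true
console.log(limiter.isAllowed("get_attribution_report", 20));   // false
console.log(limiter.isAllowed("get_attribution_report", 1500)); // true (window expired)
```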
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it specifies the report's content (outcome counts, total values, confidence scores, top agents) and usage constraints (not for formal audits). Annotations already cover safety (readOnlyHint=true, destructiveHint=false) and idempotency, so the bar is lower, but the description enhances understanding of what the tool returns and its limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific details and usage guidelines in just two sentences. Every sentence adds value: the first explains what the tool does and shows, the second provides critical usage differentiation. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (attribution reporting), rich annotations (readOnlyHint, idempotentHint), and 100% schema coverage, the description is largely complete. It adds context on report content and usage constraints. However, without an output schema, it could benefit from more detail on return format (e.g., structure of results), though the annotations provide safety assurance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (agent_id, days, model). The description does not add any parameter-specific details beyond what the schema provides, such as explaining the attribution models or agent filtering implications. This meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get attribution report') and resources ('agent actions led to business outcomes'), and distinguishes it from siblings by explicitly naming an alternative tool (generate_roi_audit). It details what the report shows: outcome counts, total values, confidence scores, and top contributing agents.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'Do NOT use for board-level reporting — use generate_roi_audit for formal audit reports.' This clearly defines the context and exclusions, helping the agent choose appropriately among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
