Metrx MCP Server

by metrxbots

Attribute Task to Outcome

metrx_attribute_task

Link agent tasks to business outcomes for ROI tracking. Maps agent actions to measurable results like revenue, cost savings, efficiency, or quality improvements.

Instructions

Link an agent task/event to a business outcome for ROI tracking. This creates a mapping between agent actions and measurable business results. Do NOT use for reading attribution data — use get_attribution_report or get_task_roi.

Input Schema

Name            Required  Description
agent_id        Yes       The agent UUID to attribute
event_id        No        Optional: specific event/task ID to attribute
outcome_type    Yes       Type of outcome
outcome_source  Yes       Source of the outcome data
value_cents     No        Outcome value in cents
description     No        Optional description of the outcome

(No parameter defines a default value.)
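
For concreteness, a conforming set of arguments might look like the sketch below. Every value is hypothetical; agent_id must be an existing agent's UUID, and outcome_type and outcome_source must come from the enums shown in the implementation below.

    // Hypothetical arguments for a metrx_attribute_task call.
    // All values are illustrative; agent_id must reference a real agent.
    const exampleArgs = {
      agent_id: '3f2a8c1d-5b4e-4f6a-9c7d-1e2b3a4c5d6e', // existing agent UUID
      event_id: 'evt_abc123',                           // optional: a specific task/event
      outcome_type: 'cost_saving',                      // revenue | cost_saving | efficiency | quality
      outcome_source: 'manual',                         // stripe | calendly | hubspot | zendesk | webhook | manual
      value_cents: 12500,                               // $125.00
      description: 'Deflected a support ticket that would otherwise need a human',
    };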

Implementation Reference

  • Full registration of the 'attribute_task' tool (exposed as 'metrx_attribute_task' once the namespace prefix is applied). It includes the tool name, description, input schema validation (agent_id, event_id, outcome_type, outcome_source, value_cents, description), annotations, and the handler function.
    // ── attribute_task ──
    server.registerTool(
      'attribute_task',
      {
        title: 'Attribute Task to Outcome',
        description:
          'Link an agent task/event to a business outcome for ROI tracking. ' +
          'This creates a mapping between agent actions and measurable business results. ' +
          'Do NOT use for reading attribution data — use get_attribution_report or get_task_roi.',
        inputSchema: {
          agent_id: z.string().uuid().describe('The agent UUID to attribute'),
          event_id: z.string().optional().describe('Optional: specific event/task ID to attribute'),
          outcome_type: z
            .enum(['revenue', 'cost_saving', 'efficiency', 'quality'])
            .describe('Type of outcome'),
          outcome_source: z
            .enum(['stripe', 'calendly', 'hubspot', 'zendesk', 'webhook', 'manual'])
            .describe('Source of the outcome data'),
          value_cents: z.number().int().optional().describe('Outcome value in cents'),
          description: z.string().optional().describe('Optional description of the outcome'),
        },
        annotations: {
          readOnlyHint: false,
          destructiveHint: false,
          idempotentHint: false,
          openWorldHint: false,
        },
      },
      async ({ agent_id, event_id, outcome_type, outcome_source, value_cents, description }) => {
        const body: Record<string, unknown> = {
          agent_id,
          outcome_type,
          outcome_source,
        };
    
        if (event_id) body.event_id = event_id;
        if (value_cents !== undefined) body.value_cents = value_cents;
        if (description) body.description = description;
    
        const result = await client.post<AttributionResponse>('/outcomes', body);
    
        if (result.error) {
          return {
            content: [{ type: 'text', text: `Error attributing task: ${result.error}` }],
            isError: true,
          };
        }
    
        const outcome = result.data!;
        const lines: string[] = ['## Task Attributed Successfully', ''];
        lines.push(`- **Outcome Type**: ${outcome.outcome_type}`);
        lines.push(`- **Source**: ${outcome.outcome_source}`);
        if (outcome.value_cents) {
          const formatted = (outcome.value_cents / 100).toFixed(2);
          lines.push(`- **Value**: $${formatted}`);
        }
        if (outcome.description) {
          lines.push(`- **Description**: ${outcome.description}`);
        }
        lines.push(`- **Created**: ${new Date(outcome.created_at).toLocaleString()}`);
    
        return {
          content: [{ type: 'text', text: lines.join('\n') }],
        };
      }
    );
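  • Sketch of the API client the handler relies on (an assumption; the client itself is not shown on this page). The handler expects client.post to resolve to a { data?, error? } result rather than throw, so a minimal implementation consistent with that contract might look like the following, with the base URL and auth header as placeholders:
    // Minimal sketch only: the real client, its base URL, and its auth
    // scheme are not shown on this page and are assumed here.
    interface ApiResult<T> {
      data?: T;
      error?: string;
    }
    
    class ApiClient {
      constructor(
        private baseUrl: string,
        private apiKey: string
      ) {}
    
      async post<T>(path: string, body: Record<string, unknown>): Promise<ApiResult<T>> {
        try {
          const res = await fetch(`${this.baseUrl}${path}`, {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${this.apiKey}`,
            },
            body: JSON.stringify(body),
          });
          // Map non-2xx responses into the { error } shape the handler checks.
          if (!res.ok) {
            return { error: `HTTP ${res.status}: ${await res.text()}` };
          }
          return { data: (await res.json()) as T };
        } catch (err) {
          return { error: err instanceof Error ? err.message : String(err) };
        }
      }
    }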
  • src/index.ts:74-103 (registration)
    Namespace-prefix wrapper that automatically adds the 'metrx_' prefix to every registered tool; when 'attribute_task' is registered, it becomes 'metrx_attribute_task'. The wrapper also applies rate-limiting middleware to every tool call.
    // ── Rate limiting middleware + metrx_ namespace prefix ──
    // All tools are registered exclusively as metrx_{name}.
    // The metrx_ prefix namespaces our tools to avoid collisions when
    // multiple MCP servers are used together.
    const METRX_PREFIX = 'metrx_';
    const originalRegisterTool = server.registerTool.bind(server);
    (server as any).registerTool = function (
      name: string,
      config: any,
      handler: (...handlerArgs: any[]) => Promise<any>
    ) {
      const wrappedHandler = async (...handlerArgs: any[]) => {
        if (!rateLimiter.isAllowed(name)) {
          return {
            content: [
              {
                type: 'text' as const,
                text: `Rate limit exceeded for tool '${name}'. Maximum 60 requests per minute allowed.`,
              },
            ],
            isError: true,
          };
        }
        return handler(...handlerArgs);
      };
    
      // Register with metrx_ prefix (only — no deprecated aliases)
      const prefixedName = name.startsWith(METRX_PREFIX) ? name : `${METRX_PREFIX}${name}`;
      originalRegisterTool(prefixedName, config, wrappedHandler);
    };
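  • Sketch of the rate limiter the wrapper calls (an assumption; the implementation is not shown on this page). The wrapper only needs rateLimiter.isAllowed(name), and its error message implies a limit of 60 requests per minute per tool, so a sliding-window counter like the following would satisfy that contract:
    // Illustrative sliding-window limiter; the real rateLimiter is not
    // shown here. The 60 requests/minute limit is taken from the
    // wrapper's error message.
    class RateLimiter {
      private calls = new Map<string, number[]>();
    
      constructor(
        private limit = 60,
        private windowMs = 60_000
      ) {}
    
      isAllowed(toolName: string): boolean {
        const now = Date.now();
        // Keep only calls that fall inside the current window.
        const recent = (this.calls.get(toolName) ?? []).filter(
          (t) => now - t < this.windowMs
        );
        const allowed = recent.length < this.limit;
        if (allowed) recent.push(now);
        this.calls.set(toolName, recent);
        return allowed;
      }
    }
    
    const rateLimiter = new RateLimiter();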
  • Type definition for AttributionResponse interface that defines the structure of the API response: id, agent_id, event_id (optional), outcome_type, outcome_source, value_cents (optional), description (optional), and created_at timestamp.
    interface AttributionResponse {
      id: string;
      agent_id: string;
      event_id?: string;
      outcome_type: string;
      outcome_source: string;
      value_cents?: number;
      description?: string;
      created_at: string;
    }
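  • Example value conforming to AttributionResponse (every field value here is hypothetical), showing the shape the handler formats into its markdown summary:
    // Hypothetical response payload; all values are illustrative.
    const exampleOutcome: AttributionResponse = {
      id: 'a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d',
      agent_id: '3f2a8c1d-5b4e-4f6a-9c7d-1e2b3a4c5d6e',
      event_id: 'evt_abc123',
      outcome_type: 'cost_saving',
      outcome_source: 'manual',
      value_cents: 12500, // the handler renders this as "$125.00"
      description: 'Deflected a support ticket that would otherwise need a human',
      created_at: '2025-01-15T10:30:00.000Z',
    };
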
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide basic hints (readOnly=false, destructive=false, etc.), but the description adds valuable context about the tool's purpose ('creates a mapping between agent actions and measurable business results') and clarifies that it creates data rather than reads it. However, it does not disclose behavioral details beyond the annotations, such as the 60-requests-per-minute rate limit enforced by the server's middleware, authentication requirements, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and action, the second provides critical usage guidance. Every sentence earns its place with no wasted words, and the most important information (what it does and when not to use it) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation operation with six parameters) and the lack of an output schema, the description provides good contextual completeness by clarifying the tool's purpose and usage boundaries. It could still mention what happens after the mapping is created, or any limitations, although the annotations do cover basic behavioral hints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what the schema provides, such as explaining relationships between parameters or usage nuances. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Link an agent task/event to a business outcome') and resource ('for ROI tracking'), with explicit differentiation from sibling tools ('Do NOT use for reading attribution data — use get_attribution_report or get_task_roi'). This provides a precise verb+resource combination that distinguishes it from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Link an agent task/event to a business outcome for ROI tracking') and when not to use it ('Do NOT use for reading attribution data'), with named alternatives ('get_attribution_report or get_task_roi'). This gives clear context for selection versus sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
