relay_run_get

Retrieve detailed execution data for a specific workflow run, including step outputs and trace information, to monitor and analyze AI orchestration processes.

Instructions

Get full details of a specific run including all step outputs and trace URL.

Input Schema

Name  | Required | Description            | Default
runId | Yes      | The run ID to retrieve | (none)
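For illustration, the arguments object an agent sends is minimal: one required string. The ID below is hypothetical; real IDs come from earlier relay_run, relay_workflow_run, or relay_runs_list calls:

```typescript
// Hypothetical relay_run_get arguments — the ID value is made up for illustration.
const args = { runId: "run_01HEXAMPLE" };
```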

Implementation Reference

  • Core implementation of the 'relay_run_get' tool handler. Fetches run data by ID, formats timestamps and trace URL, handles missing runs.
    export async function relayRunGet(
      input: RelayRunGetInput
    ): Promise<RelayRunGetResponse> {
      const run = getRunById(input.runId);
      const config = getConfig();
    
      if (!run) {
        return {
          found: false,
          error: `Run with ID "${input.runId}" not found. Note: Run history is stored in memory and clears on server restart.`,
        };
      }
    
      return {
        found: true,
        run: {
          runId: run.runId,
          type: run.type,
          name: run.type === 'workflow' ? run.workflowName : undefined,
          model: run.type === 'single' ? run.model : undefined,
          success: run.success,
          startTime: run.startTime.toISOString(),
          endTime: run.endTime.toISOString(),
          durationMs: run.durationMs,
          usage: run.usage,
          input: run.input,
          output: run.output,
          steps: run.steps,
          error: run.error,
          traceUrl: `${config.traceUrlBase}/${run.runId}`,
          contextReduction: run.contextReduction,
        },
      };
    }
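The found / not-found branching above can be exercised in isolation. The in-memory store below is a stand-in for the server's run history (which, as the error message notes, clears on restart); the names and run shape are simplified assumptions, not the real module:

```typescript
// Stand-in run record and store — simplified assumptions, not the real types.
interface RunRecord {
  runId: string;
  success: boolean;
}

const runStore = new Map<string, RunRecord>([
  ["run_1", { runId: "run_1", success: true }],
]);

// Mirrors the handler's shape: found=false with an error, or found=true with the run.
function relayRunGetSketch(runId: string) {
  const run = runStore.get(runId);
  if (!run) {
    return {
      found: false as const,
      error: `Run with ID "${runId}" not found. Note: Run history is stored in memory and clears on server restart.`,
    };
  }
  return { found: true as const, run };
}

const hit = relayRunGetSketch("run_1");
const miss = relayRunGetSketch("run_404");
```

Note that a missing run is reported through the `found` flag and an `error` string rather than a thrown exception, so callers must check `found` before reading `run`.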
  • Input schema (Zod), input type, and response interface for the tool.
    import { z } from 'zod';

    export const relayRunGetSchema = z.object({
      runId: z.string().describe('The run ID to retrieve'),
    });
    
    export type RelayRunGetInput = z.infer<typeof relayRunGetSchema>;
    
    export interface RelayRunGetResponse {
      found: boolean;
      run?: {
        runId: string;
        type: 'single' | 'workflow';
        name?: string;
        model?: string;
        success: boolean;
        startTime: string;
        endTime: string;
        durationMs: number;
        usage: {
          promptTokens: number;
          completionTokens: number;
          totalTokens: number;
          estimatedProviderCostUsd: number;
        };
        input?: any;
        output?: any;
        steps?: Record<string, any>;
        error?: string;
        traceUrl: string;
        contextReduction?: string;
      };
      error?: string;
    }
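Because `run` is optional on the response, callers should narrow on `found` before dereferencing it. A minimal sketch, using a trimmed copy of the interface:

```typescript
// Trimmed copy of RelayRunGetResponse — only the fields this sketch touches.
interface Resp {
  found: boolean;
  run?: { runId: string; traceUrl: string };
  error?: string;
}

// Summarize a response, falling back to the error string when the run is absent.
function describeRun(resp: Resp): string {
  if (resp.found && resp.run) {
    return `run ${resp.run.runId} -> trace at ${resp.run.traceUrl}`;
  }
  return resp.error ?? "run not found";
}

const ok = describeRun({
  found: true,
  run: { runId: "run_1", traceUrl: "https://example.invalid/trace/run_1" },
});
const bad = describeRun({ found: false, error: "no such run" });
```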
  • MCP tool definition including name, description, and JSON schema for input validation.
    export const relayRunGetDefinition = {
      name: 'relay_run_get',
      description: 'Get full details of a specific run including all step outputs and trace URL.',
      inputSchema: {
        type: 'object' as const,
        properties: {
          runId: {
            type: 'string',
            description: 'The run ID to retrieve',
          },
        },
        required: ['runId'],
      },
    };
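The JSON schema above encodes a single rule: `runId` must be present and must be a string. A hand-rolled equivalent of that check (an illustration only; the server actually validates with the Zod schema shown earlier):

```typescript
// Illustrative restatement of the inputSchema's `required: ['runId']` rule.
function validateRunGetArgs(args: Record<string, unknown>): string | null {
  if (typeof args.runId !== "string" || args.runId.length === 0) {
    return "runId is required and must be a non-empty string";
  }
  return null; // valid
}
```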
  • src/server.ts:59-67 (registration)
    Registers the tool by including its definition in the TOOLS array returned by listTools MCP handler.
    const TOOLS = [
      relayModelsListDefinition,
      relayRunDefinition,
      relayWorkflowRunDefinition,
      relayWorkflowValidateDefinition,
      relaySkillsListDefinition,
      relayRunsListDefinition,
      relayRunGetDefinition,
    ];
  • Server-side dispatch for the tool call: parses input with schema and invokes the handler function.
    case 'relay_run_get': {
      const parsed = relayRunGetSchema.parse(args);
      result = await relayRunGet(parsed);
      break;
    }

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool retrieves 'full details' including 'step outputs and trace URL', which adds some context about what information is returned. However, it doesn't describe other behavioral traits, such as whether this is a read-only operation (implied by 'Get' but not stated), error handling, rate limits, or authentication needs. For a tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get full details of a specific run') and adds specific inclusions ('including all step outputs and trace URL'). There is no wasted text, and every word earns its place by clarifying the tool's scope and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieving detailed run data) and lack of annotations and output schema, the description is partially complete. It specifies what details are included (step outputs, trace URL), which helps, but doesn't cover other aspects like return format, error cases, or how it differs from siblings. Without an output schema, more detail on the response would be beneficial, but the description provides a basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'runId' documented as 'The run ID to retrieve'. The description doesn't add any meaning beyond this, as it doesn't explain where to obtain the run ID or its format. With high schema coverage, the baseline is 3, and the description doesn't compensate with extra param details, so it meets the minimum viable level.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get full details of a specific run including all step outputs and trace URL.' It specifies the verb ('Get'), resource ('run'), and scope ('full details'), distinguishing it from siblings like 'relay_runs_list' (which likely lists runs) and 'relay_run' (which might be more basic). However, it doesn't explicitly differentiate from 'relay_workflow_run', which could be a similar tool for workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a run ID from another tool, or compare it to siblings like 'relay_run' or 'relay_workflow_run'. The context is implied (use when you have a run ID and want detailed info), but no explicit usage rules or exclusions are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
