Rollbar MCP Server

Official
by rollbar

get-replay

Retrieve session replay data from Rollbar to analyze user interactions and diagnose application errors by specifying environment, session, and replay identifiers.

Instructions

Get replay data for a specific session replay in Rollbar

Input Schema

environment (required): Environment name (e.g., production)
sessionId (required): Session identifier that owns the replay
replayId (required): Replay identifier to retrieve
delivery (optional): How to return the replay payload. Defaults to 'file' (writes JSON to a temp file); 'resource' returns a rollbar:// link.
project (optional): Project name (may be omitted when only one project is configured)
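
For illustration, a minimal arguments object for a get-replay call could look like the following. The identifier values are hypothetical placeholders, not real Rollbar identifiers:

```typescript
// Hypothetical example arguments for the get-replay tool.
// sessionId and replayId are placeholder values for illustration only.
const getReplayArgs = {
  environment: "production", // required: environment name
  sessionId: "sess-1234",    // required: session that owns the replay
  replayId: "replay-5678",   // required: replay to retrieve
  delivery: "resource",      // optional: "file" (the default) or "resource"
  project: "my-frontend",    // optional when only one project is configured
};

console.log(JSON.stringify(getReplayArgs));
```

Omitting delivery selects the default 'file' mode, which writes the payload to a temp file.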

Implementation Reference

  • The tool "get-replay" is defined and registered within the registerGetReplayTool function, which fetches replay data from Rollbar and returns it either as a temp file or as a resource link.
    export function registerGetReplayTool(server: McpServer) {
      server.tool(
        "get-replay",
        "Get replay data for a specific session replay in Rollbar",
        {
          environment: z
            .string()
            .min(1)
            .describe("Environment name (e.g., production)"),
          sessionId: z
            .string()
            .min(1)
            .describe("Session identifier that owns the replay"),
          replayId: z.string().min(1).describe("Replay identifier to retrieve"),
          delivery: DELIVERY_MODE.optional().describe(
            "How to return the replay payload. Defaults to 'file' (writes JSON to a temp file); 'resource' returns a rollbar:// link.",
          ),
          project: buildProjectParam(),
        },
        async ({ environment, sessionId, replayId, delivery, project }) => {
          const deliveryMode = delivery ?? "file";
          const { token, apiBase } = resolveProject(project);
    
          if (deliveryMode === "resource" && PROJECTS.length > 1) {
            throw new Error(
              'delivery="resource" is not supported when multiple projects are configured. Use delivery="file" and specify the project parameter instead.',
            );
          }
    
          const replayData = await fetchReplayData(
            environment,
            sessionId,
            replayId,
            token,
            apiBase,
          );
    
          const resourceUri = buildReplayResourceUri(
            environment,
            sessionId,
            replayId,
          );
    
          cacheReplayData(resourceUri, replayData);
    
          if (deliveryMode === "file") {
            const filePath = await writeReplayToFile(
              replayData,
              environment,
              sessionId,
              replayId,
            );
    
            return {
              content: [
                {
                  type: "text",
                  text: `Replay ${replayId} for session ${sessionId} in ${environment} saved to ${filePath}. This file is not automatically deleted—remove it when finished or rerun with delivery="resource" for a rollbar:// link.`,
                },
              ],
            };
          }
    
          return {
            content: [
              {
                type: "text",
                text: `Replay ${replayId} for session ${sessionId} in ${environment} is available as ${resourceUri}. Use read-resource to download the JSON payload.`,
              },
              {
                type: "resource_link",
                name: resourceUri,
                title: `Replay ${replayId}`,
                uri: resourceUri,
                description: buildResourceLinkDescription(
                  environment,
                  sessionId,
                  replayId,
                ),
                mimeType: "application/json",
              },
            ],
          };
        },
      );
    }
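
The DELIVERY_MODE schema referenced above is not shown in this excerpt. Since the handler only ever compares delivery against "file" and "resource", it is plausibly a two-value enum (e.g. z.enum(["file", "resource"]) in the real source). A dependency-free sketch of the same logic, under that assumption:

```typescript
// Assumed sketch: mirrors the two delivery modes the handler checks.
// Not the actual Rollbar source.
type DeliveryMode = "file" | "resource";

function parseDeliveryMode(value: string | undefined): DeliveryMode {
  // Mirrors `delivery ?? "file"` in the handler above.
  if (value === undefined) return "file";
  if (value === "file" || value === "resource") return value;
  throw new Error(`Unknown delivery mode: ${value}`);
}
```

Under this assumption, omitting delivery selects the temp-file path, matching the default documented in the input schema.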

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full documentation burden, but it only states the read operation without disclosing behavioral traits. It omits crucial details about the delivery modes (file vs. resource), payload size limits, and the format of the replay data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single 9-word sentence is efficiently structured and front-loaded with the action verb. However, given the absence of annotations and an output schema, the extreme brevity leaves critical behavioral context undocumented.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 5 parameters with complex delivery options and no output schema or annotations. The description fails to compensate by explaining return values, payload structure, or behavioral differences between 'file' and 'resource' delivery modes, leaving significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, documenting all 5 parameters including the enum values for 'delivery'. The description implies required identifiers ('specific session replay') but adds no semantic meaning beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Get'), target resource ('replay data'), and scope ('specific session replay in Rollbar'). It distinguishes from siblings like list-items or get-deployments by specifying 'replay' context, though it could clarify relationship to get-item-details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for a specific session replay') but provides no explicit when-to-use guidance, prerequisites for the identifiers, or comparison to sibling tools like get-item-details that might overlap in functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
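
A concrete illustration of the fuller description these critiques point toward. The wording below is a hypothetical sketch assembled from the behaviors visible in the handler, not the maintainers' text:

```typescript
// Illustrative only: a description string addressing the gaps the review notes
// (side effects, delivery modes, output format). Not the actual Rollbar source.
const improvedDescription = [
  "Get replay data (JSON) for a specific session replay in Rollbar.",
  "By default writes the payload to a temp file that is not auto-deleted;",
  'with delivery="resource" it returns a rollbar:// link instead',
  "(not supported when multiple projects are configured).",
].join(" ");

console.log(improvedDescription);
```

Such a description discloses the temp-file side effect and the multi-project restriction up front, which is what the Behavior and Usage Guidelines scores flag as missing.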


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rollbar/rollbar-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.