Glama

replay_recording

Replay recorded Android device actions to automate repetitive tasks, using JSON files to control app interactions and UI sequences.

Instructions

Replay a previously recorded action sequence.

Input Schema

Name                   Required  Description                                Default
recording_path         Yes       File path to the recording JSON file       (none)
delay_between_actions  No        Delay between actions in ms                500
stop_on_error          No        (no description in schema)                 false
device_id              No        Override the device ID from the recording  (none)
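The implementation reads only a handful of fields from the loaded recording (id, name, deviceId, and each action's id and tool). A hypothetical sketch of the recording file's shape, inferred from those reads — the params field is an assumption, since the excerpt never shows what executeAction consumes:

```typescript
// Inferred shape of a recording file; 'params' is hypothetical.
interface RecordedAction {
  id: string;
  tool: string;
  params?: Record<string, unknown>; // assumed payload for executeAction
}

interface Recording {
  id: string;
  name: string;
  deviceId: string;
  actions: RecordedAction[];
}

const example: Recording = {
  id: 'rec-001',
  name: 'login-flow',
  deviceId: 'emulator-5554',
  actions: [
    { id: 'a1', tool: 'click_element', params: { text: 'Sign in' } },
  ],
};
console.log(example.actions.length); // 1
```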

Implementation Reference

  • The core implementation of the replay logic that iterates through recorded actions and executes them.
    export async function replayRecording(
      recording: Recording,
      options: {
        delayBetweenActions?: number;
        stopOnError?: boolean;
        deviceId?: string;
      } = {}
    ): Promise<ReplayResult> {
      const {
        delayBetweenActions = 500,
        stopOnError = false,
        deviceId = recording.deviceId,
      } = options;
    
      const result: ReplayResult = {
        totalActions: recording.actions.length,
        successCount: 0,
        failureCount: 0,
        results: [],
      };
    
      log.info('Replaying recording', {
        id: recording.id,
        name: recording.name,
        actionCount: recording.actions.length,
        deviceId,
      });
    
      for (const action of recording.actions) {
        const start = Date.now();
        let success = true;
        let error: string | undefined;
    
        try {
          await executeAction(action, deviceId);
        } catch (err) {
          success = false;
          error = err instanceof Error ? err.message : String(err);
          log.error('Replay action failed', { actionId: action.id, tool: action.tool, error });
    
          if (stopOnError) {
            result.results.push({
              actionId: action.id,
              tool: action.tool,
              success: false,
              error,
              durationMs: Date.now() - start,
            });
            result.failureCount++;
            break;
          }
        }
    
        result.results.push({
          actionId: action.id,
          tool: action.tool,
          success,
          error,
          durationMs: Date.now() - start,
        });
    
        if (success) result.successCount++;
        else result.failureCount++;
    
        // Delay between actions
        if (delayBetweenActions > 0) {
          await new Promise(resolve => setTimeout(resolve, delayBetweenActions));
        }
      }
    
      log.info('Replay completed', {
        id: recording.id,
        success: result.successCount,
        failures: result.failureCount,
      });
    
      return result;
    }
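The stop-on-error bookkeeping above can be sketched in isolation. This is a simplified stand-in — runAction is a hypothetical substitute for executeAction, and the delay and logging are omitted — showing that a failure with stopOnError set leaves the remaining actions unattempted:

```typescript
type ActionResult = { actionId: string; success: boolean };

// Minimal replay loop mirroring the success/failure counting above.
async function replayLoop(
  actionIds: string[],
  runAction: (id: string) => Promise<void>, // stand-in for executeAction
  stopOnError = false,
): Promise<{ successCount: number; failureCount: number; results: ActionResult[] }> {
  let successCount = 0;
  let failureCount = 0;
  const results: ActionResult[] = [];
  for (const id of actionIds) {
    try {
      await runAction(id);
      results.push({ actionId: id, success: true });
      successCount++;
    } catch {
      results.push({ actionId: id, success: false });
      failureCount++;
      if (stopOnError) break; // remaining actions are never attempted
    }
  }
  return { successCount, failureCount, results };
}

// With stopOnError, a failure on the second action skips the third entirely.
replayLoop(
  ['a1', 'a2', 'a3'],
  async id => { if (id === 'a2') throw new Error('element not found'); },
  true,
).then(r => console.log(r.successCount, r.failureCount, r.results.length)); // 1 1 2
```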
  • Registration of the 'replay_recording' MCP tool, which parses input and calls the replayRecording handler.
    server.registerTool(
      'replay_recording',
      {
        description: 'Replay a previously recorded action sequence.',
        inputSchema: {
          recording_path: z.string().describe('File path to the recording JSON file'),
          delay_between_actions: z.number().optional().default(500).describe('Delay between actions in ms'),
          stop_on_error: z.boolean().optional().default(false),
          device_id: z.string().optional().describe('Override the device ID from the recording'),
        },
      },
      async ({ recording_path, delay_between_actions, stop_on_error, device_id }) => {
        return await metrics.measure('replay_recording', device_id || 'default', async () => {
          const recording = actionRecorder.loadRecording(recording_path);
          const result = await replayRecording(recording, {
            delayBetweenActions: delay_between_actions,
            stopOnError: stop_on_error,
            deviceId: device_id,
          });
          return {
            content: [{
              type: 'text' as const,
              text: JSON.stringify({
                success: result.failureCount === 0,
                summary: {
                  total: result.totalActions,
                  succeeded: result.successCount,
                  failed: result.failureCount,
                },
                results: result.results,
              }, null, 2),
            }],
          };
        });
      }
    );
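A call to this tool, as an MCP client would shape the arguments against the schema above. The path is illustrative, not from the project:

```typescript
// Hypothetical tool-call arguments for replay_recording.
const callArguments = {
  recording_path: '/sdcard/recordings/login-flow.json',
  delay_between_actions: 250,
  stop_on_error: true,
  // device_id omitted: replay falls back to the deviceId stored in the recording
};
console.log(Object.keys(callArguments).length); // 3
```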
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention that the tool executes mutations on device state (side effects), whether it blocks until completion, how errors are handled (despite the stop_on_error parameter), or whether replay is idempotent. 'Replay' implies execution but gives no safety or behavioral specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
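One way to close this gap is MCP tool annotations. A hedged sketch of what the registration could declare — the field names come from the MCP spec's ToolAnnotations, while the values are assumptions about this tool's behavior, not the project's actual choices:

```typescript
// Hypothetical annotations for replay_recording, if the server added them.
const replayAnnotations = {
  title: 'Replay Recording',
  readOnlyHint: false,    // replay mutates device state
  destructiveHint: true,  // recorded taps/inputs can change or delete app data
  idempotentHint: false,  // replaying twice performs the actions twice
  openWorldHint: false,   // operates only on the connected device
};
console.log(replayAnnotations.readOnlyHint); // false
```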

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of six words with no redundancy. Information is front-loaded and dense. While underspecified overall, the brevity itself is structurally optimal for the content provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a complex 4-parameter mutation tool with no output schema and no annotations. Missing critical context: supported recording formats, whether the device must be in a specific state, what determines success/failure, and how this relates to the broader recording workflow implied by sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% (3 of 4 params documented). The description adds no parameter-specific context, but meets baseline expectations since coverage is moderately high. It does not, however, compensate for the undocumented stop_on_error parameter, which has no schema description and whose error-handling semantics go unmentioned in the text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (replay) and resource (action sequence). The phrase 'previously recorded' effectively distinguishes this from siblings start_recording and stop_recording. However, it omits the context that this executes UI automation on a device, which would strengthen clarity given the tool's ecosystem.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use versus alternatives (e.g., run_test_scenario or individual actions like click_element), nor prerequisites such as requiring an existing recording created via start_recording. The description is purely definitional without operational context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
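Pulling the review's points together, a candidate description that adds behavior, prerequisites, and guidance could read as follows. The wording is illustrative, and the sibling tool names (start_recording, stop_recording, click_element) come from the review text above, not from the project's verified tool list:

```typescript
// Illustrative rewrite of the tool description addressing the review findings.
const description =
  'Replay a previously recorded action sequence on a connected Android device. ' +
  'Executes each recorded UI action in order (side effects: taps, text input, app state changes) ' +
  'and blocks until all actions finish or, with stop_on_error, until the first failure. ' +
  'Requires a recording file produced by start_recording/stop_recording. ' +
  'For one-off interactions, use individual action tools such as click_element instead.';
console.log(description.includes('stop_on_error'));
```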

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/divineDev-dotcom/android_mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server