
analyze_test_execution_video

Analyze test execution videos to identify failures, compare with test cases, and determine if issues are bugs or test problems using frame extraction and AI analysis.

Instructions

🎬 Download and analyze test execution video with Claude Vision - extracts frames, compares with test case, and predicts if failure is bug or test issue. NEW: Analysis depth modes (quick/standard/detailed), parallel frame extraction, similar failures search, and historical trends analysis!

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| testId | Yes | Test ID from Zebrunner | |
| testRunId | Yes | Launch ID / Test Run ID | |
| projectKey | No | Project key (MCP, etc.) | |
| projectId | No | Project ID (alternative to projectKey) | |
| extractionMode | No | Frame extraction mode: failure_focused (10 frames), smart (20 frames), full_test (30 frames) | smart |
| frameInterval | No | Seconds between frames for full_test mode | 5 |
| failureWindowSeconds | No | Time window around failure (seconds) | 30 |
| compareWithTestCase | No | Compare with test case steps | true |
| testCaseKey | No | Override test case key | |
| analysisDepth | No | Analysis depth: quick_text_only (no frames, ~10-20s), standard (8-12 frames for failure+coverage, ~30-60s), detailed (20-30 frames with OCR, ~60-120s) | standard |
| includeOCR | No | Extract text from frames using OCR (slow, adds 2-3s per frame) | false |
| analyzeSimilarFailures | No | Find similar failures in project (last 30 days, top 10) | true |
| includeHistoricalTrends | No | Analyze test stability and flakiness (last 30 runs) | true |
| includeLogCorrelation | No | Correlate frames with log timestamps | true |
| format | No | Output format: detailed, summary, or jira | detailed |
| generateVideoReport | No | Generate timestamped report | true |
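
For illustration, a minimal call needs only the two required IDs; every other parameter falls back to the defaults above. A hypothetical invocation sketch (the IDs and project key are placeholders):

    // Hypothetical tool-call arguments; only testId and testRunId are required.
    const args = {
      testId: 12345,             // placeholder: Test ID from Zebrunner
      testRunId: 678,            // placeholder: Launch ID / Test Run ID
      projectKey: "MCP",
      analysisDepth: "detailed", // 20-30 frames with OCR, ~60-120s
      includeOCR: true,
      format: "jira",            // ticket-ready output
    };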

Implementation Reference

  • Zod input schema definition for the 'analyze_test_execution_video' MCP tool, including all parameters for video analysis configuration.
    export const AnalyzeTestExecutionVideoInputSchema = z.object({
      testId: z.number().int().positive().describe("Test ID from Zebrunner"),
      testRunId: z.number().int().positive().describe("Launch ID / Test Run ID"),
      projectKey: z.string().min(1).optional().describe("Project key (MCP, etc.)"),
      projectId: z.number().int().positive().optional().describe("Project ID (alternative to projectKey)"),
    
      // Video Analysis Options
      extractionMode: z.enum(['failure_focused', 'full_test', 'smart']).default('smart')
        .describe("Frame extraction mode: failure_focused (10 frames around failure), full_test (30 frames throughout), smart (20 frames at key moments)"),
      frameInterval: z.number().int().positive().default(5)
        .describe("Seconds between frames for full_test mode"),
      failureWindowSeconds: z.number().int().positive().default(30)
        .describe("Time window around failure to analyze (seconds)"),
    
      // Test Case Comparison
      compareWithTestCase: z.boolean().default(true)
        .describe("Compare video execution with test case steps"),
      testCaseKey: z.string().optional()
        .describe("Override test case key if different from test metadata"),
    
      // Analysis Depth
      analysisDepth: z.enum(['quick_text_only', 'standard', 'detailed']).default('standard')
        .describe("Analysis depth: quick_text_only (no frames, ~10-20s), standard (8-12 frames for failure+coverage, ~30-60s), detailed (20-30 frames with OCR, ~60-120s)"),
      includeOCR: z.boolean().default(false)
        .describe("Extract text from frames using OCR (slow, adds 2-3s per frame)"),
      analyzeSimilarFailures: z.boolean().default(true)
        .describe("Find similar failures in project (last 30 days, top 10)"),
      includeHistoricalTrends: z.boolean().default(true)
        .describe("Analyze test stability and flakiness (last 30 runs)"),
      includeLogCorrelation: z.boolean().default(true)
        .describe("Correlate frames with log timestamps"),
    
      // Output Format
      format: z.enum(['detailed', 'summary', 'jira']).default('detailed')
        .describe("Output format: detailed (full analysis), summary (condensed), jira (ticket-ready)"),
      generateVideoReport: z.boolean().default(true)
        .describe("Generate timestamped video analysis report")
    });
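
    Since every optional field declares a default, Zod's .parse() fills them in automatically. A minimal validation sketch (the IDs are placeholders):

    const params = AnalyzeTestExecutionVideoInputSchema.parse({
      testId: 12345,
      testRunId: 678,
    });
    // After parsing: params.extractionMode === 'smart',
    // params.analysisDepth === 'standard', params.format === 'detailed',
    // and every boolean flag except includeOCR defaults to true.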
  • Main helper function implementing the core video analysis logic: downloads video, extracts frames, parses logs, compares with test cases, analyzes failure, generates predictions and summary.
    async analyzeTestExecutionVideo(params: VideoAnalysisParams): Promise<VideoAnalysisResult> {
      let videoPath: string | undefined;
      
      try {
        if (this.debug) {
          console.log('[VideoAnalyzer] Starting video analysis for test:', params.testId);
        }
    
        // Step 1: Fetch test details and determine project
        const { test, projectId, projectKey } = await this.fetchTestDetails(
          params.testId,
          params.testRunId,
          params.projectKey,
          params.projectId
        );
    
        if (this.debug) {
          console.log(`[VideoAnalyzer] Test: ${test.name}, Project: ${projectKey} (${projectId})`);
        }
    
        // Step 2: Get video URL and download video
        const videoInfo = await this.downloader.getVideoUrlFromTestSessions(
          params.testId,
          params.testRunId,
          projectId
        );
    
        if (!videoInfo) {
          throw new Error('No video found for this test execution');
        }
    
        const downloadResult = await this.downloader.downloadVideo(
          videoInfo.videoUrl,
          params.testId,
          videoInfo.sessionId
        );
    
        if (!downloadResult.success || !downloadResult.localPath) {
          throw new Error(downloadResult.error || 'Failed to download video');
        }
    
        videoPath = downloadResult.localPath;
    
        if (this.debug) {
          console.log(`[VideoAnalyzer] Video downloaded: ${downloadResult.duration}s, ${downloadResult.resolution}`);
        }
    
        // Step 3: Determine frame extraction strategy based on video duration
        // Note: Video length ≠ test execution time (video may start late, end early, have gaps)
        // So we extract frames throughout the video + extra at the end (where failures typically occur)
        
        let frames: any[] = [];
        
        // Determine frame extraction settings based on analysis depth
        const { shouldExtractFrames, extractionMode, includeOCR, minFrames, maxFrames } = this.getFrameExtractionSettings(params.analysisDepth);
        
        let frameExtractionError: string | undefined;
        
        if (shouldExtractFrames) {
          try {
            const videoDuration = downloadResult.duration || 0;
            
            if (this.debug) {
              console.error(`[VideoAnalyzer] Starting frame extraction: mode=smart_distributed, minFrames=${minFrames}, maxFrames=${maxFrames}, duration=${videoDuration}s`);
              console.error(`[VideoAnalyzer] Note: Extracting frames throughout video + extra frames in last 30s (where failures typically occur)`);
            }
            
            // Use test timing as hints only (not for exact frame timestamps)
            const testDurationHint = this.calculateTestDurationHint(test);
            
            frames = await this.extractor.extractFrames(
              videoPath,
              videoDuration,
              'smart', // Always use smart mode (distributed + end-focused)
              undefined, // Don't pass calculated failure timestamp - let it use end of video
              30, // Always extract extra frames in last 30 seconds
              params.frameInterval,
              includeOCR || params.includeOCR  // Allow manual override
            );
    
            // Limit frames based on analysis depth (max)
            if (frames.length > maxFrames) {
              if (this.debug) {
                console.error(`[VideoAnalyzer] Limiting frames from ${frames.length} to ${maxFrames} for ${params.analysisDepth} mode`);
              }
              frames = frames.slice(0, maxFrames);
            }
    
            if (this.debug) {
              console.error(`[VideoAnalyzer] ✅ Extracted ${frames.length} frames (${params.analysisDepth} mode)`);
            }
    
            // Enforce minimum frames for visual analysis
            if (frames.length < minFrames && minFrames > 0) {
              frameExtractionError = `Frame extraction produced only ${frames.length} frames (minimum required: ${minFrames}). Possible causes: video too short, extraction failed, or FFmpeg issues.`;
              console.error(`[VideoAnalyzer] ⚠️  ${frameExtractionError}`);
            } else if (frames.length === 0) {
              frameExtractionError = `Frame extraction completed but produced 0 frames. Possible causes: video too short, invalid timestamps, or FFmpeg extraction issues.`;
              console.error(`[VideoAnalyzer] ⚠️  ${frameExtractionError}`);
            }
          } catch (frameError: any) {
            frameExtractionError = `Frame extraction failed: ${frameError.message || frameError}`;
            console.error(`[VideoAnalyzer] ❌ ${frameExtractionError}`);
            console.error(`[VideoAnalyzer] Continuing with text-only analysis (logs, test case comparison, predictions)`);
            frames = []; // Continue with empty frames array
          }
        } else {
          if (this.debug) {
            console.error(`[VideoAnalyzer] Skipping frame extraction (${params.analysisDepth} mode)`);
          }
          frameExtractionError = `Frame extraction skipped (analysisDepth: ${params.analysisDepth})`;
        }
    
        // Step 4: Fetch logs and parse execution steps
        const logsResponse = await this.reportingClient.getTestLogsAndScreenshots(params.testRunId, params.testId, { maxPageSize: 1000 });
        const logItems = logsResponse.items.filter(item => item.kind === 'log');
        const logSteps = this.parseLogsToSteps(logItems);
    
        if (this.debug) {
          console.log(`[VideoAnalyzer] Parsed ${logSteps.length} log steps`);
        }
    
        // Step 5: Analyze failure
        // Assume failure is near the end of video (most common case)
        // Use last 30 seconds as the failure window for frame correlation
        const estimatedFailureTimestamp = Math.max(0, (downloadResult.duration || 0) - 15); // 15s before end
        const failureAnalysis = this.analyzeFailure(test, logItems, frames, estimatedFailureTimestamp);
    
        // Step 6: Compare with test cases (if enabled) WITH VISUAL VERIFICATION
        // NEW: Support for MULTIPLE test cases!
        let testCaseComparison = null;
        let multiTestCaseComparison = null;
        
        if (params.compareWithTestCase && this.comparator && test.testCases && test.testCases.length > 0) {
          // Collect ALL test case keys (not just first one!)
          const testCaseKeys: string[] = [];
          
          if (params.testCaseKey) {
            // User provided specific test case key
            testCaseKeys.push(params.testCaseKey);
          } else {
            // Use all test cases assigned to test
            for (const tc of test.testCases) {
              if (tc.testCaseId) {
                testCaseKeys.push(tc.testCaseId);
              }
            }
          }
          
          if (testCaseKeys.length > 0) {
            if (this.debug) {
              console.log(`[VideoAnalyzer] Found ${testCaseKeys.length} test case(s): ${testCaseKeys.join(', ')}`);
            }
            
            if (testCaseKeys.length === 1) {
              // Single test case - use legacy comparison
              if (this.debug) {
                console.log(`[VideoAnalyzer] Starting single test case comparison with visual verification (${frames.length} frames)`);
              }
              
              testCaseComparison = await this.comparator.compareWithTestCase(
                testCaseKeys[0],
                projectKey,
                logSteps,
                frames
              );
            } else {
              // Multiple test cases - use NEW multi-TC comparison
              if (this.debug) {
                console.log(`[VideoAnalyzer] Starting MULTI test case comparison with visual verification (${frames.length} frames)`);
              }
              
              const baseUrl = this.reportingClient['config'].baseUrl;
              multiTestCaseComparison = await this.comparator.compareWithMultipleTestCases(
                testCaseKeys,
                projectKey,
                logSteps,
                frames,
                baseUrl  // Pass baseUrl for building clickable TC URLs
              );
            }
          }
        }
    
        // Step 7: Generate prediction
        const prediction = this.predictor.predictIssueType(
          failureAnalysis,
          testCaseComparison,
          frames,
          JSON.stringify(logItems)
        );
    
        // Step 8: Build video metadata
        const videoMetadata: VideoMetadata = {
          videoUrl: videoInfo.videoUrl,
          sessionId: videoInfo.sessionId,
          sessionStart: videoInfo.sessionStart,
          sessionEnd: videoInfo.sessionEnd,
          videoDuration: downloadResult.duration || 0,
          extractedFrames: frames.length,
          videoResolution: downloadResult.resolution || 'unknown',
          downloadSuccess: true,
          localVideoPath: videoPath,
          platformName: videoInfo.platformName,
          deviceName: videoInfo.deviceName,
          status: videoInfo.status,
          frameExtractionError
        };
    
        // Step 9: Build execution flow
        const executionFlow = {
          stepsFromLogs: logSteps,
          stepsFromVideo: frames.map((f, idx) => ({
            stepNumber: idx + 1,
            timestamp: f.timestamp,
            inferredAction: f.visualAnalysis || 'Frame analysis pending',
            screenTransition: f.appState || 'Unknown',
            confidence: 'medium' as const
          })),
          correlatedSteps: logSteps.map((logStep, idx) => ({
            logStep: idx + 1,
            videoTimestamp: this.findClosestFrameTimestamp(logStep.timestamp, frames),
            match: true,
            discrepancy: undefined
          }))
        };
    
        // Step 10: Build links
        const baseUrl = this.reportingClient['config'].baseUrl;
        const links: AnalysisLinks = {
          videoUrl: videoInfo.videoUrl,
          testUrl: `${baseUrl}/tests/runs/${params.testRunId}/results/${params.testId}`,
          testCaseUrl: testCaseComparison 
            ? `${baseUrl}/tests/cases/${testCaseComparison.testCaseKey}`
            : undefined
        };
    
        // Step 11: Generate summary
        const summary = this.generateSummary(
          test,
          videoMetadata,
          prediction,
          testCaseComparison
        );
    
        // Step 12: Cleanup
        if (videoPath) {
          this.downloader.cleanupVideo(videoPath);
        }
        this.extractor.cleanupFrames(frames);
    
        if (this.debug) {
          console.log('[VideoAnalyzer] Analysis complete!');
        }
    
        return {
          videoMetadata,
          frames,
          executionFlow,
          testCaseComparison: testCaseComparison || undefined,  // Legacy: single test case
          multiTestCaseComparison: multiTestCaseComparison || undefined,  // NEW: multiple test cases
          failureAnalysis,
          prediction,
          summary,
          links
        };
    
      } catch (error) {
        if (this.debug) {
          console.error('[VideoAnalyzer] Analysis failed:', error);
        }
    
        // Cleanup on error
        if (videoPath) {
          this.downloader.cleanupVideo(videoPath);
        }
    
        throw new Error(`Video analysis failed: ${error instanceof Error ? error.message : error}`);
      }
    }
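  • The depth-to-budget mapping behind getFrameExtractionSettings is not shown here. The following is a hypothetical sketch reconstructed from the frame counts quoted in the schema descriptions (quick_text_only: no frames, standard: 8-12, detailed: 20-30 with OCR), not the actual implementation.
    interface FrameExtractionSettings {
      shouldExtractFrames: boolean;
      extractionMode: 'failure_focused' | 'full_test' | 'smart';
      includeOCR: boolean;
      minFrames: number;
      maxFrames: number;
    }

    // Hypothetical mapping from analysis depth to frame extraction budget.
    function getFrameExtractionSettings(
      analysisDepth: 'quick_text_only' | 'standard' | 'detailed'
    ): FrameExtractionSettings {
      switch (analysisDepth) {
        case 'quick_text_only':
          // Text-only analysis: skip frame extraction entirely.
          return { shouldExtractFrames: false, extractionMode: 'smart', includeOCR: false, minFrames: 0, maxFrames: 0 };
        case 'detailed':
          // Widest frame budget; OCR enabled by default (slow).
          return { shouldExtractFrames: true, extractionMode: 'smart', includeOCR: true, minFrames: 20, maxFrames: 30 };
        case 'standard':
        default:
          // Enough frames for the failure window plus overall coverage.
          return { shouldExtractFrames: true, extractionMode: 'smart', includeOCR: false, minFrames: 8, maxFrames: 12 };
      }
    }
  • findClosestFrameTimestamp (used in Step 9 to correlate log steps with frames) can be a nearest-neighbor scan. A sketch assuming log and frame timestamps share a time base (the real implementation would need to subtract the session-start offset first):
    function findClosestFrameTimestamp(
      logTimestamp: number,
      frames: Array<{ timestamp: number }>
    ): number | undefined {
      if (frames.length === 0) return undefined;
      // Pick the frame whose timestamp is nearest to the log entry's.
      return frames.reduce((best, f) =>
        Math.abs(f.timestamp - logTimestamp) < Math.abs(best.timestamp - logTimestamp) ? f : best
      ).timestamp;
    }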

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it mentions performance characteristics ('quick_text_only (no frames, ~10-20s)'), processing details ('parallel frame extraction'), and additional capabilities ('similar failures search, historical trends analysis'). However, it doesn't mention authentication requirements, rate limits, or error-handling scenarios.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The feature list in the second sentence is somewhat dense but relevant. Every sentence earns its place by conveying important capabilities, though the exclamation point and 'NEW' tag could be considered slightly promotional rather than purely informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (16 parameters, no output schema, no annotations), the description does a good job covering the tool's scope and capabilities. It explains what the tool does, mentions analysis modes, and highlights key features. However, for such a complex tool, it could benefit from more guidance on output format or result interpretation since there's no output schema provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description doesn't add meaningful parameter semantics beyond what's already in the schema - it mentions analysis depth modes and parallel frame extraction but doesn't explain parameter interactions or provide usage examples. The schema already thoroughly documents all 16 parameters with descriptions and defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Download and analyze test execution video with Claude Vision - extracts frames, compares with test case, and predicts if failure is bug or test issue.' It uses specific verbs (download, analyze, extracts, compares, predicts) and distinguishes from sibling tools like analyze_screenshot or analyze_test_failure by focusing specifically on video analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through its feature list ('NEW: Analysis depth modes...') but doesn't explicitly state when to use this tool versus alternatives like analyze_screenshot or analyze_test_failure. It suggests video analysis is appropriate but doesn't provide guidance on prerequisites or when other tools might be better suited.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
