
analyze_test_execution_video

Analyzes test execution videos using frame extraction and AI analysis to identify failures, compare them with test cases, and determine whether an issue is an application bug or a test problem.

Instructions

🎬 Downloads and analyzes a test execution video with Claude Vision - extracts frames, compares them with the test case, and predicts whether the failure is a bug or a test issue. NEW: analysis depth modes (quick/standard/detailed), parallel frame extraction, similar-failures search, and historical trend analysis!

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| testId | Yes | Test ID from Zebrunner | - |
| testRunId | Yes | Launch ID / Test Run ID | - |
| projectKey | No | Project key (MCP, etc.) | - |
| projectId | No | Project ID (alternative to projectKey) | - |
| extractionMode | No | Frame extraction mode: failure_focused (10 frames around failure), full_test (30 frames throughout), smart (20 frames at key moments) | smart |
| frameInterval | No | Seconds between frames for full_test mode | 5 |
| failureWindowSeconds | No | Time window around failure to analyze (seconds) | 30 |
| compareWithTestCase | No | Compare video execution with test case steps | true |
| testCaseKey | No | Override test case key if different from test metadata | - |
| analysisDepth | No | Analysis depth: quick_text_only (no frames, ~10-20s), standard (8-12 frames for failure+coverage, ~30-60s), detailed (20-30 frames with OCR, ~60-120s) | standard |
| includeOCR | No | Extract text from frames using OCR (slow, adds 2-3s per frame) | false |
| analyzeSimilarFailures | No | Find similar failures in project (last 30 days, top 10) | true |
| includeHistoricalTrends | No | Analyze test stability and flakiness (last 30 runs) | true |
| includeLogCorrelation | No | Correlate frames with log timestamps | true |
| format | No | Output format: detailed (full analysis), summary (condensed), jira (ticket-ready) | detailed |
| generateVideoReport | No | Generate timestamped video analysis report | true |
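
For example, a client could invoke the tool like this (a sketch using the TypeScript MCP SDK; the transport command, client name, and all argument values are illustrative assumptions, while the tool name and parameter names come from the schema above):

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Hypothetical wiring: adjust command/args to however you run mcp-zebrunner.
    const transport = new StdioClientTransport({ command: "npx", args: ["mcp-zebrunner"] });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // testId/testRunId below are placeholders for real Zebrunner identifiers.
    const result = await client.callTool({
      name: "analyze_test_execution_video",
      arguments: {
        testId: 12345,
        testRunId: 678,
        projectKey: "MCP",
        analysisDepth: "detailed", // 20-30 frames with OCR, ~60-120s
        includeOCR: true,
        format: "jira"             // ticket-ready output
      }
    });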

Implementation Reference

  • Zod input schema definition for the 'analyze_test_execution_video' MCP tool, including all parameters for video analysis configuration.
    import { z } from "zod";

    export const AnalyzeTestExecutionVideoInputSchema = z.object({
      testId: z.number().int().positive().describe("Test ID from Zebrunner"),
      testRunId: z.number().int().positive().describe("Launch ID / Test Run ID"),
      projectKey: z.string().min(1).optional().describe("Project key (MCP, etc.)"),
      projectId: z.number().int().positive().optional().describe("Project ID (alternative to projectKey)"),

      // Video Analysis Options
      extractionMode: z.enum(['failure_focused', 'full_test', 'smart']).default('smart')
        .describe("Frame extraction mode: failure_focused (10 frames around failure), full_test (30 frames throughout), smart (20 frames at key moments)"),
      frameInterval: z.number().int().positive().default(5)
        .describe("Seconds between frames for full_test mode"),
      failureWindowSeconds: z.number().int().positive().default(30)
        .describe("Time window around failure to analyze (seconds)"),

      // Test Case Comparison
      compareWithTestCase: z.boolean().default(true)
        .describe("Compare video execution with test case steps"),
      testCaseKey: z.string().optional()
        .describe("Override test case key if different from test metadata"),

      // Analysis Depth
      analysisDepth: z.enum(['quick_text_only', 'standard', 'detailed']).default('standard')
        .describe("Analysis depth: quick_text_only (no frames, ~10-20s), standard (8-12 frames for failure+coverage, ~30-60s), detailed (20-30 frames with OCR, ~60-120s)"),
      includeOCR: z.boolean().default(false)
        .describe("Extract text from frames using OCR (slow, adds 2-3s per frame)"),
      analyzeSimilarFailures: z.boolean().default(true)
        .describe("Find similar failures in project (last 30 days, top 10)"),
      includeHistoricalTrends: z.boolean().default(true)
        .describe("Analyze test stability and flakiness (last 30 runs)"),
      includeLogCorrelation: z.boolean().default(true)
        .describe("Correlate frames with log timestamps"),

      // Output Format
      format: z.enum(['detailed', 'summary', 'jira']).default('detailed')
        .describe("Output format: detailed (full analysis), summary (condensed), jira (ticket-ready)"),
      generateVideoReport: z.boolean().default(true)
        .describe("Generate timestamped video analysis report")
    });
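  • Example (illustrative, not from the repository): validating raw tool arguments with the schema above. `parse` is standard Zod; it fills in the documented defaults for omitted fields and throws a ZodError on invalid input.
    // Hypothetical usage sketch: only testId and testRunId are required.
    const parsed = AnalyzeTestExecutionVideoInputSchema.parse({
      testId: 12345,   // placeholder IDs, not real Zebrunner data
      testRunId: 678,
      projectKey: "MCP"
    });

    console.log(parsed.extractionMode); // "smart" (default)
    console.log(parsed.analysisDepth);  // "standard" (default)
    console.log(parsed.format);         // "detailed" (default)

    // A non-positive testId fails the .int().positive() check and throws:
    // AnalyzeTestExecutionVideoInputSchema.parse({ testId: -1, testRunId: 678 });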
  • Main helper function implementing the core video analysis logic: downloads video, extracts frames, parses logs, compares with test cases, analyzes failure, generates predictions and summary.
    async analyzeTestExecutionVideo(params: VideoAnalysisParams): Promise<VideoAnalysisResult> {
      let videoPath: string | undefined;
      try {
        if (this.debug) {
          console.log('[VideoAnalyzer] Starting video analysis for test:', params.testId);
        }

        // Step 1: Fetch test details and determine project
        const { test, projectId, projectKey } = await this.fetchTestDetails(
          params.testId,
          params.testRunId,
          params.projectKey,
          params.projectId
        );
        if (this.debug) {
          console.log(`[VideoAnalyzer] Test: ${test.name}, Project: ${projectKey} (${projectId})`);
        }

        // Step 2: Get video URL and download video
        const videoInfo = await this.downloader.getVideoUrlFromTestSessions(
          params.testId,
          params.testRunId,
          projectId
        );
        if (!videoInfo) {
          throw new Error('No video found for this test execution');
        }
        const downloadResult = await this.downloader.downloadVideo(
          videoInfo.videoUrl,
          params.testId,
          videoInfo.sessionId
        );
        if (!downloadResult.success || !downloadResult.localPath) {
          throw new Error(downloadResult.error || 'Failed to download video');
        }
        videoPath = downloadResult.localPath;
        if (this.debug) {
          console.log(`[VideoAnalyzer] Video downloaded: ${downloadResult.duration}s, ${downloadResult.resolution}`);
        }

        // Step 3: Determine frame extraction strategy based on video duration
        // Note: Video length ≠ test execution time (video may start late, end early, have gaps)
        // So we extract frames throughout the video + extra at the end (where failures typically occur)
        let frames: any[] = [];

        // Determine frame extraction settings based on analysis depth
        const { shouldExtractFrames, extractionMode, includeOCR, minFrames, maxFrames } =
          this.getFrameExtractionSettings(params.analysisDepth);

        let frameExtractionError: string | undefined;

        if (shouldExtractFrames) {
          try {
            const videoDuration = downloadResult.duration || 0;
            if (this.debug) {
              console.error(`[VideoAnalyzer] Starting frame extraction: mode=smart_distributed, minFrames=${minFrames}, maxFrames=${maxFrames}, duration=${videoDuration}s`);
              console.error(`[VideoAnalyzer] Note: Extracting frames throughout video + extra frames in last 30s (where failures typically occur)`);
            }

            // Use test timing as hints only (not for exact frame timestamps)
            const testDurationHint = this.calculateTestDurationHint(test);

            frames = await this.extractor.extractFrames(
              videoPath,
              videoDuration,
              'smart',                        // Always use smart mode (distributed + end-focused)
              undefined,                      // Don't pass calculated failure timestamp - let it use end of video
              30,                             // Always extract extra frames in last 30 seconds
              params.frameInterval,
              includeOCR || params.includeOCR // Allow manual override
            );

            // Limit frames based on analysis depth (max)
            if (frames.length > maxFrames) {
              if (this.debug) {
                console.error(`[VideoAnalyzer] Limiting frames from ${frames.length} to ${maxFrames} for ${params.analysisDepth} mode`);
              }
              frames = frames.slice(0, maxFrames);
            }

            if (this.debug) {
              console.error(`[VideoAnalyzer] ✅ Extracted ${frames.length} frames (${params.analysisDepth} mode)`);
            }

            // Enforce minimum frames for visual analysis
            if (frames.length < minFrames && minFrames > 0) {
              frameExtractionError = `Frame extraction produced only ${frames.length} frames (minimum required: ${minFrames}). Possible causes: video too short, extraction failed, or FFmpeg issues.`;
              console.error(`[VideoAnalyzer] ⚠️ ${frameExtractionError}`);
            } else if (frames.length === 0) {
              frameExtractionError = `Frame extraction completed but produced 0 frames. Possible causes: video too short, invalid timestamps, or FFmpeg extraction issues.`;
              console.error(`[VideoAnalyzer] ⚠️ ${frameExtractionError}`);
            }
          } catch (frameError: any) {
            frameExtractionError = `Frame extraction failed: ${frameError.message || frameError}`;
            console.error(`[VideoAnalyzer] ❌ ${frameExtractionError}`);
            console.error(`[VideoAnalyzer] Continuing with text-only analysis (logs, test case comparison, predictions)`);
            frames = []; // Continue with empty frames array
          }
        } else {
          if (this.debug) {
            console.error(`[VideoAnalyzer] Skipping frame extraction (${params.analysisDepth} mode)`);
          }
          frameExtractionError = `Frame extraction skipped (analysisDepth: ${params.analysisDepth})`;
        }

        // Step 4: Fetch logs and parse execution steps
        const logsResponse = await this.reportingClient.getTestLogsAndScreenshots(
          params.testRunId,
          params.testId,
          { maxPageSize: 1000 }
        );
        const logItems = logsResponse.items.filter(item => item.kind === 'log');
        const logSteps = this.parseLogsToSteps(logItems);
        if (this.debug) {
          console.log(`[VideoAnalyzer] Parsed ${logSteps.length} log steps`);
        }

        // Step 5: Analyze failure
        // Assume failure is near the end of video (most common case)
        // Use last 30 seconds as the failure window for frame correlation
        const estimatedFailureTimestamp = Math.max(0, (downloadResult.duration || 0) - 15); // 15s before end
        const failureAnalysis = this.analyzeFailure(test, logItems, frames, estimatedFailureTimestamp);

        // Step 6: Compare with test cases (if enabled) WITH VISUAL VERIFICATION
        // NEW: Support for MULTIPLE test cases!
        let testCaseComparison = null;
        let multiTestCaseComparison = null;
        if (params.compareWithTestCase && this.comparator && test.testCases && test.testCases.length > 0) {
          // Collect ALL test case keys (not just the first one!)
          const testCaseKeys: string[] = [];
          if (params.testCaseKey) {
            // User provided a specific test case key
            testCaseKeys.push(params.testCaseKey);
          } else {
            // Use all test cases assigned to the test
            for (const tc of test.testCases) {
              if (tc.testCaseId) {
                testCaseKeys.push(tc.testCaseId);
              }
            }
          }

          if (testCaseKeys.length > 0) {
            if (this.debug) {
              console.log(`[VideoAnalyzer] Found ${testCaseKeys.length} test case(s): ${testCaseKeys.join(', ')}`);
            }
            if (testCaseKeys.length === 1) {
              // Single test case - use legacy comparison
              if (this.debug) {
                console.log(`[VideoAnalyzer] Starting single test case comparison with visual verification (${frames.length} frames)`);
              }
              testCaseComparison = await this.comparator.compareWithTestCase(
                testCaseKeys[0],
                projectKey,
                logSteps,
                frames
              );
            } else {
              // Multiple test cases - use NEW multi-TC comparison
              if (this.debug) {
                console.log(`[VideoAnalyzer] Starting MULTI test case comparison with visual verification (${frames.length} frames)`);
              }
              const baseUrl = this.reportingClient['config'].baseUrl;
              multiTestCaseComparison = await this.comparator.compareWithMultipleTestCases(
                testCaseKeys,
                projectKey,
                logSteps,
                frames,
                baseUrl // Pass baseUrl for building clickable TC URLs
              );
            }
          }
        }

        // Step 7: Generate prediction
        const prediction = this.predictor.predictIssueType(
          failureAnalysis,
          testCaseComparison,
          frames,
          JSON.stringify(logItems)
        );

        // Step 8: Build video metadata
        const videoMetadata: VideoMetadata = {
          videoUrl: videoInfo.videoUrl,
          sessionId: videoInfo.sessionId,
          sessionStart: videoInfo.sessionStart,
          sessionEnd: videoInfo.sessionEnd,
          videoDuration: downloadResult.duration || 0,
          extractedFrames: frames.length,
          videoResolution: downloadResult.resolution || 'unknown',
          downloadSuccess: true,
          localVideoPath: videoPath,
          platformName: videoInfo.platformName,
          deviceName: videoInfo.deviceName,
          status: videoInfo.status,
          frameExtractionError
        };

        // Step 9: Build execution flow
        const executionFlow = {
          stepsFromLogs: logSteps,
          stepsFromVideo: frames.map((f, idx) => ({
            stepNumber: idx + 1,
            timestamp: f.timestamp,
            inferredAction: f.visualAnalysis || 'Frame analysis pending',
            screenTransition: f.appState || 'Unknown',
            confidence: 'medium' as const
          })),
          correlatedSteps: logSteps.map((logStep, idx) => ({
            logStep: idx + 1,
            videoTimestamp: this.findClosestFrameTimestamp(logStep.timestamp, frames),
            match: true,
            discrepancy: undefined
          }))
        };

        // Step 10: Build links
        const baseUrl = this.reportingClient['config'].baseUrl;
        const links: AnalysisLinks = {
          videoUrl: videoInfo.videoUrl,
          testUrl: `${baseUrl}/tests/runs/${params.testRunId}/results/${params.testId}`,
          testCaseUrl: testCaseComparison
            ? `${baseUrl}/tests/cases/${testCaseComparison.testCaseKey}`
            : undefined
        };

        // Step 11: Generate summary
        const summary = this.generateSummary(test, videoMetadata, prediction, testCaseComparison);

        // Step 12: Cleanup
        if (videoPath) {
          this.downloader.cleanupVideo(videoPath);
        }
        this.extractor.cleanupFrames(frames);

        if (this.debug) {
          console.log('[VideoAnalyzer] Analysis complete!');
        }

        return {
          videoMetadata,
          frames,
          executionFlow,
          testCaseComparison: testCaseComparison || undefined,           // Legacy: single test case
          multiTestCaseComparison: multiTestCaseComparison || undefined, // NEW: multiple test cases
          failureAnalysis,
          prediction,
          summary,
          links
        };
      } catch (error) {
        if (this.debug) {
          console.error('[VideoAnalyzer] Analysis failed:', error);
        }
        // Cleanup on error
        if (videoPath) {
          this.downloader.cleanupVideo(videoPath);
        }
        throw new Error(`Video analysis failed: ${error instanceof Error ? error.message : error}`);
      }
    }
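  • The helper above delegates per-depth settings to getFrameExtractionSettings, which is not shown on this page. A plausible sketch, inferred purely from the documented analysisDepth modes (the real implementation's field names and thresholds may differ):
    // Hypothetical reconstruction based on the parameter docs:
    //   quick_text_only -> no frames, standard -> 8-12 frames, detailed -> 20-30 frames + OCR.
    private getFrameExtractionSettings(depth: 'quick_text_only' | 'standard' | 'detailed') {
      switch (depth) {
        case 'quick_text_only':
          return { shouldExtractFrames: false, extractionMode: 'smart' as const, includeOCR: false, minFrames: 0, maxFrames: 0 };
        case 'detailed':
          return { shouldExtractFrames: true, extractionMode: 'smart' as const, includeOCR: true, minFrames: 20, maxFrames: 30 };
        case 'standard':
        default:
          return { shouldExtractFrames: true, extractionMode: 'smart' as const, includeOCR: false, minFrames: 8, maxFrames: 12 };
      }
    }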

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/maksimsarychau/mcp-zebrunner'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.