analyze_test_failure

Analyze failed tests with forensic detail, including logs, screenshots, error classification, and a comparison with the last passed execution, to identify root causes.

Instructions

๐Ÿ” Deep forensic analysis of failed test including logs, screenshots, error classification, and similar failures. ๐Ÿ’ก NEW: Compare with last passed execution to see what changed! ๐Ÿ’ก TIP: Can be auto-invoked from Zebrunner test URLs like: https://workspace.zebrunner.com/projects/PROJECT/automation-launches/LAUNCH_ID/tests/TEST_ID

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| testId | Yes | Test ID (e.g., 5451420) | |
| testRunId | Yes | Test Run ID / Launch ID (e.g., 120806) | |
| projectKey | No | Project key (e.g., 'MCP') - alternative to projectId | |
| projectId | No | Project ID - alternative to projectKey | |
| includeScreenshots | No | Include screenshot links | |
| includeLogs | No | Include log analysis | |
| includeArtifacts | No | Include all test artifacts | |
| includePageSource | No | Include page source analysis | |
| includeVideo | No | Include video URL | |
| analyzeSimilarFailures | No | Find similar failures in the launch | |
| analyzeScreenshotsWithAI | No | Download and analyze screenshots with AI (Claude Vision) | |
| screenshotAnalysisType | No | Screenshot analysis type: basic (metadata + OCR) or detailed (includes Claude Vision) | detailed |
| format | No | Output format: detailed, summary, or jira (ready for Jira ticket creation) | detailed |
| compareWithLastPassed | No | Compare current failure with the last passed execution to identify what changed | |
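
To make the required/optional split concrete, the following argument sets reuse the example IDs from the table; the optional flag choices are arbitrary and only meant to show the available knobs:

    // Minimal call: only the two IDs are required.
    const minimalArgs = { testId: 5451420, testRunId: 120806 };

    // Fuller call: scope the project, compare with the last passed execution,
    // and request AI screenshot analysis with Jira-ready output.
    const fullerArgs = {
      testId: 5451420,
      testRunId: 120806,
      projectKey: "MCP",
      compareWithLastPassed: true,
      analyzeScreenshotsWithAI: true,
      screenshotAnalysisType: "basic",
      format: "jira"
    };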

Implementation Reference

  • MCP tool registration for 'analyze_test_failure', including the tool name, description, an inline Zod input schema, and a handler that calls reportingHandlers.analyzeTestFailureById(args). Note: a schema is also exported from ./types/api.js, but the schema used in this registration is defined inline. An illustrative client-side call is sketched after this list.
    server.tool(
      "analyze_test_failure",
      "๐Ÿ” Deep forensic analysis of a failed test including logs, screenshots, error classification, and similar failures",
      {
        testId: z.number().int().positive().describe("Test ID (e.g., 5451420)"),
        testRunId: z.number().int().positive().describe("Test Run ID / Launch ID (e.g., 120806)"),
        projectKey: z.string().min(1).optional().describe("Project key (e.g., 'MCP') - alternative to projectId"),
        projectId: z.number().int().positive().optional().describe("Project ID - alternative to projectKey"),
        includeScreenshots: z.boolean().default(true).describe("Include screenshot analysis"),
        includeLogs: z.boolean().default(true).describe("Include log analysis"),
        includeArtifacts: z.boolean().default(true).describe("Include all test artifacts"),
        includePageSource: z.boolean().default(true).describe("Include page source analysis"),
        includeVideo: z.boolean().default(false).describe("Include video URL"),
        analyzeSimilarFailures: z.boolean().default(true).describe("Find similar failures in the launch"),
        format: z.enum(['detailed', 'summary']).default('detailed').describe("Output format: detailed or summary")
      },
      async (args) => reportingHandlers.analyzeTestFailureById(args)
    );
  • Zod input schema definition AnalyzeTestFailureInputSchema, used to validate tool parameters such as testId, testRunId, and the analysis options (screenshots, logs, video, similar failures). Exported from ./types/api.js and imported by the server, though the registration above declares its schema inline. A validation sketch follows after this list.
    export const AnalyzeTestFailureInputSchema = z.object({
      testId: z.number().int().positive(),
      testRunId: z.number().int().positive(), // launchId
      projectKey: z.string().min(1).optional(),
      projectId: z.number().int().positive().optional(),
      includeScreenshots: z.boolean().default(true),
      includeLogs: z.boolean().default(true),
      includeArtifacts: z.boolean().default(true),
      includePageSource: z.boolean().default(true),
      includeVideo: z.boolean().default(false),
      analyzeSimilarFailures: z.boolean().default(true),
      format: z.enum(['detailed', 'summary']).default('detailed')
    });
  • Handler function for the tool: it delegates execution to the analyzeTestFailureById method of the ZebrunnerReportingToolHandlers instance.
      async (args) => reportingHandlers.analyzeTestFailureById(args)
  • Instantiation of ZebrunnerReportingToolHandlers with reportingClient, providing the analyzeTestFailureById method (its definition was not found in the codebase; a hypothetical outline follows after this list).
    const reportingHandlers = new ZebrunnerReportingToolHandlers(reportingClient);
  • ZebrunnerReportingClient instantiation, used by the reporting handlers for the API calls that fetch the test data, logs, and screenshots needed for failure analysis.
    const reportingClient = new ZebrunnerReportingClient(reportingConfig);
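
For context on the registration above, a client-side invocation of the tool could look roughly like this, assuming the standard MCP TypeScript SDK Client with a transport already connected; the argument values reuse the schema examples and nothing here is taken from this server's own code:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";

    // Sketch only: assumes `client` is an MCP client already connected to this server.
    async function runFailureAnalysis(client: Client) {
      const result = await client.callTool({
        name: "analyze_test_failure",
        arguments: { testId: 5451420, testRunId: 120806, projectKey: "MCP", format: "summary" }
      });
      console.log(result.content); // text content produced by analyzeTestFailureById
    }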
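
As a quick illustration of how AnalyzeTestFailureInputSchema behaves, a small sketch using Zod's safeParse; only the two IDs are required and omitted options fall back to the declared defaults (example values only):

    // Only testId and testRunId are required; booleans and format pick up the
    // defaults declared in the schema when omitted.
    const parsed = AnalyzeTestFailureInputSchema.safeParse({ testId: 5451420, testRunId: 120806 });
    if (parsed.success) {
      console.log(parsed.data.includeLogs, parsed.data.format); // true 'detailed'
    } else {
      console.error(parsed.error.issues);
    }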
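
Since analyzeTestFailureById itself was not found in the indexed code, the outline below is purely hypothetical: the structure, the field access, and the returned content shape are assumptions about how such an MCP handler is typically written, not the actual implementation.

    import { z } from "zod";

    // Hypothetical outline only (not the real method). Presumably the handler fetches
    // the test, its logs, screenshots, and artifacts through the reporting client,
    // classifies the error, optionally diffs against the last passed execution, and
    // returns MCP text content formatted per args.format.
    type AnalyzeTestFailureInput = z.infer<typeof AnalyzeTestFailureInputSchema>;

    async function analyzeTestFailureById(args: AnalyzeTestFailureInput) {
      const report = `Failure analysis for test ${args.testId} in launch ${args.testRunId}`;
      return { content: [{ type: "text" as const, text: report }] };
    }
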
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool's capabilities (logs, screenshots, error classification, similar failures, comparison) and auto-invocation from URLs, which adds useful behavioral context. However, it doesn't disclose operational traits like rate limits, authentication needs, or potential side effects (e.g., whether it triggers downloads or external AI calls). The description is informative but lacks full transparency on such behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by new features and a usage tip. Each sentence adds value (e.g., highlighting new comparison capability and auto-invocation). It's efficient with no wasted words, though the emojis and formatting slightly reduce professionalism without harming clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (14 parameters, nested objects, no output schema) and no annotations, the description is moderately complete. It covers the purpose, key features, and a usage hint, but lacks details on output format, error handling, or prerequisites. For a forensic analysis tool with many parameters, more context on expected results or limitations would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 14 parameters thoroughly. The description adds minimal parameter semantics beyond the schema: it implies analysis includes logs, screenshots, and comparisons, which aligns with parameters like 'includeLogs' and 'compareWithLastPassed', but doesn't provide additional syntax or format details. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Deep forensic analysis of a failed test, including logs, screenshots, error classification, and similar failures.' It specifies the verb ('analyze') and resource ('failed test'), plus the scope of analysis. It distinguishes itself from siblings like 'analyze_screenshot' or 'detailed_analyze_launch_failures' by focusing on a single test's forensic details and comparison features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use it: analyzing failed tests with forensic details and comparisons. It explicitly mentions auto-invocation from Zebrunner URLs, which is helpful. However, it doesn't specify when NOT to use it or name alternatives among siblings (e.g., 'detailed_analyze_launch_failures' for broader analysis), so it's not fully explicit about alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
