The Zebrunner MCP Server integrates with Zebrunner Test Case Management to enable QA teams, developers, and managers to comprehensively manage, analyze, and improve test cases through natural language commands via AI assistants.
Core Capabilities:
- Test Case Management: Retrieve detailed test case information by key, title, or advanced filters (automation state, priority, dates), with support for batch operations and comprehensive pagination
- Test Suite Organization: Navigate hierarchical test suite structures, get tree views with configurable depth, and identify root suites and subsuites
- Quality Validation & Improvement: Validate test cases against 100+ checkpoints using a 3-tier intelligent rules system, receive AI-powered improvement suggestions, and apply automated fixes
- Test Coverage & Code Generation: Analyze test case coverage against actual implementations, generate draft test code for multiple frameworks (Java/Carina, JavaScript/Jest, Python/Pytest), and perform enhanced rules-based coverage analysis
- Duplicate Analysis: Identify similar test cases using both step-based similarity and advanced LLM-powered semantic analysis with detailed similarity matrices
- Launch & Execution Management: Access comprehensive launch details, summaries, and test execution results with filtering by milestone, build number, or launch name
- Reporting & Analytics: Platform-specific test results by time period, top bug analysis with issue links, project milestone tracking, and completion status
- Configuration & Customization: Customizable rules system via Markdown files, multiple output formats (JSON, markdown, string, DTO), and clickable links to the Zebrunner web UI
The server supports framework detection, intelligent validation, semantic analysis with LLM-powered clustering, and provides comprehensive test run management capabilities.
- Supports Android and iOS platform test execution and results analysis through Zebrunner's test management system
- Generates JavaScript/Jest test code from Zebrunner test cases, with intelligent framework detection and coverage analysis
- Generates rich formatted reports and documentation from Zebrunner test data in Markdown format
- Creates visual diagrams showing test suite hierarchies and test case relationships from Zebrunner data
- Generates Python/pytest test code from Zebrunner test cases and test execution data, with automated framework detection
- Generates Selenium WebDriver test automation code from Zebrunner test cases with coverage analysis
Zebrunner MCP Server
A Model Context Protocol (MCP) server that integrates with Zebrunner Test Case Management to help QA teams manage test cases, test suites, and test execution data through AI assistants like Claude.
Need help with installation? Check out our Step-by-Step Install Guide for detailed setup instructions.
Installing via npm? See our MCP NPM Installation Guide for Claude Desktop, Cursor, IntelliJ IDEA, and ChatGPT Desktop configuration.
What is this tool?
This tool allows you to:
Retrieve test cases and test suites from Zebrunner
Analyze test coverage and generate test code
Get test execution results and launch details
Validate test case quality with automated checks using intelligent rules
Generate reports and insights from your test data
Improve test cases with AI-powered suggestions and automated fixes
All through natural language commands in AI assistants!
Intelligent Rules System
What Makes This Tool Special
Our MCP server includes a sophisticated 3-tier rules system that transforms how you work with test cases:
Test Case Review Rules (test_case_review_rules.md)
Purpose: Core quality standards and writing guidelines
What it does: Defines fundamental principles for writing high-quality test cases
Key areas: Independence, single responsibility, comprehensive preconditions, complete step coverage
Used by: the `validate_test_case` and `improve_test_case` tools
Test Case Analysis Checkpoints (test_case_analysis_checkpoints.md)
Purpose: Detailed validation checklist with 100+ checkpoints
What it does: Provides granular validation criteria for thorough test case analysis
Key areas: Structure validation, automation readiness, platform considerations, quality assurance
Used by: `validate_test_case` for comprehensive scoring and issue detection
MCP Zebrunner Rules (mcp-zebrunner-rules.md)
Purpose: Technical configuration for test generation and coverage analysis
What it does: Defines framework detection patterns, code templates, and coverage thresholds
Key areas: Framework detection, test generation templates, coverage thresholds, quality standards
Used by: the `generate_draft_test_by_key` and `get_enhanced_test_coverage_with_rules` tools
How the Rules Work Together

The three files act as layers: the review rules set the quality bar, the analysis checkpoints turn that bar into measurable validation criteria, and the technical rules apply both standards to code generation and coverage analysis.
Why This Matters
Consistency: All team members follow the same quality standards
Automation: Reduce manual review time with automated validation
Learning: New team members learn best practices through AI feedback
Customization: Adapt rules to your project's specific needs
Continuous Improvement: AI suggests improvements based on proven patterns
Customizing Rules for Your Project
You can customize any of the three rules files:
Example customizations:
Mobile projects: Add mobile-specific validation rules
API projects: Focus on API testing patterns and data validation
Different frameworks: Customize code generation templates
Company standards: Align with your organization's testing guidelines
Prerequisites
What you need to know
Basic command line usage (opening terminal, running commands)
Your Zebrunner credentials (login and API token)
Basic understanding of test management (test cases, test suites)
Software requirements
Node.js 18 or newer - Download here
npm (comes with Node.js)
Access to a Zebrunner instance with API credentials
How to check if you have Node.js
Open your terminal/command prompt and run `node --version` and `npm --version`.
If you see version numbers (v18 or newer for Node.js), you're ready to go!
Quick Start Guide
Want more detailed instructions? Check out our More Detailed Step-by-step Install Guide with troubleshooting tips and platform-specific instructions.
Step 1: Get the code
Choose one of these methods:
Option A: Clone from repository (recommended)
Option B: Download and extract
Download the project files and extract them to a folder.
Step 2: Install dependencies

Run `npm install` in the project folder.
Step 3: Configure your Zebrunner connection
Create a .env file in the project folder with your Zebrunner details:
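A minimal `.env` might look like the following sketch. All values are placeholders for your own instance details; `ENABLE_RULES_ENGINE` is referenced later in Troubleshooting, while `DEBUG` is an assumed optional flag:

```shell
# All values are placeholders -- replace with your own instance details.
# DEBUG is an assumed optional flag; ENABLE_RULES_ENGINE is used by the rules engine.
cat > .env <<'EOF'
ZEBRUNNER_URL=https://yourcompany.zebrunner.com
ZEBRUNNER_LOGIN=your.email@company.com
ZEBRUNNER_TOKEN=your_api_token_here
ENABLE_RULES_ENGINE=true
DEBUG=false
EOF
```

Keep this file out of version control, since it contains your API token.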
How to get your Zebrunner API token:
Log into your Zebrunner instance
Go to your profile settings
Find the "API Access" section
Generate a new API token
Copy the token to your `.env` file
Step 4: Build the project

Run `npm run build`.
Step 5: Test your connection

Run `npm run test:health`. If the output says "Health check completed", you're ready to go!
Updating to New Version
Check current version
Update steps
Important Notes:
- Your `.env` file must still be configured correctly for the health check to work
- Restart Claude Desktop/Code after updating to reload the MCP server
- Check release notes for any breaking changes before updating
If the health check fails, verify your `.env` configuration and Zebrunner credentials.
Usage Methods
Method 1: Use with Claude Desktop/Code (Recommended)
Add this configuration to your Claude Desktop or Claude Code settings. Important: You must use the full absolute path to your project folder.
Example paths:
- Windows: `C:\\Users\\YourName\\Projects\\mcp-zebrunner\\dist\\server.js`
- macOS/Linux: `/Users/YourName/Projects/mcp-zebrunner/dist/server.js`
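A typical Claude Desktop entry might look like the sketch below. The server name `zebrunner`, the paths, and all credential values are placeholders; the snippet writes the JSON to a local file purely for illustration (on macOS the real file lives under `~/Library/Application Support/Claude/claude_desktop_config.json`):

```shell
# Illustrative only: server name, paths, and credentials are placeholders.
cat > claude_desktop_config.json <<'EOF'
{
  "mcpServers": {
    "zebrunner": {
      "command": "node",
      "args": ["/Users/YourName/Projects/mcp-zebrunner/dist/server.js"],
      "env": {
        "ZEBRUNNER_URL": "https://yourcompany.zebrunner.com",
        "ZEBRUNNER_LOGIN": "your.email@company.com",
        "ZEBRUNNER_TOKEN": "your_api_token_here"
      }
    }
  }
}
EOF
```

Merge the `zebrunner` entry into your existing `mcpServers` object rather than overwriting the whole file.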
Alternative: Command Line Integration (Claude Code)
You can also add the server from the command line, for example with `claude mcp add zebrunner node /full/absolute/path/to/mcp-zebrunner/dist/server.js`:
Important: Replace /full/absolute/path/to/mcp-zebrunner/ with the actual full path to your project folder.
Method 2: Run as standalone server
Development mode (with auto-reload)
Production mode
Method 3: Smart URL-Based Analysis
NEW in v5.4.1+: Claude can automatically detect Zebrunner URLs and analyze them with optimal settings!
Just paste a Zebrunner URL in your conversation, and Claude will automatically:
Parse the URL to extract project, launch, and test IDs
Call the appropriate analysis tool
Use recommended settings (videos, screenshots, AI analysis enabled)
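The URL parsing step above can be sketched in shell. The URL shape used here is an assumption based on typical Zebrunner web UI paths, not a documented contract:

```shell
# Illustrative sketch of extracting IDs from a Zebrunner test URL.
# The URL structure below is an assumption, not confirmed by this README.
url="https://yourcompany.zebrunner.com/projects/MYAPP/test-runs/118/tests/12345"
project=$(echo "$url" | sed -E 's#.*/projects/([^/]+)/.*#\1#')   # -> projectKey
launch=$(echo "$url"  | sed -E 's#.*/test-runs/([0-9]+).*#\1#')  # -> testRunId
test_id=$(echo "$url" | sed -E 's#.*/tests/([0-9]+).*#\1#')      # -> testId
echo "project=$project launch=$launch test=$test_id"
```

In practice Claude performs this parsing for you; the sketch only shows which URL segments map to which tool parameters.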
Supported URL Patterns
1. Test Analysis URLs
What happens:
- Claude automatically calls `analyze_test_failure`
- Extracts: `projectKey`, `testRunId` (launch ID), `testId`
- Enables: `includeVideo: true`, `analyzeScreenshotsWithAI: true`, and all diagnostics
Example:
2. Launch Analysis URLs
What happens:
- Claude automatically calls `detailed_analyze_launch_failures`
- Extracts: `projectKey`, `testRunId` (launch ID)
- Enables: `includeScreenshotAnalysis: true` and comprehensive analysis
Example:
Advanced Usage
Override Default Settings
Claude understands natural language overrides:
Multiple URLs
Analyze multiple tests/launches in one request:
Cross-Workspace Support
Warning: URLs from different workspaces will show a warning, but analysis will still be attempted.
URL Pattern Reference

| Component | Example | Extracted As | Used In Tool |
| --- | --- | --- | --- |
| Workspace | | validation only | N/A |
| Project Key | | `projectKey` | All tools |
| Launch ID | | `testRunId` | All tools |
| Test ID | | `testId` | `analyze_test_failure` only |
Why Use URL-Based Analysis?
- Faster: No need to manually specify IDs
- Convenient: Copy-paste URLs directly from the Zebrunner UI
- Optimized: Automatic use of recommended settings
- Smart: Claude detects intent and adjusts parameters
- Flexible: Natural language overrides work seamlessly
Pro Tips
Direct from Zebrunner: Copy URL directly from your browser while viewing a test/launch
Batch Analysis: Paste multiple URLs separated by newlines
Custom Settings: Add natural language instructions to override defaults
Quick Checks: URLs work great for quick "what happened here?" questions
Reports: Combine with format requests: "Generate JIRA ticket for https://...url..."
Available Tools
Once connected, you can use these tools through natural language in your AI assistant. Here's a comprehensive reference of all 33+ available tools organized by category:
Test Case Management

Core Test Case Tools

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Get detailed test case information | | All roles |
| | Advanced filtering with automation states, dates | | QA, SDETs |
| | Filter by specific automation states | | SDETs, Managers |
| | Search test cases by title (partial match) | | All roles |
| | Advanced filtering by suite, dates, priority, automation state | | QA, Managers |
| | List available automation states | | All roles |
| | List available priorities with IDs | | All roles |
Batch Test Case Operations
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Get ALL test cases (handles pagination) | | Managers, Leads |
| | All test cases with hierarchy info | | Analysts |
Test Suite Hierarchy & Organization

Suite Management

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | List suites with pagination | | All roles |
| | Hierarchical tree view | | Managers, QA |
| | Get top-level suites | | Managers |
| | Get all child suites | | QA, Analysts |
Suite Analysis Tools
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Find specific suite by ID | | All roles |
| | Comprehensive suite listing | | Managers |
| | Find root suite for any suite | | Analysts |
Test Coverage & Analysis

Coverage Analysis

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Analyze implementation coverage | | Developers, SDETs |
| | Rules-based coverage analysis | | SDETs, Leads |
Duplicate Analysis
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Find and group similar test cases by step similarity | | QA Managers, SDETs |
| | Advanced semantic analysis with LLM-powered step clustering | | Senior QA, Test Architects |
Clickable Links Feature: Both duplicate analysis tools support clickable links to the Zebrunner web UI:
- Add `include_clickable_links: true` to make test case keys clickable in markdown output
- JSON/DTO formats automatically include `webUrl` fields when enabled
- Links are generated from your `ZEBRUNNER_URL` environment variable
- Example: "Analyze suite 17585 for duplicates with clickable links enabled"
Test Code Generation & Validation

AI-Powered Tools

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| `generate_draft_test_by_key` | Generate test code with framework detection | | SDETs, Developers |
| `validate_test_case` | Quality validation with improvement suggestions | | QA, Managers |
| `improve_test_case` | Dedicated improvement tool | | QA, SDETs |
Launch & Execution Management

Launch Operations (Essential for Managers)

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Comprehensive launch information | | Managers, Leads |
| | Quick launch overview | | Managers |
| | All launches with pagination | | Managers, Leads |
| | Filter by milestone/build | | Managers, Leads |
Reporting & Analytics

Test Failure Analysis (Game Changer)

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| `analyze_test_failure` | Deep forensic analysis of failed tests with logs, screenshots, error classification, and recommendations. NEW in v5.11.0: compares with the last passed execution and shows what changed (logs, duration, environment). Also generates ready-to-paste Jira tickets with auto-priority, labels, and clickable video links | | QA Engineers, SDETs, Managers |
| `get_test_execution_history` | NEW in v5.11.0: tracks test execution trends across launches. View pass/fail history, find the last passed execution, calculate pass rate. Critical detection: highlights when a test failed in all recent runs | | QA Engineers, SDETs, Managers |
| `detailed_analyze_launch_failures` | Enhanced in v4.12.1: analyzes failures WITHOUT linked issues, with deep auto-analysis and Jira format support. Auto-deep-dive with executive summary, timeline, patterns, and priorities. NEW: generate Jira-ready tickets for entire launches | | QA Managers, SDETs, Team Leads |
FIXED in v5.2.4: Improved Reliability & Video Links
- Video URLs fixed: now uses the test-sessions API (`/api/reporting/v1/launches/{id}/test-sessions`) for reliable video artifact extraction
- Comprehensive error handling: gracefully handles missing screenshots/logs (returns empty arrays instead of throwing)
- No more "no result received" errors: all API calls have proper try-catch blocks with fallbacks
- Better debugging: enhanced logging when `debug: true` is enabled in config
- Schema updates: supports both old and new API structures for backward compatibility
NEW in v4.12.1: Jira-Ready Ticket Format
- Use `format: 'jira'` to generate ready-to-paste Jira tickets
- Auto-calculated priority based on stability and impact
- Smart labels: `test-automation`, `locator-issue`, `flaky-test`, etc.
- Complete Jira markup: tables, panels, code blocks, clickable links
- Prominent video links: panels plus a dedicated links section
- Copy-paste ready: no manual formatting needed
- Saves 5-10 minutes per ticket with consistent quality
Enhanced in v4.11.1: `detailed_analyze_launch_failures` provides the automatic deep synthesis Claude would otherwise do manually:
- Executive Summary: key findings, patterns, and stability indicators
- Timeline Analysis: when failures first appeared, progression tracking
- Pattern Analysis: groups by root cause with affected tests and stability %
- Priority-Based Recommendations: HIGH / MEDIUM / LOW with impact analysis
- Enhanced Test Details: full error messages, stack traces, timestamps
- Smart Follow-up Questions: guides next investigation steps
- Smart filtering: analyzes only tests WITHOUT linked issues by default
- Optional AI screenshot analysis for all tests
- No manual follow-up needed: get the complete picture in one call
NEW in v5.11.0: Test Execution History & Comparison
- Track execution trends: view pass/fail history across launches with `get_test_execution_history`
- Compare with last passed: new `compareWithLastPassed` parameter in `analyze_test_failure`:
  - Compare logs (new-error detection)
  - Compare duration (performance regression)
  - Compare environment (device/platform changes)
  - Compare screenshots (visual differences)
- Critical detection: automatically highlights when a test failed in all recent executions
- Regression analysis: see exactly what changed between passed and failed runs
- Pass rate metrics: calculate test stability over time
See TOOLS_CATALOG.md for example prompts!
Screenshot Analysis & Visual Forensics (Enhanced in v4.11.0)

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Download protected screenshots from Zebrunner with authentication | | QA Engineers, Automation Engineers |
| | Visual analysis with OCR, UI detection, and Claude Vision | | QA Engineers, SDETs, Developers |

Enhanced: screenshot analysis is now integrated directly into `analyze_test_failure` and `analyze_launch_failures`, so there is no need to call it separately. See the Screenshot Analysis Guide for details.
Platform & Results Analysis (Critical for Management)

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Test results by platform/period | | Managers, Leads |
| | Most frequent defects | | Managers, Developers |
| | Detailed bug review with failure analysis, priority breakdown, and automatic detail fetching | | Managers, QA, Developers |
| | Comprehensive failure info by hashcode (alternative to auto-fetch) | | Developers, SDETs |
| | Available milestones | | Managers, PMs |
Project Discovery
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Discover all accessible projects | | All roles |
| | Test API connectivity | | All roles |
Test Run Management

Public API Test Runs (Powerful for Analysis)

| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Advanced test run filtering | | Managers, SDETs |
| | Detailed test run information | | Managers, QA |
| | Test cases in a specific run | | QA, Analysts |
Configuration Management
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Available result statuses | | QA, SDETs |
| | Configuration options | | SDETs, Leads |
Management-Focused Quick Commands

Daily Standup Reports
Test Suite Optimization
Weekly Management Reports
Milestone & Release Planning
Issue Analysis & Troubleshooting
Role-Specific Prompts & Workflows

Manual QA Engineers
Daily Test Case Review
Test Case Creation & Improvement
Test Suite Organization
Coverage Analysis
Test Automation Engineers & SDETs
Automation Readiness Assessment
Test Code Generation
Coverage Analysis & Validation
Framework Integration
Batch Automation Analysis
Developers
Test Case Understanding
Implementation Validation
Code Generation for Testing
Bug Analysis
Test Managers & Team Leads
Team Quality Metrics
Test Suite Analysis
Team Performance & Planning
Process Improvement
Reporting & Stakeholder Communication
Daily Management Tasks
Project Owners & Product Managers
Project Health Overview
Feature Testing Status
Quality Assurance Metrics
Risk Assessment
Planning & Resource Allocation
Executive Reporting
Output Formats

All tools support multiple output formats:
- `json`: structured data (default)
- `markdown`: rich formatted output with sections and tables
- `string`: human-readable text summaries
- `dto`: raw data objects

Example: "Get test case MYAPP-123 in markdown format"
Configuration Options
Environment Variables
Rules System Configuration
The rules system automatically detects and uses rules files in your project root:
Automatic Detection
If you have a mcp-zebrunner-rules.md file in your project root, the rules engine will automatically enable itself.
Custom Rules Files
You can customize the three types of rules:
- Test Case Review Rules (`test_case_review_rules.md`)
- Analysis Checkpoints (`test_case_analysis_checkpoints.md`)
- Technical Rules (`mcp-zebrunner-rules.md`)
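A deliberately minimal technical rules file might look like the sketch below. The section names and thresholds shown are illustrative assumptions; your project's actual rules file will define its own structure:

```shell
# Illustrative only: section names and values are assumptions for your own project.
cat > mcp-zebrunner-rules.md <<'EOF'
# MCP Zebrunner Rules
## Framework Detection
- Java/Carina: pom.xml contains carina-core
- Python/Pytest: pytest.ini or conftest.py present
## Coverage Thresholds
- Minimum step coverage: 80%
EOF
```

Because the file sits in the project root, the rules engine will pick it up automatically on the next server start.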
Testing Your Setup

Run health checks: `npm run test:health`
Test API connection
Run full test suite: `npm test`
Run specific test types
Troubleshooting

Common Issues

"Authentication failed" or 401 errors
- Check your `ZEBRUNNER_LOGIN` and `ZEBRUNNER_TOKEN`
- Verify your API token is still valid
- Ensure your user has proper permissions in Zebrunner

"Project not found" or 404 errors
- Check the project key spelling (e.g., "MYAPP", not "myapp")
- Verify you have access to the project in Zebrunner
- Some endpoints may not be available on all Zebrunner instances

"Connection timeout" errors
- Check that your `ZEBRUNNER_URL` is correct
- Ensure your network can reach the Zebrunner instance
- Try increasing the timeout in configuration

MCP integration not working
- Verify the path to `dist/server.js` is correct
- Check that the project built successfully (`npm run build`)
- Ensure environment variables are set in the MCP configuration
- Look at Claude Desktop/Code logs for error messages

Rules engine not working
- Check that `ENABLE_RULES_ENGINE=true` is set in your `.env` file
- Verify rules files exist and have meaningful content
- Restart the MCP server after changing rules files
- Check debug logs for rules parsing errors
Debug Mode
Enable detailed logging to troubleshoot issues:
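One way to turn debug logging on is via an environment variable before starting the server. Note that the `DEBUG` variable name is an assumption here; check your `.env` template or the install guide for the exact key your version uses:

```shell
# Assumed toggle: the exact variable name may differ in your .env template.
export DEBUG=true
echo "Debug logging: $DEBUG"
```

Restart the MCP server (or Claude Desktop/Code) after changing this setting so it takes effect.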
This will show:
API requests and responses
Error details and stack traces
Performance metrics
Feature availability
Rules parsing and validation details
Getting Help
1. Check the logs: enable debug mode and look for error messages
2. Test your connection: run `npm run test:health`
3. Verify your configuration: double-check your `.env` file
4. Check Zebrunner permissions: ensure your user has proper access
5. Validate rules files: ensure rules files have meaningful content
Example Workflows
Workflow 1: Test Case Review (Manual QA)
Workflow 2: Test Automation (SDET)
Workflow 3: Implementation Validation (Developer)
Workflow 4: Quality Management (Team Lead)
Workflow 5: Project Health (Product Manager)
Advanced Features
Batch Operations
Process multiple test cases at once:
Custom Output Formats
Get data in the format you need:
Filtering and Search
Find exactly what you need:
Rules-Based Analysis
Leverage intelligent validation:
Additional Documentation

Tool References
- TOOLS_CATALOG.md: complete catalog of all 40+ tools with natural language examples
- INSTALL-GUIDE.md: step-by-step installation and setup guide

Intelligent Rules System
- docs/INTELLIGENT_RULES_SYSTEM.md: complete guide to the 3-tier intelligent rules system
- docs/RULES_QUICK_REFERENCE.md: quick reference for rules system commands and configuration

Rules Files (Customizable)
- test_case_review_rules.md: core quality standards and writing guidelines
- test_case_analysis_checkpoints.md: 100+ detailed validation checkpoints
- mcp-zebrunner-rules.md: technical configuration for test generation and coverage analysis

Specialized Guides
- docs/SCREENSHOT_ANALYSIS.md: screenshot download and visual analysis guide
- change-logs.md: version history and feature updates

Feature Documentation
- docs/NEW_LAUNCHER_TOOL.md: detailed information about launch and reporting tools
- docs/SUITE_HIERARCHY.md: complete guide to suite hierarchy features
- docs/TEST_CASE_VALIDATION_IMPLEMENTATION.md: test case validation system details
- docs/ENHANCED_VALIDATION_FEATURES.md: advanced validation and improvement features
Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes with appropriate tests
4. Ensure all tests pass: `npm test`
5. Submit a pull request
License
MIT License - see LICENSE file for details.
You're Ready!

Once you've completed the setup:
1. Test your connection with `npm run test:health`
2. Configure your AI assistant with the MCP server
3. Start asking questions about your test cases!
Example first commands to try:
"List test suites for project [YOUR_PROJECT_KEY]"
"Get test case [YOUR_TEST_CASE_KEY] details"
"Validate test case [YOUR_TEST_CASE_KEY]"
"Show me the test suite hierarchy"
The intelligent rules system will help ensure your test cases meet quality standards and are ready for both manual execution and automation. Happy testing!