The Zebrunner MCP Server integrates with Zebrunner Test Case Management to enable QA teams, developers, and managers to comprehensively manage, analyze, and improve test cases through natural language commands via AI assistants.
Core Capabilities:
Test Case Management: Retrieve detailed test case information by key, title, or advanced filters (automation state, priority, dates), with support for batch operations and comprehensive pagination
Test Suite Organization: Navigate hierarchical test suite structures, get tree views with configurable depth, and identify root suites and subsuites
Quality Validation & Improvement: Validate test cases against 100+ checkpoints using a 3-tier intelligent rules system, receive AI-powered improvement suggestions, and apply automated fixes
Test Coverage & Code Generation: Analyze test case coverage against actual implementations, generate draft test code for multiple frameworks (Java/Carina, JavaScript/Jest, Python/Pytest), and perform enhanced rules-based coverage analysis
Duplicate Analysis: Identify similar test cases using both step-based similarity and advanced LLM-powered semantic analysis with detailed similarity matrices
Launch & Execution Management: Access comprehensive launch details, summaries, and test execution results with filtering by milestone, build number, or launch name
Reporting & Analytics: Platform-specific test results by time period, top bug analysis with issue links, project milestone tracking, and completion status
Configuration & Customization: Customizable rules system via Markdown files, multiple output formats (JSON, markdown, string, DTO), and clickable links to Zebrunner web UI
The server supports framework detection, intelligent validation, semantic analysis with LLM-powered clustering, and provides comprehensive test run management capabilities.
Supports Android platform test execution and results analysis through Zebrunner's test management system
Supports iOS platform test execution and results analysis through Zebrunner's test management system
Generates automated test code in JavaScript with frameworks like Jest based on test cases stored in Zebrunner
Generates Jest test code from Zebrunner test cases with intelligent framework detection and coverage analysis
Generates rich formatted reports and documentation from Zebrunner test data in Markdown format
Creates visual diagrams showing test suite hierarchies and test case relationships from Zebrunner data
Generates Python pytest code from Zebrunner test cases with automated framework detection
Generates automated test code in Python based on test cases and test execution data from Zebrunner
Generates Selenium WebDriver test automation code from Zebrunner test cases with coverage analysis
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Zebrunner MCP Server show me test cases for the login feature that need automation"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Zebrunner MCP Server
A Model Context Protocol (MCP) server that integrates with Zebrunner Test Case Management to help QA teams manage test cases, test suites, and test execution data through AI assistants like Claude.
Need help with installation? Check out our Step-by-Step Install Guide for detailed setup instructions.
Installing via npm? See our MCP NPM Installation Guide for Claude Desktop, Cursor, IntelliJ IDEA, and ChatGPT Desktop configuration.
Table of Contents
What is this tool?
This tool allows you to:
Retrieve test cases and test suites from Zebrunner
Analyze test coverage and generate test code
Get test execution results and launch details
Validate test case quality with automated checks using intelligent rules
Generate reports and insights from your test data
Improve test cases with AI-powered suggestions and automated fixes
All through natural language commands in AI assistants!
Intelligent Rules System
What Makes This Tool Special
Our MCP server includes a sophisticated 3-tier rules system that transforms how you work with test cases:
Test Case Review Rules (test_case_review_rules.md)
Purpose: Core quality standards and writing guidelines
What it does: Defines fundamental principles for writing high-quality test cases
Key areas: Independence, single responsibility, comprehensive preconditions, complete step coverage
Used by: `validate_test_case` and `improve_test_case` tools
Test Case Analysis Checkpoints (test_case_analysis_checkpoints.md)
Purpose: Detailed validation checklist with 100+ checkpoints
What it does: Provides granular validation criteria for thorough test case analysis
Key areas: Structure validation, automation readiness, platform considerations, quality assurance
Used by: `validate_test_case` for comprehensive scoring and issue detection
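The scoring idea can be illustrated with a minimal sketch (the interfaces, checkpoint names, and function below are hypothetical illustrations, not the server's actual types): each checkpoint either passes or records an issue, and the score is the fraction of checkpoints that pass.

```typescript
// Hypothetical shapes; the real checkpoint catalog lives in
// test_case_analysis_checkpoints.md and is far richer than this sketch.
interface TestCase {
  title: string;
  preconditions?: string;
  steps: string[];
}

interface Checkpoint {
  id: string;
  message: string;
  check: (tc: TestCase) => boolean;
}

const checkpoints: Checkpoint[] = [
  {
    id: "has-preconditions",
    message: "Test case has no preconditions",
    check: (tc) => Boolean(tc.preconditions && tc.preconditions.trim()),
  },
  {
    id: "has-steps",
    message: "Test case has no steps",
    check: (tc) => tc.steps.length > 0,
  },
];

// Score = fraction of passed checkpoints; failed ones become issues.
function scoreTestCase(tc: TestCase, cps: Checkpoint[]) {
  const issues = cps.filter((cp) => !cp.check(tc)).map((cp) => cp.message);
  const score =
    cps.length === 0 ? 1 : (cps.length - issues.length) / cps.length;
  return { score, issues };
}
```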
MCP Zebrunner Rules (mcp-zebrunner-rules.md)
Purpose: Technical configuration for test generation and coverage analysis
What it does: Defines framework detection patterns, code templates, and coverage thresholds
Key areas: Framework detection, test generation templates, coverage thresholds, quality standards
Used by: `generate_draft_test_by_key` and `get_enhanced_test_coverage_with_rules` tools
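As a rough illustration of the framework-detection idea, pattern matching over source text is one plausible approach (the actual detection patterns are defined in mcp-zebrunner-rules.md; the regexes below are assumptions made for this sketch):

```typescript
// Illustrative only: real detection patterns come from
// mcp-zebrunner-rules.md, not hard-coded regexes like these.
type Framework = "jest" | "pytest" | "carina" | "unknown";

const frameworkPatterns: Array<[Framework, RegExp]> = [
  // Jest: describe/it/test blocks or @jest imports
  ["jest", /\b(describe|it|test)\s*\(|from ['"]@jest/],
  // Pytest: pytest import or test_ function definitions
  ["pytest", /\bimport pytest\b|\bdef test_/],
  // Carina: characteristic Java package names
  ["carina", /com\.qaprosoft\.carina|com\.zebrunner\.carina/],
];

function detectFramework(source: string): Framework {
  for (const [framework, pattern] of frameworkPatterns) {
    if (pattern.test(source)) return framework;
  }
  return "unknown";
}
```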
How the Rules Work Together
```mermaid
graph TD
    A[Test Case] --> B[validate_test_case]
    B --> C[test_case_review_rules.md]
    B --> D[test_case_analysis_checkpoints.md]
    B --> E[Validation Result + Issues]
    E --> F[improve_test_case]
    F --> G[AI-Powered Improvements]
    A --> H[generate_draft_test_by_key]
    H --> I[mcp-zebrunner-rules.md]
    H --> J[Generated Test Code]
    A --> K[get_enhanced_test_coverage_with_rules]
    K --> I
    K --> L[Coverage Analysis + Rules Validation]
```
Why This Matters
Consistency: All team members follow the same quality standards
Automation: Reduce manual review time with automated validation
Learning: New team members learn best practices through AI feedback
Customization: Adapt rules to your project's specific needs
Continuous Improvement: AI suggests improvements based on proven patterns
Customizing Rules for Your Project
You can customize any of the three rules files:
```shell
# Copy default rules to customize
cp test_case_review_rules.md my-project-review-rules.md
cp test_case_analysis_checkpoints.md my-project-checkpoints.md
cp mcp-zebrunner-rules.md my-project-technical-rules.md

# Use custom rules in validation
"Validate test case PROJ-123 using custom rules from my-project-review-rules.md"
```
Example customizations:
Mobile projects: Add mobile-specific validation rules
API projects: Focus on API testing patterns and data validation
Different frameworks: Customize code generation templates
Company standards: Align with your organization's testing guidelines
Prerequisites
What you need to know
Basic command line usage (opening terminal, running commands)
Your Zebrunner credentials (login and API token)
Basic understanding of test management (test cases, test suites)
Software requirements
Node.js 18 or newer - Download here
npm (comes with Node.js)
Access to a Zebrunner instance with API credentials
How to check if you have Node.js
Open your terminal/command prompt and run:
```shell
node --version
npm --version
```
If you see version numbers, you're ready to go!
Quick Start Guide
Want more detailed instructions? Check out our More Detailed Step-by-step Install Guide with troubleshooting tips and platform-specific instructions.
Step 1: Get the code
Choose one of these methods:
Option A: Clone from repository (recommended)
```shell
git clone https://github.com/maksimsarychau/mcp-zebrunner.git
cd mcp-zebrunner
```
Option B: Download and extract
Download the project files and extract them to a folder.
Step 2: Install dependencies
```shell
npm install
```
Step 3: Configure your Zebrunner connection
Create a .env file in the project folder with your Zebrunner details:
```
# Your Zebrunner instance URL (without trailing slash)
ZEBRUNNER_URL=https://your-company.zebrunner.com/api/public/v1

# Your Zebrunner login (usually your email)
ZEBRUNNER_LOGIN=your.email@company.com

# Your Zebrunner API token (get this from your Zebrunner profile)
ZEBRUNNER_TOKEN=your_api_token_here

# Optional: Enable debug logging (default: false)
DEBUG=false

# Optional: Enable intelligent rules system (auto-detected if rules file exists)
ENABLE_RULES_ENGINE=true
```
How to get your Zebrunner API token:
Log into your Zebrunner instance
Go to your profile settings
Find the "API Access" section
Generate a new API token
Copy the token to your `.env` file
Step 4: Build the project
```shell
npm run build
```
Step 5: Test your connection
```shell
npm run test:health
```
If you see "✅ Health check completed", you're ready to go!
Updating to New Version
Check current version
```shell
# Check your current version
npm run version

# or manually check package.json
cat package.json | grep '"version"'
```
Update steps
```shell
# 1. Pull latest changes from master branch
git pull origin master

# 2. Install any new dependencies
npm install

# 3. Rebuild the project
npm run build

# 4. Test your connection (requires valid .env file)
npm run test:health
```
Important Notes:
✅ Your `.env` file must be valid for the health check to work
✅ Restart Claude Desktop/Code after updating to reload the MCP server
✅ Check release notes for any breaking changes before updating
If the health check fails, verify your .env configuration and Zebrunner credentials.
Usage Methods
Method 1: Use with Claude Desktop/Code (Recommended)
Add this configuration to your Claude Desktop or Claude Code settings. Important: You must use the full absolute path to your project folder.
```json
{
  "mcpServers": {
    "mcp-zebrunner": {
      "command": "node",
      "args": ["/full/absolute/path/to/mcp-zebrunner/dist/server.js"],
      "env": {
        "ZEBRUNNER_URL": "https://your-company.zebrunner.com/api/public/v1",
        "ZEBRUNNER_LOGIN": "your.email@company.com",
        "ZEBRUNNER_TOKEN": "your_api_token_here",
        "DEBUG": "false",
        "ENABLE_RULES_ENGINE": "true",
        "DEFAULT_PAGE_SIZE": "100",
        "MAX_PAGE_SIZE": "100"
      }
    }
  }
}
```
Example paths:
Windows: `C:\\Users\\YourName\\Projects\\mcp-zebrunner\\dist\\server.js`
macOS/Linux: `/Users/YourName/Projects/mcp-zebrunner/dist/server.js`
Alternative: Command Line Integration (Claude Code)
You can also add the server using the command line:
```shell
claude mcp add mcp-zebrunner \
  --env ZEBRUNNER_URL="https://your-company.zebrunner.com/api/public/v1" \
  --env ZEBRUNNER_LOGIN="your.email@company.com" \
  --env ZEBRUNNER_TOKEN="your_api_token_here" \
  --env DEBUG="false" \
  --env ENABLE_RULES_ENGINE="true" \
  -- node /full/absolute/path/to/mcp-zebrunner/dist/server.js
```
Important: Replace /full/absolute/path/to/mcp-zebrunner/ with the actual full path to your project folder.
Method 2: Run as standalone server
Development mode (with auto-reload)
```shell
npm run dev
```
Production mode
```shell
npm start
```
Method 3: Smart URL-Based Analysis
NEW in v5.4.1+: Claude can automatically detect Zebrunner URLs and analyze them with optimal settings!
Just paste a Zebrunner URL in your conversation, and Claude will automatically:
Parse the URL to extract project, launch, and test IDs
Call the appropriate analysis tool
Use recommended settings (videos, screenshots, AI analysis enabled)
Supported URL Patterns
1. Test Analysis URLs
```
https://your-workspace.zebrunner.com/projects/PROJECT/automation-launches/LAUNCH_ID/tests/TEST_ID
```
What happens:
Claude automatically calls `analyze_test_failure`
Extracts: `projectKey`, `testRunId` (launch ID), `testId`
Enables: `includeVideo: true`, `analyzeScreenshotsWithAI: true`, all diagnostics
Example:
User: "Analyze https://your-workspace.zebrunner.com/projects/MCP/automation-launches/120911/tests/5455386"
Claude automatically calls:
```
{
  projectKey: "MCP",
  testRunId: 120911,
  testId: 5455386,
  includeVideo: true,
  analyzeScreenshotsWithAI: true,
  includeLogs: true,
  includeScreenshots: true,
  analyzeSimilarFailures: true,
  screenshotAnalysisType: "detailed",
  format: "detailed"
}
```
2. Launch Analysis URLs
```
https://your-workspace.zebrunner.com/projects/PROJECT/automation-launches/LAUNCH_ID
```
What happens:
Claude automatically calls `detailed_analyze_launch_failures`
Extracts: `projectKey`, `testRunId` (launch ID)
Enables: `includeScreenshotAnalysis: true`, comprehensive analysis
Example:
User: "Analyze https://your-workspace.zebrunner.com/projects/MCP/automation-launches/120911"
Claude automatically calls:
```
{
  projectKey: "MCP",
  testRunId: 120911,
  filterType: "without_issues",
  includeScreenshotAnalysis: true,
  screenshotAnalysisType: "detailed",
  format: "summary",
  executionMode: "sequential"
}
```
Advanced Usage
Override Default Settings
Claude understands natural language overrides:
User: "Analyze https://...url... but without screenshots"
→ Claude sets: analyzeScreenshotsWithAI: false

User: "Analyze https://...url... in jira format"
→ Claude sets: format: "jira"

User: "Quick analysis of https://...url..."
→ Claude sets: format: "summary", screenshotAnalysisType: "basic"

Multiple URLs
Analyze multiple tests/launches in one request:
User: "Compare these failures:
https://your-workspace.zebrunner.com/projects/MCP/automation-launches/120911/tests/5455386
https://your-workspace.zebrunner.com/projects/MCP/automation-launches/120911/tests/5455390"
→ Claude analyzes both sequentially and compares results
Cross-Workspace Support
⚠️ URLs from different workspaces will show a warning but still attempt analysis:
User: "Analyze https://other-workspace.zebrunner.com/..."
→ Claude warns: "URL is from 'other-workspace.zebrunner.com' but configured workspace is 'your-workspace.zebrunner.com'"
→ Proceeds with analysis using available credentials
URL Pattern Reference
| Component | Example | Extracted As | Used In Tool |
| --- | --- | --- | --- |
| Workspace | | Validation only | N/A |
| Project Key | | | All tools |
| Launch ID | | | All tools |
| Test ID | | | |
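The URL shapes above can be parsed with a short sketch like the following (an illustration, not the server's implementation; the returned field names mirror the tool parameters shown earlier):

```typescript
// Matches both launch and test URLs:
//   .../projects/PROJECT/automation-launches/LAUNCH_ID
//   .../projects/PROJECT/automation-launches/LAUNCH_ID/tests/TEST_ID
interface ParsedZebrunnerUrl {
  projectKey: string;
  testRunId: number; // launch ID
  testId?: number;
}

function parseZebrunnerUrl(url: string): ParsedZebrunnerUrl | null {
  const match = url.match(
    /\/projects\/([^/]+)\/automation-launches\/(\d+)(?:\/tests\/(\d+))?/
  );
  if (!match) return null;
  const parsed: ParsedZebrunnerUrl = {
    projectKey: match[1],
    testRunId: Number(match[2]),
  };
  if (match[3]) parsed.testId = Number(match[3]);
  return parsed;
}
```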
Why Use URL-Based Analysis?
✅ Faster: No need to manually specify IDs
✅ Convenient: Copy-paste URLs directly from Zebrunner UI
✅ Optimized: Automatic use of recommended settings
✅ Smart: Claude detects intent and adjusts parameters
✅ Flexible: Natural language overrides work seamlessly
Pro Tips
Direct from Zebrunner: Copy URL directly from your browser while viewing a test/launch
Batch Analysis: Paste multiple URLs separated by newlines
Custom Settings: Add natural language instructions to override defaults
Quick Checks: URLs work great for quick "what happened here?" questions
Reports: Combine with format requests: "Generate JIRA ticket for https://...url..."
Available Tools
Once connected, you can use these tools through natural language in your AI assistant. Here's a comprehensive reference of all 33+ available tools organized by category:
Test Case Management
Core Test Case Tools
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Get detailed test case information | | All roles |
| | Advanced filtering with automation states, dates | | QA, SDETs |
| | Filter by specific automation states | | SDETs, Managers |
| | Search test cases by title (partial match) | | All roles |
| | Advanced filtering by suite, dates, priority, automation state | | QA, Managers |
| | List available automation states | | All roles |
| | List available priorities with IDs | | All roles |
Batch Test Case Operations
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Get ALL test cases (handles pagination) | | Managers, Leads |
| | All test cases with hierarchy info | | Analysts |
Test Suite Hierarchy & Organization
Suite Management
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | List suites with pagination | | All roles |
| | Hierarchical tree view | | Managers, QA |
| | Get top-level suites | | Managers |
| | Get all child suites | | QA, Analysts |
Suite Analysis Tools
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Find specific suite by ID | | All roles |
| | Comprehensive suite listing | | Managers |
| | Find root suite for any suite | | Analysts |
Test Coverage & Analysis
Coverage Analysis
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Analyze implementation coverage | | Developers, SDETs |
| | Rules-based coverage analysis | | SDETs, Leads |
Duplicate Analysis
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Find and group similar test cases by step similarity | | QA Managers, SDETs |
| | Advanced semantic analysis with LLM-powered step clustering | | Senior QA, Test Architects |
Clickable Links Feature: Both duplicate analysis tools support clickable links to Zebrunner web UI:
Add `include_clickable_links: true` to make test case keys clickable in markdown output
JSON/DTO formats automatically include `webUrl` fields when enabled
Links are generated from your `ZEBRUNNER_URL` environment variable
Example: "Analyze suite 17585 for duplicates with clickable links enabled"
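The step-based similarity idea can be sketched roughly as follows (the server's actual metric is not documented here; Jaccard similarity over normalized step text is just one plausible illustration):

```typescript
// Normalize steps so trivial wording/punctuation differences still match.
function normalizeStep(step: string): string {
  return step
    .toLowerCase()
    .replace(/[^a-z0-9 ]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

// Jaccard similarity: |intersection| / |union| of the two step sets.
function stepSimilarity(stepsA: string[], stepsB: string[]): number {
  const a = new Set(stepsA.map(normalizeStep));
  const b = new Set(stepsB.map(normalizeStep));
  const intersection = [...a].filter((step) => b.has(step)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}
```

A pair of test cases scoring above the configured threshold (e.g. 80%) would then be grouped as potential duplicates.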
Test Code Generation & Validation
AI-Powered Tools
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Generate test code with framework detection | | SDETs, Developers |
| | Quality validation with improvement | | QA, Managers |
| | Dedicated improvement tool | | QA, SDETs |
Launch & Execution Management
Launch Operations ⭐ Essential for Managers
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Comprehensive launch information | | Managers, Leads |
| | Quick launch overview | | Managers |
| | All launches with pagination | | Managers, Leads |
| | Filter by milestone/build | | Managers, Leads |
| | Weekly regression stability report with WoW delta, linked issues, and strict Jira-ready output. Supports launch list or build-based auto-discovery (version-segment build lookup with | | Managers, Leads |
Reporting & Analytics
Test Failure Analysis (Game Changer)
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Deep forensic analysis of failed tests with logs, screenshots, error classification, and recommendations. NEW in v5.11.0: Compare with last passed execution! Shows what changed (logs, duration, environment). Also: | | QA Engineers, SDETs, Managers |
| | NEW in v5.11.0! Track test execution trends across launches. View pass/fail history, find last passed execution, calculate pass rate. Critical Detection: Highlights when test failed in all recent runs! | | QA Engineers, SDETs, Managers |
| | Enhanced in v4.12.1: Analyze failures WITHOUT linked issues with Claude-level intelligence + Jira format support. Auto-deep-dive with executive summary, timeline, patterns, priorities. NEW: Generate Jira-ready tickets for entire launches! | | QA Managers, SDETs, Team Leads |
✅ FIXED in v5.2.4! Improved Reliability & Video Links
Video URLs fixed: Now uses test-sessions API (`/api/reporting/v1/launches/{id}/test-sessions`) for reliable video artifact extraction
Comprehensive error handling: Gracefully handles missing screenshots/logs (returns empty arrays instead of throwing)
No more "no result received" errors: All API calls have proper try-catch blocks with fallbacks
Better debugging: Enhanced logging when `debug: true` is enabled in config
Schema updates: Supports both old and new API structures for backward compatibility
NEW in v4.12.1! Jira-Ready Ticket Format
Use `format: 'jira'` to generate ready-to-paste Jira tickets
Auto-calculated priority based on stability and impact
Smart labels: `test-automation`, `locator-issue`, `flaky-test`, etc.
Complete Jira markup: Tables, panels, code blocks, clickable links
Prominent video links: Beautiful panels + links section
Copy-paste ready: No manual formatting needed
Saves 5-10 minutes per ticket with consistent quality
Enhanced in v4.11.1! `detailed_analyze_launch_failures` provides automatic deep synthesis like Claude would manually provide:
Executive Summary: Key findings, patterns, and stability indicators
Timeline Analysis: When failures first appeared, progression tracking
Pattern Analysis: Groups by root cause with affected tests and stability %
Priority-Based Recommendations: 🔴 HIGH / 🟡 MEDIUM / 🟢 LOW with impact analysis
Enhanced Test Details: Full error messages, stack traces, timestamps
Smart Follow-up Questions: Guides next investigation steps
Smart filtering: Analyzes only tests WITHOUT linked issues by default
Optional AI screenshot analysis for all tests
No manual follow-up needed - get complete picture in one call!
NEW in v5.11.0! Test Execution History & Comparison
Track execution trends: View pass/fail history across launches with `get_test_execution_history`
Compare with last passed: New `compareWithLastPassed` parameter in `analyze_test_failure`
Compare logs (new errors detection)
Compare duration (performance regression)
Compare environment (device/platform changes)
Compare screenshots (visual differences)
⚠️ Critical detection: Automatically highlights when test failed in all recent executions
Regression analysis: See exactly what changed between passed and failed runs
Pass rate metrics: Calculate test stability over time
See TOOLS_CATALOG.md for example prompts!
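The pass-rate and critical-detection ideas reduce to simple arithmetic; a minimal sketch (statuses ordered newest-first; the function name and the 5-run window are assumptions for illustration, not the tool's actual parameters):

```typescript
// Given execution statuses ordered newest-first, compute the pass rate
// and flag the critical case where every recent run failed.
function summarizeHistory(statuses: string[], recentWindow = 5) {
  const passed = statuses.filter((s) => s === "PASSED").length;
  const recent = statuses.slice(0, recentWindow);
  return {
    passRate: statuses.length === 0 ? 0 : passed / statuses.length,
    allRecentFailed: recent.length > 0 && recent.every((s) => s === "FAILED"),
  };
}
```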
Screenshot Analysis & Visual Forensics (Enhanced in v4.11.0)
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Download protected screenshots from Zebrunner with authentication | | QA Engineers, Automation Engineers |
| | Visual analysis with OCR, UI detection, and Claude Vision | | QA Engineers, SDETs, Developers |
Enhanced! Screenshot analysis is now integrated directly into `analyze_test_failure` and `analyze_launch_failures`, so there is no need to call it separately. See the Screenshot Analysis Guide for details.
Platform & Results Analysis ⭐ Critical for Management
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Test results by platform/period | | Managers, Leads |
| | Most frequent defects | | Managers, Developers |
| | Detailed bug review with failure analysis, priority breakdown, and automatic detail fetching | | Managers, QA, Developers |
| | Comprehensive failure info by hashcode (alternative to auto-fetch) | | Developers, SDETs |
| | Available milestones | | Managers, PMs |
Project Discovery
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Discover all accessible projects | | All roles |
| | Test API connectivity | | All roles |
Test Run Management
Public API Test Runs ⭐ Powerful for Analysis
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Advanced test run filtering | | Managers, SDETs |
| | Detailed test run information | | Managers, QA |
| | Test cases in a specific run | | QA, Analysts |
Configuration Management
| Tool | Description | Example Usage | Best For |
| --- | --- | --- | --- |
| | Available result statuses | | QA, SDETs |
| | Configuration options | | SDETs, Leads |
Management-Focused Quick Commands
Daily Standup Reports
```shell
# Get yesterday's results
"Get platform results for last 7 days for project MCP"

# Check recent failures
"Show me top 5 bugs from last week"

# Review recent launches
"Get all launches for project MCP from last 3 days"
```
Test Suite Optimization
```shell
# Basic duplicate analysis
"Analyze suite 12345 for duplicates with 80% similarity threshold"

# Advanced semantic analysis with step clustering
"Semantic analysis of suite 12345 with 85% step clustering and medoid selection"

# Analyze specific test cases for duplicates
"Analyze test cases MCP-123, MCP-124, MCP-125 for duplicates"

# Project-wide duplicate analysis (use with caution - large datasets)
"Analyze project MCP for test case duplicates with 85% similarity"

# Get detailed similarity matrix with pattern types
"Analyze suite 12345 for duplicates with similarity matrix included"

# Two-phase clustering with semantic insights
"Semantic duplicate analysis with step clustering threshold 90% and insights enabled"

# Enable clickable links for easy navigation
"Analyze suite 17585 for duplicates with clickable links enabled"
```
Weekly Management Reports
```shell
# Comprehensive project health
"Get all launches for project MCP with milestone filter"

# Platform performance analysis
"Get iOS and Android test results for the last month"

# Quality metrics
"Get all test cases by automation state for project MCP"
```
Milestone & Release Planning
```shell
# Milestone tracking
"Get project milestones for MCP with completion status"

# Build-specific results
"Get launches for build 'mcp-app-2.1.0-release' and milestone '2.1.0'"

# Release readiness
"Get automation readiness for all test cases in project MCP"
```
Issue Analysis & Troubleshooting
```shell
# Bug analysis
"Show me top 10 most frequent bugs with issue links"

# Failure investigation
"Get test run 12345 details with all test cases"

# Platform-specific issues
"Get Android test results for last 7 days with failure analysis"
```
Role-Specific Prompts & Workflows
Manual QA Engineers
Daily Test Case Review
"Get test case MCP-45 details and validate its quality"
"Show me all test cases in suite 18708 that need improvement"
"Validate test case MCP-67 and suggest specific improvements"
"Find test cases with title containing 'login' to review authentication tests"
"Get test cases from suite 491 with high priority for today's testing"
Test Case Creation & Improvement
"I'm writing a test case for login functionality. What should I include based on our quality standards?"
"Improve test case MCP-89 - it's missing some preconditions"
"Check if test case MCP-12 is ready for manual execution"
Test Suite Organization
"Show me the hierarchy of test suites for project MYAPP to understand the structure"
"Get all subsuites from Authentication suite to review test coverage"
"List test cases in suite 18708 and identify which ones need validation"
"Find test cases with title containing 'payment' to organize payment testing"
"Get all high priority test cases from suite 491 for release testing"
Coverage Analysis
"I executed test case MCP-34 manually. Here's what I did: [paste your execution notes]. Analyze coverage against the documented steps."
"Compare test case MCP-56 with this manual testing session: [paste session details]"
Test Automation Engineers & SDETs
Automation Readiness Assessment
"Validate test case MCP-78 for automation readiness"
"Get all test cases in suite 18708 and identify which ones are ready for automation"
"Check test case MCP-23 - does it have clear, unambiguous steps for automation?"
"Find test cases with title containing 'API' to prioritize API automation"
"Get automation priorities to understand which test cases to automate first"
"Get test cases from suite 491 with 'Not Automated' state for automation planning"
Test Code Generation
"Generate Java/Carina test code for MCP-45 based on this existing framework: [paste framework code]"
"Create JavaScript/Jest test for MCP-67 using this test structure: [paste test example]"
"Generate Python/Pytest code for MCP-89 with these page objects: [paste page object code]"
Coverage Analysis & Validation
"Analyze test coverage for MCP-34 against this automated test: [paste test code]"
"Enhanced coverage analysis for MCP-56 with rules validation - here's my implementation: [paste code]"
"Compare test case MCP-78 steps with this Selenium test: [paste selenium code]"
Framework Integration
"Generate test code for MCP-45 using our Carina framework with these page objects: [paste existing code]"
"Create test automation for MCP-67 that integrates with this CI/CD pipeline: [paste pipeline config]"
"Generate API test for MCP-89 using this RestAssured setup: [paste API test framework]"
Batch Automation Analysis
"Validate all test cases in Authentication suite for automation readiness"
"Generate coverage report for all test cases in project MYAPP"
"Identify test cases in suite 18708 that have automation blockers"
"Find test cases with title containing 'regression' for automation sprint planning"
"Get test cases from suite 491 created after 2025-01-01 with high priority for next automation cycle"
"Get automation priorities and states to create automation roadmap"
Developers
Test Case Understanding
"Get test case MCP-45 details to understand what I need to implement"
"Show me test cases related to login functionality in project MYAPP"
"Explain test case MCP-67 requirements in developer-friendly format"
"Find test cases with title containing 'authentication' for my feature development"
"Get high priority test cases from suite 491 that I need to implement"
Implementation Validation
"I implemented this feature: [paste code]. Analyze coverage against test case MCP-34"
"Here's my API implementation: [paste code]. Check coverage against test case MCP-56"
"Validate my UI implementation against test case MCP-78: [paste component code]"
Code Generation for Testing
"Generate unit tests for test case MCP-45 using Jest framework"
"Create integration tests for MCP-67 based on this API: [paste API code]"
"Generate test data setup for MCP-89 using this database schema: [paste schema]"
Bug Analysis
"Get test execution results for launch 118685 to understand recent failures"
"Show me top bugs from last week related to my feature area"
"Get detailed bug review for Android project from last 14 days"
"Show me comprehensive failure information for hashcode 1051677506"
"What are the top 50 bugs affecting our project this month?"
"Give me a summary of bug failures with reproduction dates from last 7 days"
"Analyze test case MCP-34 - why might it be failing in automation?"
Test Managers & Team Leads
Team Quality Metrics
"Get quality metrics for all test cases in project MYAPP"
"Show me test cases that need improvement in suite 18708"
"Generate quality report for test cases created this month"
"Find test cases with title containing 'critical' to assess critical path quality"
"Get automation priorities to align team efforts with business priorities"
"Get test cases from suite 491 with high priority that need quality improvements"
Test Suite Analysis
"Show me the complete test suite hierarchy for project MYAPP"
"Analyze test coverage across all suites in project MYAPP"
"Get automation readiness status for all test cases in Authentication suite"
Team Performance & Planning
"Get test execution results by platform for the last 30 days"
"Show me top 10 most frequent bugs to prioritize fixes"
"Analyze test case quality trends in project MYAPP"
"Get all launches for project MYAPP from last 30 days with milestone tracking"
"Show me platform results for last 7 days to track team performance"
"Get test runs with status 'FAILED' from last week for team retrospective"
Process Improvement
"Validate all test cases in suite 18708 to identify common quality issues"
"Generate improvement recommendations for test cases created by junior team members"
"Analyze which test cases are consistently failing automation"
"Get top 10 bugs from last month to identify process improvements"
"Show me test runs with detailed failure analysis for process optimization"
"Get automation readiness metrics across all test cases"
"Find test cases with title containing 'flaky' to address test stability"
"Get test cases from suite 491 with medium priority that could be automated"
"Get automation priorities to optimize team resource allocation"
Reporting & Stakeholder Communication
"Generate comprehensive test coverage report for project MYAPP in markdown format"
"Get test execution summary for launch 118685 for stakeholder presentation"
"Show me test quality metrics and improvement suggestions for quarterly review"
"Get platform results by period for executive dashboard"
"Create milestone progress report with test execution data"
"Generate weekly team performance report with launch and bug metrics"
Daily Management Tasks
"Get all launches for project MYAPP from yesterday"
"Show me top 5 bugs from last 7 days with issue links"
"Get platform results for iOS and Android from last week"
"Check automation readiness for upcoming release milestone"
"Get test run details for failed runs from last 24 hours"
"Show me project milestones and their completion status"
Project Owners & Product Managers
Project Health Overview
"Get overall test coverage status for project MYAPP"
"Show me test execution results by platform for the last quarter"
"Generate project testing health report in markdown format"
"Get all launches for project MYAPP with milestone and build tracking"
"Show me platform results summary for executive review"
"Get project milestones with completion status and testing metrics"
Feature Testing Status
"Get test cases related to [feature name] in project MYAPP"
"Show me test execution results for [feature name] functionality"
"Analyze test coverage for [epic/story] requirements"
"Get launches filtered by milestone for feature release tracking"
"Show me test runs for specific build versions"
Quality Assurance Metrics
"Get quality metrics for all test cases in project MYAPP"
"Show me test case validation results and improvement areas"
"Generate testing quality report for stakeholder presentation"
"Get top bugs analysis for quality trend assessment"
"Show me automation vs manual testing ratio across the project"
"Find test cases with title containing 'smoke' to assess smoke test coverage"
"Get automation priorities to communicate testing strategy to stakeholders"
"Get test cases from suite 491 with critical priority for risk assessment"
Risk Assessment
"Show me top 10 most frequent bugs in project MYAPP"
"Get test cases that are not ready for automation and assess risk"
"Analyze test execution trends to identify potential quality risks"
"Get platform-specific failure rates for the last month"
"Show me test runs with high failure rates for risk mitigation"
"Get milestone-based testing progress for release risk assessment"
Planning & Resource Allocation
"Get automation readiness assessment for all test cases in project MYAPP"
"Show me test cases that need quality improvement and estimate effort"
"Analyze test suite structure to identify optimization opportunities"
"Get testing resource utilization by platform and time period"
"Show me milestone testing progress for sprint planning"
"Get comprehensive launch analysis for capacity planning"
"Find test cases with title containing 'performance' to plan performance testing"
"Get automation priorities to allocate automation resources effectively"
"Get test cases from suite 491 created in last month to plan review sessions"
Executive Reporting
"Generate executive dashboard with platform results and bug trends"
"Get quarterly testing metrics with milestone progress"
"Show me ROI analysis of automation vs manual testing efforts"
"Create board-ready testing status report with key metrics"
"Get testing velocity trends for project timeline assessment"
Output Formats
All tools support multiple output formats:
json - Structured data (default)
markdown - Rich formatted output with sections and tables
string - Human-readable text summaries
dto - Raw data objects
Example:
"Get test case PROJ-123 in markdown format"
"Show me test suites as JSON"
Configuration Options
Environment Variables
# Required
ZEBRUNNER_URL=https://your-instance.zebrunner.com/api/public/v1
ZEBRUNNER_LOGIN=your.email@company.com
ZEBRUNNER_TOKEN=your_api_token
# Optional - Basic Settings
DEBUG=false # Enable detailed logging (default: false)
DEFAULT_PAGE_SIZE=100 # Default items per page (optional)
MAX_PAGE_SIZE=100 # Maximum items per page (optional)
# Optional - Intelligent Rules System
ENABLE_RULES_ENGINE=true # Enable intelligent rules (auto-detected if rules file exists)
MCP_RULES_FILE=custom-rules.md # Custom technical rules file (optional)
MIN_COVERAGE_THRESHOLD=70 # Minimum coverage percentage (optional)
REQUIRE_UI_VALIDATION=true # Require UI validation in tests (optional)
REQUIRE_API_VALIDATION=true # Require API validation in tests (optional)
Rules System Configuration
The rules system automatically detects and uses rules files in your project root:
Automatic Detection
If you have a mcp-zebrunner-rules.md file in your project root, the rules engine will automatically enable itself.
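The auto-detection rule above amounts to a simple file-existence check, sketched below for illustration; the filename comes from this README, while the scratch directory stands in for your project root.

```shell
# Sketch of the auto-detection rule: the engine enables itself when
# mcp-zebrunner-rules.md is present in the project root.
# ROOT is a scratch directory for demonstration; use your project root instead.
ROOT=$(mktemp -d)
touch "$ROOT/mcp-zebrunner-rules.md"
if [ -f "$ROOT/mcp-zebrunner-rules.md" ]; then
  echo "rules engine: enabled"
else
  echo "rules engine: disabled"
fi
```

Deleting or renaming the file (and restarting the server) disables the engine again unless ENABLE_RULES_ENGINE=true is set explicitly.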
Custom Rules Files
You can customize the three types of rules:
Test Case Review Rules (test_case_review_rules.md)
# Custom Test Case Review Rules
## Rule 1: Title Quality
- Titles must be descriptive and specific
- Minimum length: 10 characters
- Should not contain vague terms like "test", "check"
## Rule 2: Test Steps
- Each step must have clear action and expected result
- Steps should be numbered and sequential
- Avoid combining multiple actions in one step
Analysis Checkpoints (test_case_analysis_checkpoints.md)
# Custom Analysis Checkpoints
## Independence Assessment
- [ ] Can this test case run independently?
- [ ] Are all preconditions explicitly stated?
- [ ] No dependencies on other test cases?
## Automation Readiness
- [ ] All steps are unambiguous?
- [ ] Technical feasibility confirmed?
- Stable selectors available?
Technical Rules (mcp-zebrunner-rules.md)
# Technical Configuration
## Coverage Thresholds
- Overall Coverage: 80%
- Critical Steps: 95%
- UI Validation Steps: 85%
## Framework Detection
**Java/TestNG**:
- Keywords: @Test, TestNG, WebDriver
- File patterns: *Test.java, *Tests.java
Testing Your Setup
Run health checks:
npm run test:health
Test API connection:
npm run smoke
Run full test suite:
npm test
Run specific test types:
npm run test:unit        # Fast unit tests
npm run test:integration # API integration tests
npm run test:e2e         # End-to-end tests
Troubleshooting
Common Issues
"Authentication failed" or 401 errors
✅ Check your ZEBRUNNER_LOGIN and ZEBRUNNER_TOKEN
✅ Verify your API token is still valid
✅ Ensure your user has proper permissions in Zebrunner
"Project not found" or 404 errors
✅ Check the project key spelling (e.g., "MYAPP", not "myapp")
✅ Verify you have access to the project in Zebrunner
✅ Some endpoints may not be available on all Zebrunner instances
"Connection timeout" errors
✅ Check your ZEBRUNNER_URL is correct
✅ Ensure your network can reach the Zebrunner instance
✅ Try increasing the timeout in configuration
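Before blaming the network, it can help to sanity-check the URL value itself. The sketch below is an illustrative shell check, not part of the server: the /api/public/v1 suffix comes from the configuration example earlier in this README, and the commented curl probe requires network access.

```shell
# Verify ZEBRUNNER_URL has the expected shape before testing reachability.
# The /api/public/v1 suffix matches the configuration example above.
ZEBRUNNER_URL="${ZEBRUNNER_URL:-https://your-instance.zebrunner.com/api/public/v1}"
case "$ZEBRUNNER_URL" in
  https://*/api/public/v1) echo "URL format looks correct" ;;
  *) echo "URL should look like https://<instance>/api/public/v1" ;;
esac
# Then check reachability (requires network):
#   curl -sS -o /dev/null -w '%{http_code}\n' "$ZEBRUNNER_URL"
```

If the pattern check fails, fix the value in your .env before investigating timeouts further.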
MCP integration not working
✅ Verify the path to dist/server.js is correct
✅ Check that the project built successfully (npm run build)
✅ Ensure environment variables are set in the MCP configuration
✅ Look at Claude Desktop/Code logs for error messages
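When the path or environment variables are the suspect, comparing against a known-good client entry helps. The sketch below shows a typical Claude Desktop entry (claude_desktop_config.json); the absolute server path and credential values are placeholders you must replace, and other MCP clients keep their configuration in different locations.

```json
{
  "mcpServers": {
    "zebrunner": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-zebrunner/dist/server.js"],
      "env": {
        "ZEBRUNNER_URL": "https://your-instance.zebrunner.com/api/public/v1",
        "ZEBRUNNER_LOGIN": "your.email@company.com",
        "ZEBRUNNER_TOKEN": "your_api_token"
      }
    }
  }
}
```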
Rules engine not working
✅ Check that ENABLE_RULES_ENGINE=true is set in your .env file
✅ Verify rules files exist and have meaningful content
✅ Restart the MCP server after changing rules files
✅ Check debug logs for rules parsing errors
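One quick way to confirm the rules files are in place and non-empty is a check like the one below. The three filenames come from this README; the scratch directory (and the sample rule written into it) merely stand in for a real project root.

```shell
# Sanity-check that each rules file exists and is non-empty (-s).
# ROOT is a scratch directory for demonstration; use your project root instead.
ROOT=$(mktemp -d)
printf '# Rule 1: Title Quality\n' > "$ROOT/test_case_review_rules.md"
for f in mcp-zebrunner-rules.md test_case_review_rules.md test_case_analysis_checkpoints.md; do
  if [ -s "$ROOT/$f" ]; then echo "ok: $f"; else echo "missing or empty: $f"; fi
done
```

Any "missing or empty" line points at a file the rules engine will ignore or fail to parse.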
Debug Mode
Enable detailed logging to troubleshoot issues:
DEBUG=true
This will show:
API requests and responses
Error details and stack traces
Performance metrics
Feature availability
Rules parsing and validation details
Getting Help
Check the logs - Enable debug mode and look for error messages
Test your connection - Run npm run test:health
Verify your configuration - Double-check your .env file
Check Zebrunner permissions - Ensure your user has proper access
Validate rules files - Ensure rules files have meaningful content
Example Workflows
Workflow 1: Test Case Review (Manual QA)
1. "Get test case PROJ-123 details"
2. "Validate test case PROJ-123"
3. "Improve test case PROJ-123 with specific suggestions"
4. "Check if test case PROJ-123 is ready for manual execution"
Workflow 2: Test Automation (SDET)
1. "Validate test case PROJ-456 for automation readiness"
2. "Generate Java/Carina test code for PROJ-456"
3. "Analyze coverage between test case and my implementation"
4. "Get automation readiness assessment"
Workflow 3: Implementation Validation (Developer)
1. "Get test case PROJ-789 details to understand requirements"
2. "Analyze coverage for PROJ-789 against my implementation"
3. "Generate unit tests based on test case requirements"
4. "Validate implementation completeness"
Workflow 4: Quality Management (Team Lead)
1. "Get quality metrics for all test cases in project MYAPP"
2. "Show me test cases that need improvement"
3. "Generate team quality report"
4. "Identify automation readiness across the project"
Workflow 5: Project Health (Product Manager)
1. "Get overall test coverage status for project MYAPP"
2. "Show me test execution results by platform"
3. "Generate project testing health report"
4. "Identify quality risks and improvement opportunities"
Advanced Features
Batch Operations
Process multiple test cases at once:
"Validate all test cases in suite 18708"
"Generate coverage report for all test cases in project MYAPP"
"Improve all test cases that have quality issues"
Custom Output Formats
Get data in the format you need:
"Get test cases as JSON for API integration"
"Show test suite hierarchy in markdown for documentation"
"Generate quality report in markdown for stakeholder presentation"
Filtering and Search
Find exactly what you need:
"Get test cases created after 2025-01-01"
"Find test cases with automation state 'Manual'"
"Show me test cases that are not ready for automation"
Rules-Based Analysis
Leverage intelligent validation:
"Validate test case PROJ-123 using custom rules from my-project-rules.md"
"Enhanced coverage analysis with framework-specific rules"
"Generate improvement suggestions based on team quality standards"
Additional Documentation
Tool References
TOOLS_CATALOG.md - Complete catalog of all 40+ tools with natural language examples
INSTALL-GUIDE.md - Step-by-step installation and setup guide
Intelligent Rules System
docs/INTELLIGENT_RULES_SYSTEM.md - Complete guide to the 3-tier intelligent rules system
docs/RULES_QUICK_REFERENCE.md - Quick reference for rules system commands and configuration
Rules Files (Customizable)
test_case_review_rules.md - Core quality standards and writing guidelines
test_case_analysis_checkpoints.md - 100+ detailed validation checkpoints
mcp-zebrunner-rules.md - Technical configuration for test generation and coverage analysis
Specialized Guides
docs/SCREENSHOT_ANALYSIS.md - Screenshot download and visual analysis guide
change-logs.md - Version history and feature updates
Feature Documentation
docs/NEW_LAUNCHER_TOOL.md - Detailed information about launch and reporting tools
docs/SUITE_HIERARCHY.md - Complete guide to suite hierarchy features
docs/TEST_CASE_VALIDATION_IMPLEMENTATION.md - Test case validation system details
docs/ENHANCED_VALIDATION_FEATURES.md - Advanced validation and improvement features
Contributing
Fork the repository
Create a feature branch
Make your changes with appropriate tests
Ensure all tests pass: npm test
Submit a pull request
License
This project is licensed under AGPL-3.0 to ensure that all improvements to the MCP server remain open-source, especially when the software is deployed as a network service.
If you modify and run the server in a way that users interact with it over a network (e.g., Claude Desktop / Cursor clients), you must make the full modified source code available to those users.
Commercial usage is allowed. For closed-source modifications or enterprise licensing, please contact the author. See LICENSE.md file for details.
You're Ready!
Once you've completed the setup:
Test your connection with npm run test:health
Configure your AI assistant with the MCP server
Start asking questions about your test cases!
Example first commands to try:
"List test suites for project [YOUR_PROJECT_KEY]"
"Get test case [YOUR_TEST_CASE_KEY] details"
"Validate test case [YOUR_TEST_CASE_KEY]"
"Show me the test suite hierarchy"
The intelligent rules system will help ensure your test cases meet quality standards and are ready for both manual execution and automation. Happy testing!