# Test Case Analysis Checkpoints

## Pre-Analysis Setup

### ☐ Document Structure Validation
- [ ] **Logical Hierarchy**: Sections/subsections follow a logical flow
- [ ] **Clear Naming**: All sections have descriptive, consistent names
- [ ] **Complete Documentation**: Main section includes requirements links, visualization links, split info, and restrictions
- [ ] **Reference Validity**: All linked documents and resources are accessible

## Core Test Case Analysis Checkpoints

### ☐ Independence Assessment
- [ ] **Standalone Execution**: Can this test case run independently without other test cases?
- [ ] **Complete Preconditions**: All necessary setup conditions explicitly stated?
- [ ] **No External Dependencies**: Test case doesn't rely on state from previous test cases?
- [ ] **Self-Sufficient**: All required data and configurations included?

### ☐ Scope and Responsibility
- [ ] **Single Focus**: Does the test case validate one specific scenario only?
- [ ] **Clear Objective**: Is the main testing goal immediately apparent?
- [ ] **Appropriate Scope**: Not too narrow (trivial) or too broad (complex)?
- [ ] **Proper Separation**: Multiple scenarios properly split into separate test cases?

### ☐ Title and Description Quality
- [ ] **Descriptive Title**: Title clearly communicates what is being tested?
- [ ] **One-Sentence Rule**: Title is concise but complete?
- [ ] **Understandable**: Someone unfamiliar with the feature can understand the objective?
- [ ] **No Ambiguity**: Title has a single, clear interpretation?

### ☐ Preconditions Completeness
- [ ] **Feature Configuration**: Split name and treatment specified?
- [ ] **Environment Settings**: Device language, app location, and platform specified?
- [ ] **User State**: Login status, subscription level, and user type defined?
- [ ] **Data Requirements**: Required test data explicitly stated?
- [ ] **System State**: App version, feature flags, and backend configuration noted?

### ☐ Step-by-Step Analysis
- [ ] **Complete Flow**: Steps start from the absolute beginning (app launch, login)?
- [ ] **No Assumptions**: Each step explicitly states the required actions?
- [ ] **Logical Sequence**: Steps follow the natural user journey?
- [ ] **Actionable Instructions**: Each step has a clear, executable action?
- [ ] **No Missing Links**: No gaps between steps?

### ☐ Expected Results Validation
- [ ] **Template Consistency**: Correct template used (step-by-step vs. final result)?
- [ ] **Clear Expectations**: Expected results are specific and measurable?
- [ ] **Visual Support**: Images provided for complex UI validations?
- [ ] **Annotation Quality**: Arrows/highlights clearly indicate validation points?
- [ ] **Complete Coverage**: All important changes/states validated?

## Technical Quality Checkpoints

### ☐ Automation Readiness
- [ ] **Unambiguous Steps**: No interpretation required for any step? (See the sketch after the Platform Considerations list.)
- [ ] **Technical Feasibility**: All actions can be automated?
- [ ] **Data Availability**: Test data is obtainable programmatically?
- [ ] **Environment Compatibility**: Works across target test environments?
- [ ] **Stable Selectors**: UI elements can be reliably identified?

### ☐ Platform Considerations
- [ ] **Cross-Platform Awareness**: Platform-specific behaviors documented?
- [ ] **Device Limitations**: Hardware/software constraints considered?
- [ ] **Performance Implications**: Time-sensitive operations properly handled?
- [ ] **Network Dependencies**: Connectivity requirements specified?
- [ ] **Version Compatibility**: OS/app version requirements clear?
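To make the **Unambiguous Steps** and **Stable Selectors** checkpoints concrete, here is a minimal sketch of an automation-ready step, assuming a WebdriverIO-style Appium client. The accessibility IDs (`login-email`, `login-submit`, `home-greeting`) and the function name are hypothetical; real values would come from the app under test.

```typescript
import { $ } from '@wdio/globals';

// Minimal sketch, assuming a WebdriverIO + Appium setup.
// Accessibility IDs below are hypothetical placeholders.
async function loginAsStandardUser(email: string, password: string): Promise<void> {
  // Stable selector: accessibility ID instead of a brittle index-based XPath.
  const emailField = await $('~login-email');
  await emailField.setValue(email);

  const passwordField = await $('~login-password');
  await passwordField.setValue(password);

  await (await $('~login-submit')).click();

  // Explicit wait instead of a fixed sleep: avoids the "time-dependent
  // operations without proper waits" automation blocker listed below.
  const greeting = await $('~home-greeting');
  await greeting.waitForDisplayed({ timeout: 10_000 });
}
```

Each call maps one-to-one to a written test step, which is exactly what "no interpretation required for any step" is asking for.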
### โ˜ Data and Security - [ ] **Data Privacy**: Test data doesn't expose sensitive information? - [ ] **Data Cleanup**: Test execution doesn't leave persistent changes? - [ ] **Permission Requirements**: Required permissions explicitly stated? - [ ] **Authentication**: Login/security requirements properly handled? - [ ] **Test Data Management**: Clear data setup and teardown procedures? ## Quality Assurance Checkpoints ### โ˜ Language and Clarity - [ ] **Professional Language**: Proper grammar and spelling throughout? - [ ] **Consistent Terminology**: Same terms used consistently across test case? - [ ] **Clear Instructions**: No ambiguous phrases like "verify", "check", "ensure"? - [ ] **Specific Actions**: Actions use precise UI element descriptions? - [ ] **Measurable Results**: Expected results use quantifiable criteria? ### โ˜ Completeness Validation - [ ] **End-to-End Coverage**: Full user scenario covered from start to finish? - [ ] **Edge Cases**: Appropriate boundary conditions tested? - [ ] **Error Handling**: Invalid inputs and error scenarios considered? - [ ] **Recovery Paths**: User can recover from failures? - [ ] **Alternative Flows**: Multiple paths to same goal considered? ### โ˜ Maintenance Considerations - [ ] **Update Frequency**: How often will this test case need updates? - [ ] **Dependency Management**: External dependencies tracked and manageable? - [ ] **Documentation Links**: References remain valid over time? - [ ] **Version Control**: Changes tracked and reasoned? - [ ] **Knowledge Transfer**: Test case comprehensible to other team members? ## Red Flag Detection ### โ˜ Critical Issues - [ ] **No Red Flags**: Check for these critical problems: - Missing preconditions - Vague or ambiguous steps - Assumption-based instructions - Missing expected results - Multiple scenarios in one test case - Broken reference links - Platform-agnostic approach where platform matters - Unclear test data requirements ### โ˜ Automation Blockers - [ ] **No Automation Issues**: Check for automation-preventing problems: - Steps requiring human judgment - Non-deterministic outcomes - Manual verification requirements - External system dependencies not automatable - Time-dependent operations without proper waits ## Analysis Completion ### โ˜ Final Review - [ ] **Random Person Test**: Could a complete stranger execute this test case successfully? - [ ] **Value Assessment**: Does this test case add meaningful coverage? - [ ] **Duplication Check**: Is this test case unique in the test suite? - [ ] **Priority Alignment**: Does test case priority match its business importance? - [ ] **Ready for Implementation**: Test case is ready for manual execution and/or automation? ## Scoring Criteria **Excellent (90-100%)**: All checkpoints passed, ready for automation **Good (80-89%)**: Minor issues, ready for manual execution, automation with small fixes **Needs Improvement (60-79%)**: Several issues, requires revision before execution **Poor (<60%)**: Major issues, requires complete rewrite ## Post-Analysis Actions Based on checkpoint results: - **Pass**: Mark as reviewed and ready for execution - **Minor Issues**: Document specific improvements needed - **Major Issues**: Return to author with detailed feedback - **Automation Notes**: Document automation-specific considerations - **Follow-up Required**: Schedule review after improvements implemented

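Finally, the **Clear Instructions** checkpoint and the "vague or ambiguous steps" red flag lend themselves to a lightweight automated pre-screen. A minimal sketch, assuming test case steps are available as plain strings; the phrase list is a starting point, not an exhaustive rule set, and the function name is hypothetical.

```typescript
// Minimal sketch of a red-flag pre-screen for vague step wording.
// The phrase list reflects the "Clear Instructions" checkpoint; extend as needed.
const VAGUE_PHRASES = ["verify", "check", "ensure", "make sure", "properly", "as expected"];

function findVagueSteps(steps: string[]): { step: string; matches: string[] }[] {
  return steps
    .map((step) => ({
      step,
      matches: VAGUE_PHRASES.filter((phrase) => step.toLowerCase().includes(phrase)),
    }))
    .filter((result) => result.matches.length > 0);
}

// Example: the second step is flagged for "verify" and "as expected";
// the first step names a concrete action and element, so it passes.
console.log(findVagueSteps([
  'Tap the "Log in" button on the welcome screen',
  "Verify the home screen works as expected",
]));
```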