validate_test_case

Validate test cases against quality standards and best practices with dynamic rules support and automated improvement suggestions.

Instructions

🔍 Validate a test case against quality standards and best practices (Dynamic Rules Support + Improvement)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| projectKey | Yes | Project key (e.g., 'android' or 'ANDROID') | |
| caseKey | Yes | Test case key (e.g., 'ANDROID-29') | |
| rulesFilePath | No | Path to custom rules markdown file | |
| checkpointsFilePath | No | Path to custom checkpoints markdown file | |
| format | No | Output format | markdown |
| improveIfPossible | No | Attempt to automatically improve the test case | |
| include_clickable_links | No | Include clickable links to Zebrunner web UI | |
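
For orientation, the arguments for a typical call might look like the following. The values are illustrative only; the parameter names come from the schema above, but none of the example values are taken from the server's documentation.

    // Illustrative arguments for a validate_test_case call (hypothetical values).
    // Only projectKey and caseKey are required; the remaining fields fall back to defaults.
    const exampleArgs = {
      projectKey: 'ANDROID',
      caseKey: 'ANDROID-29',
      format: 'markdown',        // output format; see the Zod schema below for accepted values
      improveIfPossible: true    // also request automated improvement suggestions
    };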

Implementation Reference

  • Main MCP tool handler: fetches the Zebrunner test case by key, initializes TestCaseValidator (with optional custom rules files), runs validation, optionally generates improvements via TestCaseImprover, formats the output (JSON/markdown/string), and returns an MCP content response. A sketch of the path-validation helper it relies on follows this list.
    async validateTestCase(input: z.infer<typeof ValidateTestCaseInputSchema>) {
      const { projectKey, caseKey, rulesFilePath, checkpointsFilePath, format, improveIfPossible } = input;
      
      try {
        // Get the test case data first
        const testCase = await this.client.getTestCaseByKey(projectKey, caseKey);
        
        // Initialize validator with dynamic rules
        let validator: TestCaseValidator;
        if (rulesFilePath && checkpointsFilePath) {
          // Use custom rules from files - validate paths first
          try {
            const resolvedRulesPath = validateFilePath(rulesFilePath, process.cwd());
            const resolvedCheckpointsPath = validateFilePath(checkpointsFilePath, process.cwd());
            validator = await TestCaseValidator.fromMarkdownFiles(resolvedRulesPath, resolvedCheckpointsPath);
          } catch (error) {
            throw new Error(`Invalid file path provided: ${error instanceof Error ? error.message : error}`);
          }
        } else {
          // Use default rules, but try to load from standard files if they exist
          const defaultRulesPath = path.resolve(process.cwd(), 'test_case_review_rules.md');
          const defaultCheckpointsPath = path.resolve(process.cwd(), 'test_case_analysis_checkpoints.md');
          
          try {
            validator = await TestCaseValidator.fromMarkdownFiles(defaultRulesPath, defaultCheckpointsPath);
          } catch (error) {
            // Fall back to default rules if files don't exist
            validator = new TestCaseValidator();
          }
        }
        
        // Validate the test case
        const validationResult = await validator.validateTestCase(testCase);
        
        // Attempt improvement if requested
        let improvementResult = null;
        if (improveIfPossible) {
          const improver = new TestCaseImprover();
          improvementResult = await improver.improveTestCase(testCase, validationResult);
        }
        
        // Format the result based on requested format
        let formattedResult: string;
        
        if (format === 'markdown') {
          formattedResult = this.formatValidationResultAsMarkdown(validationResult, improvementResult);
        } else if (format === 'string') {
          formattedResult = this.formatValidationResultAsString(validationResult, improvementResult);
        } else {
          const result = improvementResult 
            ? { validation: validationResult, improvement: improvementResult }
            : validationResult;
          formattedResult = JSON.stringify(result, null, 2);
        }
        
        return {
          content: [
            {
              type: "text" as const,
              text: formattedResult
            }
          ]
        };
      } catch (error: any) {
        const errorMsg = sanitizeErrorMessage(error, 'Error validating test case', 'validateTestCase');
        return {
          content: [
            {
              type: "text" as const,
              text: errorMsg
            }
          ]
        };
      }
    }
  • Zod input schema for the validate_test_case tool, defining the required projectKey/caseKey plus optional custom rules paths, output format, and improvement flag. A usage example follows this list.
    export const ValidateTestCaseInputSchema = z.object({
      projectKey: z.string().min(1),
      caseKey: z.string().min(1),
      rulesFilePath: z.string().optional(),
      checkpointsFilePath: z.string().optional(),
      format: z.enum(['dto', 'json', 'string', 'markdown']).default('json'),
      improveIfPossible: z.boolean().default(true)
    });
  • Core helper implementing the test case validation logic: iterates over the dynamic rule set, executes specific validators (title, steps, preconditions, etc.), calculates a score, categorizes readiness for automation/manual execution, and generates issues and a summary. A sketch of the score categorization follows this list.
    async validateTestCase(testCase: ZebrunnerTestCase): Promise<ValidationResult> {
      const issues: ValidationIssue[] = [];
      const passedCheckpoints: string[] = [];
    
      // Run validation based on enabled rules
      for (const rule of this.ruleSet.rules.filter(r => r.enabled)) {
        const validationFunction = this.validationFunctions.get(rule.checkFunction);
        if (validationFunction) {
          try {
            const result = validationFunction(testCase, rule);
            if (result.passed) {
              passedCheckpoints.push(rule.name);
            } else {
              issues.push({
                category: rule.category,
                severity: rule.severity,
                checkpoint: rule.name,
                description: result.message || rule.description,
                suggestion: result.suggestion || rule.suggestion,
                ruleId: rule.id
              });
            }
          } catch (error) {
            console.warn(`Error executing validation rule ${rule.id}: ${error}`);
          }
        }
      }
    
      // Calculate score
      const totalCheckpoints = issues.length + passedCheckpoints.length;
      const score = totalCheckpoints > 0 ? Math.round((passedCheckpoints.length / totalCheckpoints) * 100) : 0;
      
      const scoreCategory = this.getScoreCategory(score);
      const readyForAutomation = score >= this.ruleSet.scoreThresholds.good && !this.hasAutomationBlockers(issues);
      const readyForManualExecution = score >= this.ruleSet.scoreThresholds.needs_improvement;
    
      // Extract automation status, priority, and status
      const automationStatus = testCase.automationState?.name || 'Unknown';
      const priority = testCase.priority?.name || undefined;
      
      // Construct status from available boolean fields
      let status: string | undefined;
      if (testCase.draft) {
        status = 'Draft';
      } else if (testCase.deprecated) {
        status = 'Deprecated';
      } else {
        // If neither draft nor deprecated, it's likely active
        status = 'Active';
      }
      
      // Extract Manual Only from custom fields
      const manualOnly = testCase.customField?.manualOnly || undefined;
    
      return {
        testCaseKey: testCase.key || 'Unknown',
        testCaseTitle: testCase.title || 'Untitled',
        automationStatus,
        priority,
        status,
        manualOnly,
        overallScore: score,
        scoreCategory,
        issues,
        passedCheckpoints,
        summary: this.generateSummary(score, issues, readyForAutomation, readyForManualExecution, automationStatus),
        readyForAutomation,
        readyForManualExecution,
        rulesUsed: `${this.ruleSet.name} v${this.ruleSet.version}`
      };
    }
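
The handler above calls a validateFilePath helper that is not shown on this page. As a rough sketch only, assuming it resolves the candidate path against a base directory and rejects anything that escapes it, such a guard could look like this; the actual implementation in mcp-zebrunner may differ:

    import path from 'node:path';

    // Hypothetical path guard with the same signature as the validateFilePath
    // call in the handler (candidate path first, base directory second).
    // Resolves the path and rejects anything outside the base directory.
    function validateFilePathSketch(candidate: string, baseDir: string): string {
      const base = path.resolve(baseDir);
      const resolved = path.resolve(base, candidate);
      if (resolved !== base && !resolved.startsWith(base + path.sep)) {
        throw new Error(`Path escapes allowed directory: ${candidate}`);
      }
      return resolved;
    }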
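The input schema is a plain Zod object, so a parse call applies the declared defaults and rejects invalid input, presumably before the handler runs. A minimal usage example with hypothetical values:

    // parse() fills in the declared defaults (format: 'json', improveIfPossible: true)
    // and throws a ZodError if required fields are missing or types are wrong.
    const input = ValidateTestCaseInputSchema.parse({
      projectKey: 'ANDROID',
      caseKey: 'ANDROID-29',
      format: 'markdown'
    });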
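The core validator references getScoreCategory and this.ruleSet.scoreThresholds without showing them. The following is a minimal sketch of how the categorization might map onto the two threshold names that appear in the snippet; the real rule set's values, labels, and logic may differ:

    // Hypothetical score categorization based on the thresholds referenced in
    // validateTestCase. Actual category names and cut-offs are assumptions.
    type ThresholdsSketch = { good: number; needs_improvement: number };

    function getScoreCategorySketch(score: number, thresholds: ThresholdsSketch): string {
      if (score >= thresholds.good) return 'good';
      if (score >= thresholds.needs_improvement) return 'needs improvement';
      return 'poor';
    }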
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It mentions 'Dynamic Rules Support + Improvement', which hints at configurable rules and potential auto-improvement, but it doesn't disclose critical details such as whether the tool is read-only or makes changes, what permissions are required, how errors are handled, or whether rate limits apply. The 'improveIfPossible' parameter suggests mutation capability, but this isn't explicitly stated in the description.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that conveys the core purpose. The emoji adds visual distinction without being distracting. However, the parenthetical '(Dynamic Rules Support + Improvement)' could be integrated more smoothly, and the description lacks any structural separation of key concepts.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter tool with no annotations and no output schema, the description is inadequate. It doesn't explain what the validation output looks like, what 'quality standards and best practices' entail, how 'improvement' manifests, or the consequences of validation. The agent must rely entirely on parameter names and schema descriptions to understand this tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no parameter-specific information beyond what's in the schema. It mentions 'Dynamic Rules Support', which loosely relates to 'rulesFilePath' and 'checkpointsFilePath', but doesn't explain their purpose or format. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('validate') and the target ('test case'), with additional context about quality standards and best practices. It distinguishes itself from sibling tools by mentioning 'Dynamic Rules Support + Improvement', which suggests a specific validation approach not present in tools like 'improve_test_case' or 'get_test_case_by_key'. However, it doesn't explicitly differentiate itself from every sibling tool that involves test case analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when validation is needed, what triggers it, or how it differs from similar tools like 'improve_test_case' or 'get_enhanced_test_coverage_with_rules'. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/maksimsarychau/mcp-zebrunner'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.