testcase.lint
Analyzes test cases to calculate quality scores and provide improvement recommendations for better standardization.
Instructions
Analyzes a test case and returns a quality score and improvement suggestions.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| testcase | Yes | Test case to analyze | |
| include_improvement_plan | No | Whether to include a prioritized improvement plan | true |
| strict_mode | No | Apply stricter validation rules | false |
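
For illustration, here is a minimal sketch of an arguments payload matching this schema. The field names inside `testcase` (title, preconditions, steps, priority) are assumptions about the QA-MCP test case model, not the authoritative structure.

```python
# Hypothetical testcase.lint arguments; the inner testcase fields are
# illustrative assumptions and may not match the actual TestCase model.
arguments = {
    "testcase": {
        "title": "Login with valid credentials",
        "preconditions": ["A registered user account exists"],
        "steps": [
            {"action": "Open the login page", "expected": "Login form is displayed"},
            {"action": "Submit valid credentials", "expected": "Dashboard loads"},
        ],
        "priority": "high",
    },
    "include_improvement_plan": True,  # optional, defaults to true
    "strict_mode": False,              # optional, defaults to false
}
```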
Implementation Reference
- src/qa_mcp/tools/lint.py:12-97 (handler): Core handler function that executes the linting logic for a single test case. It parses the input, runs the LintEngine, computes the score, identifies issues, and generates improvement suggestions. A usage sketch for this handler follows the reference list.

  ```python
  def lint_testcase(
      testcase: dict,
      include_improvement_plan: bool = True,
      strict_mode: bool = False,
  ) -> dict:
      """
      Lint a test case and return quality analysis.

      Args:
          testcase: Test case dictionary to analyze
          include_improvement_plan: Whether to include prioritized improvement plan
          strict_mode: If True, applies stricter validation rules

      Returns:
          Dictionary containing:
          - score: Quality score (0-100)
          - grade: Letter grade (A-F)
          - passed: Whether it meets minimum threshold
          - issues: List of found issues
          - suggestions: General improvement suggestions
          - improvement_plan: Prioritized actions (if requested)
      """
      # Initialize engine
      standard = TestCaseStandard.get_default()
      if strict_mode:
          standard.minimum_score = 75  # Higher threshold in strict mode

      engine = LintEngine(standard)

      # Parse test case
      try:
          tc = TestCase(**testcase)
      except Exception as e:
          return {
              "score": 0,
              "grade": "F",
              "passed": False,
              "issues": [
                  {
                      "severity": "error",
                      "field": "structure",
                      "rule": "valid_structure",
                      "message": f"Test case yapısı geçersiz: {str(e)}",
                      "suggestion": "Test case'in gerekli alanları içerdiğinden emin olun",
                  }
              ],
              "suggestions": ["Test case yapısını QA-MCP standardına göre düzeltin"],
              "improvement_plan": [],
              "error": str(e),
          }

      # Run lint
      result: LintResult = engine.lint(tc)

      # Build response
      response = {
          "score": result.score,
          "grade": result.grade,
          "passed": result.passed,
          "issues": [
              {
                  "severity": issue.severity.value,
                  "field": issue.field,
                  "rule": issue.rule,
                  "message": issue.message,
                  "suggestion": issue.suggestion,
              }
              for issue in result.issues
          ],
          "suggestions": result.suggestions,
      }

      # Add improvement plan if requested
      if include_improvement_plan:
          response["improvement_plan"] = engine.get_improvement_plan(result)

      # Add summary statistics
      response["summary"] = {
          "total_issues": len(result.issues),
          "errors": sum(1 for i in result.issues if i.severity.value == "error"),
          "warnings": sum(1 for i in result.issues if i.severity.value == "warning"),
          "info": sum(1 for i in result.issues if i.severity.value == "info"),
          "minimum_score": standard.minimum_score,
      }

      return response
  ```
- src/qa_mcp/server.py:124-145 (registration): Registers the 'testcase.lint' tool with the MCP server in the list_tools() function, specifying its name, description, and input schema.

  ```python
  Tool(
      name="testcase.lint",
      description="Test case'i analiz eder, kalite skoru ve iyileştirme önerileri döner",
      inputSchema={
          "type": "object",
          "properties": {
              "testcase": {
                  "type": "object",
                  "description": "Analiz edilecek test case",
              },
              "include_improvement_plan": {
                  "type": "boolean",
                  "description": "Öncelikli iyileştirme planı dahil mi (default: true)",
              },
              "strict_mode": {
                  "type": "boolean",
                  "description": "Daha katı kurallar uygula (default: false)",
              },
          },
          "required": ["testcase"],
      },
  ),
  ```
- src/qa_mcp/server.py:127-144 (schema): JSON Schema defining the input parameters for the 'testcase.lint' tool: a required testcase object plus optional flags for the improvement plan and strict mode.

  ```python
  inputSchema={
      "type": "object",
      "properties": {
          "testcase": {
              "type": "object",
              "description": "Analiz edilecek test case",
          },
          "include_improvement_plan": {
              "type": "boolean",
              "description": "Öncelikli iyileştirme planı dahil mi (default: true)",
          },
          "strict_mode": {
              "type": "boolean",
              "description": "Daha katı kurallar uygula (default: false)",
          },
      },
      "required": ["testcase"],
  },
  ```
- src/qa_mcp/server.py:323-329 (handler): Dispatch branch in the server's call_tool() function that invokes the lint_testcase implementation for 'testcase.lint' requests and writes an audit log entry with the resulting score.

  ```python
  elif name == "testcase.lint":
      result = lint_testcase(
          testcase=arguments["testcase"],
          include_improvement_plan=arguments.get("include_improvement_plan", True),
          strict_mode=arguments.get("strict_mode", False),
      )
      audit_log(name, arguments, f"Lint score: {result.get('score', 0)}")
  ```
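
A minimal usage sketch for the handler above, assuming lint_testcase can be imported from qa_mcp.tools.lint (the import path is inferred from the file location) and that the illustrative testcase fields are accepted by the TestCase model:

```python
from qa_mcp.tools.lint import lint_testcase  # assumed import path

# Illustrative test case; if its structure is invalid, the handler returns
# score 0 with a "valid_structure" error instead of raising.
report = lint_testcase(
    testcase={"title": "Login with valid credentials"},
    include_improvement_plan=True,
    strict_mode=True,  # raises the passing threshold to 75
)

print(report["score"], report["grade"], "passed" if report["passed"] else "failed")
for issue in report["issues"]:
    print(f"[{issue['severity']}] {issue['field']} ({issue['rule']}): {issue['message']}")
print(report.get("summary", {}))  # summary is only present when parsing succeeds
```

As an end-to-end sketch, the registered tool can also be called through the MCP Python client SDK; the server launch command below is an assumption and depends on how the qa_mcp server is packaged and started:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command; adjust to however the qa_mcp server is started.
    params = StdioServerParameters(command="python", args=["-m", "qa_mcp.server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "testcase.lint",
                arguments={"testcase": {"title": "Login with valid credentials"}},
            )
            print(result.content)  # lint report as returned by the server


asyncio.run(main())
```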