QA-MCP: Test Standardization & Orchestration Server

by Atakan-Emre

suite.coverage_report

Generates test coverage reports by analyzing test cases against requirements and modules to identify gaps in QA testing.

Instructions

Generates a coverage report for a test suite.

Input Schema

| Name         | Required | Description                           | Default |
| ------------ | -------- | ------------------------------------- | ------- |
| testcases    | Yes      | List of test cases                    | -       |
| requirements | No       | Requirement IDs to check coverage for | -       |
| modules      | No       | List of modules to check coverage for | -       |
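For illustration, a minimal `arguments` payload matching this schema might look like the following; the IDs, titles, and module names here are hypothetical, not taken from the server:

```python
# Hypothetical arguments for a suite.coverage_report call; field names follow
# the input schema above, the IDs and module names are made up for illustration.
arguments = {
    "testcases": [
        {
            "id": "TC-1",
            "title": "Login with valid credentials",
            "module": "auth",
            "requirements": ["REQ-101"],
        }
    ],
    "requirements": ["REQ-101", "REQ-102"],  # optional
    "modules": ["auth", "billing"],          # optional
}
```

Only `testcases` is required; the other two arrays simply enable the requirement and module coverage sections of the report.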

Implementation Reference

  • The core handler function that executes the suite.coverage_report tool logic. It parses test cases, computes coverage across requirements, modules, risk levels, and scenario types, identifies gaps, and generates recommendations.
    def coverage_report(
        testcases: list[dict],
        requirements: list[str] | None = None,
        modules: list[str] | None = None,
    ) -> dict:
        """
        Generate a coverage report for test cases.
    
        Args:
            testcases: List of test cases in QA-MCP standard format
            requirements: Optional list of requirement IDs to check coverage
            modules: Optional list of module names to check coverage
    
        Returns:
            Dictionary containing:
            - requirement_coverage: Coverage by requirements
            - module_coverage: Coverage by modules
            - risk_coverage: Coverage by risk levels
            - gaps: Identified coverage gaps
            - recommendations: Suggestions for improving coverage
        """
        # Parse test cases
        parsed_cases = []
        for tc_dict in testcases:
            try:
                tc = TestCase(**tc_dict)
                parsed_cases.append(tc)
            except Exception:
                # Skip test cases that fail validation against the TestCase model
                continue
    
        # Requirement coverage
        req_coverage = {}
        all_covered_reqs = set()
    
        for tc in parsed_cases:
            for req in tc.requirements:
                all_covered_reqs.add(req)
                if req not in req_coverage:
                    req_coverage[req] = []
                req_coverage[req].append(tc.id or tc.title)
    
        # Check against provided requirements
        req_analysis = None
        if requirements:
            covered = set(requirements) & all_covered_reqs
            uncovered = set(requirements) - all_covered_reqs
            req_analysis = {
                "total_requirements": len(requirements),
                "covered": len(covered),
                "uncovered": len(uncovered),
                "coverage_percent": round(len(covered) / len(requirements) * 100, 1),
                "uncovered_list": list(uncovered),
                "covered_list": list(covered),
            }
    
        # Module coverage
        all_tc_modules = set(tc.module for tc in parsed_cases if tc.module)
        module_test_count = {}
        for tc in parsed_cases:
            if tc.module:
                module_test_count[tc.module] = module_test_count.get(tc.module, 0) + 1
    
        module_analysis = None
        if modules:
            covered = set(modules) & all_tc_modules
            uncovered = set(modules) - all_tc_modules
            module_analysis = {
                "total_modules": len(modules),
                "covered": len(covered),
                "uncovered": len(uncovered),
                "coverage_percent": round(len(covered) / len(modules) * 100, 1),
                "uncovered_list": list(uncovered),
                "test_count_per_module": {m: module_test_count.get(m, 0) for m in modules},
            }
    
        # Risk coverage analysis
        risk_distribution = {
            "critical": 0,
            "high": 0,
            "medium": 0,
            "low": 0,
        }
        for tc in parsed_cases:
            if tc.risk_level:
                risk_distribution[tc.risk_level.value] += 1
    
        # Scenario type analysis
        scenario_distribution = {
            "positive": 0,
            "negative": 0,
            "boundary": 0,
            "edge_case": 0,
            "error_handling": 0,
        }
        for tc in parsed_cases:
            if tc.scenario_type:
                scenario_distribution[tc.scenario_type.value] = (
                    scenario_distribution.get(tc.scenario_type.value, 0) + 1
                )
    
        # Identify gaps
        gaps = []
    
        if scenario_distribution["negative"] < scenario_distribution["positive"] * 0.3:
            gaps.append(
                {
                    "type": "scenario_balance",
                    "description": "Insufficient negative scenarios",
                    "current": scenario_distribution["negative"],
                    "recommended": int(scenario_distribution["positive"] * 0.3),
                }
            )
    
        if risk_distribution["critical"] == 0 and risk_distribution["high"] == 0:
            gaps.append(
                {
                    "type": "risk_coverage",
                    "description": "No critical or high-risk tests",
                    "recommendation": "Add high-priority tests for critical business flows",
                }
            )
    
        if req_analysis and req_analysis["uncovered"]:
            gaps.append(
                {
                    "type": "requirement_coverage",
                    "description": f"{len(req_analysis['uncovered_list'])} requirements not covered",
                    "uncovered": req_analysis["uncovered_list"][:5],  # First 5
                }
            )
    
        # Recommendations
        recommendations = []
    
        total_tests = len(parsed_cases)
        if total_tests < 10:
            recommendations.append(
                "Too few tests. Create additional test cases for more comprehensive coverage."
            )
    
        if scenario_distribution["boundary"] == 0:
            recommendations.append(
                "No boundary tests. Add test cases using boundary value analysis."
            )
    
        if not any(tc.test_data for tc in parsed_cases):
            recommendations.append(
                "No test data defined. Apply a data-driven testing approach."
            )
    
        return {
            "total_testcases": total_tests,
            "requirement_coverage": req_analysis,
            "requirement_mapping": req_coverage,
            "module_coverage": module_analysis,
            "module_test_count": module_test_count,
            "risk_distribution": risk_distribution,
            "scenario_distribution": scenario_distribution,
            "gaps": gaps,
            "recommendations": recommendations,
        }
  • Registration of the 'suite.coverage_report' tool in the MCP server's list_tools() function, including name, description, and input schema.
    Tool(
        name="suite.coverage_report",
        description="Generates a coverage report for a test suite",
        inputSchema={
            "type": "object",
            "properties": {
                "testcases": {
                    "type": "array",
                    "items": {"type": "object"},
                    "description": "List of test cases",
                },
                "requirements": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Requirement IDs to check",
                },
                "modules": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "List of modules to check",
                },
            },
            "required": ["testcases"],
        },
    ),
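The schema's `required` list mandates only `testcases`. A stdlib-only sketch of the checks this schema implies might look like the following; this is illustrative, not the MCP SDK's actual validation:

```python
# Hypothetical helper mirroring the input schema's constraints:
# testcases is a required array, requirements/modules are optional string arrays.
def validate_arguments(arguments: dict) -> list[str]:
    errors = []
    if "testcases" not in arguments:
        errors.append("missing required field: testcases")
    elif not isinstance(arguments["testcases"], list):
        errors.append("testcases must be an array")
    for key in ("requirements", "modules"):
        if key in arguments and not all(
            isinstance(item, str) for item in arguments[key]
        ):
            errors.append(f"{key} must be an array of strings")
    return errors
```

An empty argument dict would fail with the missing-`testcases` error, while `{"testcases": []}` passes because the optional arrays may be omitted entirely.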
  • Dispatch handler in the MCP server's call_tool() function that invokes the coverage_report implementation with parsed arguments.
    elif name == "suite.coverage_report":
        result = coverage_report(
            testcases=arguments["testcases"],
            requirements=arguments.get("requirements"),
            modules=arguments.get("modules"),
        )
        audit_log(
            name, arguments, f"Coverage report for {result.get('total_testcases', 0)} tests"
        )
