
QA-MCP: Test Standardization & Orchestration Server

by Atakan-Emre

suite.coverage_report

Generates test coverage reports by analyzing test cases against requirements and modules to identify gaps in QA testing.

Instructions

Generates a coverage report for a test suite ("Test suite için kapsam raporu oluşturur")

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| testcases | Yes | List of test cases | — |
| requirements | No | Requirement IDs to check coverage against | — |
| modules | No | List of modules to check coverage against | — |
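
Following this schema, a minimal arguments payload might look like the sketch below. The field names inside each test case beyond `id`, `title`, `module`, and `requirements` follow the QA-MCP standard format; the specific values here are illustrative, not taken from the server.

```python
import json

# Hypothetical invocation arguments for suite.coverage_report;
# only "testcases" is required, the other two keys are optional.
arguments = {
    "testcases": [
        {
            "id": "TC-1",
            "title": "Login succeeds with valid credentials",
            "module": "auth",
            "requirements": ["REQ-AUTH-1"],
        }
    ],
    "requirements": ["REQ-AUTH-1", "REQ-AUTH-2"],
    "modules": ["auth", "billing"],
}
print(json.dumps(arguments, indent=2))
```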

Implementation Reference

  • The core handler function that executes the suite.coverage_report tool logic. Parses test cases, computes coverage for requirements, modules, risks, scenarios, identifies gaps, and generates recommendations.
    def coverage_report(
        testcases: list[dict],
        requirements: list[str] | None = None,
        modules: list[str] | None = None,
    ) -> dict:
        """
        Generate a coverage report for test cases.
    
        Args:
            testcases: List of test cases in QA-MCP standard format
            requirements: Optional list of requirement IDs to check coverage
            modules: Optional list of module names to check coverage
    
        Returns:
            Dictionary containing:
            - requirement_coverage: Coverage by requirements
            - module_coverage: Coverage by modules
            - risk_coverage: Coverage by risk levels
            - gaps: Identified coverage gaps
            - recommendations: Suggestions for improving coverage
        """
        # Parse test cases
        parsed_cases = []
        for tc_dict in testcases:
            try:
                tc = TestCase(**tc_dict)
                parsed_cases.append(tc)
            except Exception:
                # Silently skip test cases that fail TestCase validation
                continue
    
        # Requirement coverage
        req_coverage = {}
        all_covered_reqs = set()
    
        for tc in parsed_cases:
            for req in tc.requirements:
                all_covered_reqs.add(req)
                if req not in req_coverage:
                    req_coverage[req] = []
                req_coverage[req].append(tc.id or tc.title)
    
        # Check against provided requirements
        req_analysis = None
        if requirements:
            covered = set(requirements) & all_covered_reqs
            uncovered = set(requirements) - all_covered_reqs
            req_analysis = {
                "total_requirements": len(requirements),
                "covered": len(covered),
                "uncovered": len(uncovered),
                "coverage_percent": round(len(covered) / len(requirements) * 100, 1)
                if requirements
                else 0,
                "uncovered_list": list(uncovered),
                "covered_list": list(covered),
            }
    
        # Module coverage
        all_tc_modules = set(tc.module for tc in parsed_cases if tc.module)
        module_test_count = {}
        for tc in parsed_cases:
            if tc.module:
                module_test_count[tc.module] = module_test_count.get(tc.module, 0) + 1
    
        module_analysis = None
        if modules:
            covered = set(modules) & all_tc_modules
            uncovered = set(modules) - all_tc_modules
            module_analysis = {
                "total_modules": len(modules),
                "covered": len(covered),
                "uncovered": len(uncovered),
                "coverage_percent": round(len(covered) / len(modules) * 100, 1) if modules else 0,
                "uncovered_list": list(uncovered),
                "test_count_per_module": {m: module_test_count.get(m, 0) for m in modules},
            }
    
        # Risk coverage analysis
        risk_distribution = {
            "critical": 0,
            "high": 0,
            "medium": 0,
            "low": 0,
        }
        for tc in parsed_cases:
            if tc.risk_level:
                # .get() guards against risk levels outside the four predefined keys
                risk_distribution[tc.risk_level.value] = (
                    risk_distribution.get(tc.risk_level.value, 0) + 1
                )
    
        # Scenario type analysis
        scenario_distribution = {
            "positive": 0,
            "negative": 0,
            "boundary": 0,
            "edge_case": 0,
            "error_handling": 0,
        }
        for tc in parsed_cases:
            if tc.scenario_type:
                scenario_distribution[tc.scenario_type.value] = (
                    scenario_distribution.get(tc.scenario_type.value, 0) + 1
                )
    
        # Identify gaps
        gaps = []
    
        if scenario_distribution["negative"] < scenario_distribution["positive"] * 0.3:
            gaps.append(
                {
                    "type": "scenario_balance",
                    "description": "Negatif senaryolar yetersiz",
                    "current": scenario_distribution["negative"],
                    "recommended": int(scenario_distribution["positive"] * 0.3),
                }
            )
    
        if risk_distribution["critical"] == 0 and risk_distribution["high"] == 0:
            gaps.append(
                {
                    "type": "risk_coverage",
                    "description": "Kritik/Yüksek riskli test yok",
                    "recommendation": "Kritik iş akışları için yüksek öncelikli test ekleyin",
                }
            )
    
        if req_analysis and req_analysis["uncovered"]:
            gaps.append(
                {
                    "type": "requirement_coverage",
                    "description": f"{len(req_analysis['uncovered_list'])} gereksinim karşılanmamış",
                    "uncovered": req_analysis["uncovered_list"][:5],  # First 5
                }
            )
    
        # Recommendations
        recommendations = []
    
        total_tests = len(parsed_cases)
        if total_tests < 10:
            recommendations.append(
                "Test sayısı az. Daha kapsamlı test coverage için ek test case'ler oluşturun."
            )
    
        if scenario_distribution["boundary"] == 0:
            recommendations.append(
                "Boundary test'ler yok. Sınır değer analizi ile test case'ler ekleyin."
            )
    
        if not any(tc.test_data for tc in parsed_cases):
            recommendations.append(
                "Test data tanımlı değil. Data-driven testing yaklaşımını uygulayın."
            )
    
        return {
            "total_testcases": total_tests,
            "requirement_coverage": req_analysis,
            "requirement_mapping": req_coverage,
            "module_coverage": module_analysis,
            "module_test_count": module_test_count,
            "risk_distribution": risk_distribution,
            "scenario_distribution": scenario_distribution,
            "gaps": gaps,
            "recommendations": recommendations,
        }
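
The requirement-coverage arithmetic above can be exercised in isolation. The sketch below assumes plain dicts rather than the server's TestCase model, so it is a standalone approximation, not the handler itself:

```python
def requirement_coverage(testcases: list[dict], requirements: list[str]) -> dict:
    # Mirror of the handler's requirement analysis, using plain dicts
    covered_reqs = {r for tc in testcases for r in tc.get("requirements", [])}
    covered = set(requirements) & covered_reqs
    uncovered = set(requirements) - covered_reqs
    return {
        "total_requirements": len(requirements),
        "covered": len(covered),
        "uncovered": len(uncovered),
        "coverage_percent": round(len(covered) / len(requirements) * 100, 1)
        if requirements
        else 0,
        "uncovered_list": sorted(uncovered),
    }

cases = [
    {"id": "TC-1", "requirements": ["REQ-1", "REQ-2"]},
    {"id": "TC-2", "requirements": ["REQ-2"]},
]
report = requirement_coverage(cases, ["REQ-1", "REQ-2", "REQ-3"])
print(report["coverage_percent"])  # 66.7
print(report["uncovered_list"])    # ['REQ-3']
```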
  • Registration of the 'suite.coverage_report' tool in the MCP server's list_tools() function, including name, description, and input schema.
    Tool(
        name="suite.coverage_report",
        description="Test suite için kapsam raporu oluşturur",
        inputSchema={
            "type": "object",
            "properties": {
                "testcases": {
                    "type": "array",
                    "items": {"type": "object"},
                    "description": "Test case listesi",
                },
                "requirements": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Kontrol edilecek gereksinim ID'leri",
                },
                "modules": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Kontrol edilecek modül listesi",
                },
            },
            "required": ["testcases"],
        },
    ),
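
The `required` and `type` constraints in this inputSchema can be checked by hand before dispatch. The helper below is a hypothetical sketch of that validation, not part of the server (MCP clients typically validate against the schema themselves):

```python
def validate_arguments(args: dict) -> list[str]:
    # Hand-rolled check mirroring the inputSchema's constraints
    errors = []
    if "testcases" not in args:
        errors.append("missing required field: testcases")
    elif not isinstance(args["testcases"], list):
        errors.append("testcases must be an array")
    for key in ("requirements", "modules"):
        value = args.get(key)
        if value is not None and (
            not isinstance(value, list)
            or not all(isinstance(item, str) for item in value)
        ):
            errors.append(f"{key} must be an array of strings")
    return errors

print(validate_arguments({}))                 # ['missing required field: testcases']
print(validate_arguments({"testcases": []}))  # []
```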
  • Dispatch handler in the MCP server's call_tool() function that invokes the coverage_report implementation with parsed arguments.
    elif name == "suite.coverage_report":
        result = coverage_report(
            testcases=arguments["testcases"],
            requirements=arguments.get("requirements"),
            modules=arguments.get("modules"),
        )
        audit_log(
            name, arguments, f"Coverage report for {result.get('total_testcases', 0)} tests"
        )
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'creates' a coverage report, implying a generation or output operation, but doesn't specify what the output looks like (e.g., format, structure), whether it's read-only or modifies data, or any performance considerations like rate limits. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Turkish that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple tool, though it could be more front-loaded with key details if expanded. There's no wasted language, making it concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and 3 parameters, the description is incomplete. It doesn't explain the output (e.g., report format), behavioral traits like side effects, or how to interpret results. For a tool that generates reports, more context is needed to guide effective use, especially without structured data to compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for all three parameters (testcases, requirements, modules). The description adds no additional meaning beyond the schema, such as explaining how parameters interact (e.g., how testcases relate to requirements/modules) or providing examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Test suite için kapsam raporu oluşturur' (Creates a coverage report for the test suite), which clearly identifies the verb ('oluşturur' - creates) and resource ('kapsam raporu' - coverage report). However, it doesn't distinguish this tool from its siblings like 'suite.compose' or 'testcase.generate', leaving the specific scope ambiguous. The purpose is understandable but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing test cases as input, or compare it to sibling tools like 'testcase.to_xray' for reporting. Without any context or exclusions, the agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
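
Taken together, the review points toward a fuller description. One hypothetical rewrite, addressing the flagged gaps (read-only nature, output shape, sibling-tool guidance) — the wording below is a suggestion, not the server's actual text:

```python
# Hypothetical replacement for the tool's one-line description;
# sibling tool names (suite.compose, testcase.to_xray) are taken
# from the review text above, not verified against the server.
IMPROVED_DESCRIPTION = (
    "Generates a read-only coverage report for a list of test cases: "
    "requirement and module coverage percentages, risk and scenario "
    "distributions, identified gaps, and recommendations. Modifies no "
    "data. Use after assembling a suite (e.g. with suite.compose); for "
    "exporting test cases, prefer a tool such as testcase.to_xray."
)
print(IMPROVED_DESCRIPTION)
```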
