
QA-MCP: Test Standardization & Orchestration Server

by Atakan-Emre

testcase.generate

Generates standardized test cases from feature descriptions and acceptance criteria to ensure comprehensive testing coverage.

Instructions

Generates standard test cases from a feature description and acceptance criteria. (Original tool description is in Turkish: "Feature açıklaması ve acceptance criteria'dan standart test case üretir".)

Input Schema

Name                 | Required | Description                          | Default
feature              | Yes      | Feature description                  | —
acceptance_criteria  | Yes      | List of acceptance criteria          | —
module               | No       | Module/component name (optional)     | —
risk_level           | No       | Risk level                           | medium
include_negative     | No       | Include negative scenarios           | true
include_boundary     | No       | Include boundary test suggestions    | true

Implementation Reference

  • Core handler function implementing the testcase.generate tool logic. Generates positive test cases from acceptance criteria, optionally negative scenarios and boundary suggestions, using helper functions.
    def generate_testcase(
        feature: str,
        acceptance_criteria: list[str],
        module: str | None = None,
        risk_level: str = "medium",
        include_negative: bool = True,
        include_boundary: bool = True,
        test_type: str = "Manual",
        author: str | None = None,
    ) -> dict:
        """
        Generate standardized test cases from feature description.
    
        Args:
            feature: Feature description
            acceptance_criteria: List of acceptance criteria
            module: Module/component name
            risk_level: Risk level (low, medium, high, critical)
            include_negative: Whether to include negative scenarios
            include_boundary: Whether to include boundary test suggestions
            test_type: Type of test (Manual, Automated, Generic)
            author: Test case author
    
        Returns:
            Dictionary containing:
            - testcases: List of generated test cases
            - suggestions: Additional suggestions
            - coverage_summary: What scenarios are covered
        """
        testcases = []
        suggestions = []
        coverage = {
            "positive_scenarios": 0,
            "negative_scenarios": 0,
            "boundary_tests": 0,
            "acceptance_criteria_covered": [],
        }
    
        # Map risk level
        risk = RiskLevel(risk_level.lower())
    
        # Determine priority based on risk
        priority_map = {
            RiskLevel.CRITICAL: Priority.P0,
            RiskLevel.HIGH: Priority.P1,
            RiskLevel.MEDIUM: Priority.P2,
            RiskLevel.LOW: Priority.P3,
        }
        priority = priority_map.get(risk, Priority.P2)
    
        # Generate positive test cases for each acceptance criterion
        for idx, criterion in enumerate(acceptance_criteria, 1):
            tc = _generate_positive_testcase(
                feature=feature,
                criterion=criterion,
                criterion_number=idx,
                module=module,
                risk=risk,
                priority=priority,
                test_type=test_type,
                author=author,
            )
            testcases.append(tc)
            coverage["positive_scenarios"] += 1
            coverage["acceptance_criteria_covered"].append(f"AC-{idx}")
    
        # Generate negative test cases if requested
        if include_negative:
            negative_cases = _generate_negative_testcases(
                feature=feature,
                acceptance_criteria=acceptance_criteria,
                module=module,
                risk=risk,
                author=author,
            )
            testcases.extend(negative_cases)
            coverage["negative_scenarios"] = len(negative_cases)
    
        # Generate boundary test suggestions
        if include_boundary:
            boundary_suggestions = _generate_boundary_suggestions(
                feature=feature,
                acceptance_criteria=acceptance_criteria,
            )
            if boundary_suggestions:
                suggestions.extend(boundary_suggestions)
                coverage["boundary_tests"] = len(boundary_suggestions)
    
        # Additional suggestions based on feature analysis
        suggestions.extend(_analyze_feature_suggestions(feature, acceptance_criteria))
    
        return {
            "testcases": [tc.model_dump() for tc in testcases],
            "suggestions": suggestions,
            "coverage_summary": coverage,
            "total_generated": len(testcases),
        }
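The handler above references `RiskLevel` and `Priority` enums whose definitions are not shown on this page. A plausible minimal sketch of those models, assuming string-valued enums matching the schema's `risk_level` values and a conventional P0–P3 priority scale (hypothetical definitions, not the server's actual code):

```python
from enum import Enum

# Hypothetical definitions of the RiskLevel and Priority enums referenced
# by generate_testcase; the server's actual model module is not shown here.
class RiskLevel(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

class Priority(str, Enum):
    P0 = "P0"
    P1 = "P1"
    P2 = "P2"
    P3 = "P3"

# Same risk-to-priority mapping the handler builds internally.
priority_map = {
    RiskLevel.CRITICAL: Priority.P0,
    RiskLevel.HIGH: Priority.P1,
    RiskLevel.MEDIUM: Priority.P2,
    RiskLevel.LOW: Priority.P3,
}

print(priority_map[RiskLevel("high")].value)  # → P1
```

Note that `RiskLevel(risk_level.lower())` in the handler will raise a `ValueError` for any value outside the enum, which is one reason the schema constrains `risk_level` with an `enum` list.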
  • Registration of the testcase.generate tool in list_tools(), including name, description, and input schema.
        name="testcase.generate",
        description="Feature açıklaması ve acceptance criteria'dan standart test case üretir",
        inputSchema={
            "type": "object",
            "properties": {
                "feature": {
                    "type": "string",
                    "description": "Feature açıklaması",
                },
                "acceptance_criteria": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Kabul kriterleri listesi",
                },
                "module": {
                    "type": "string",
                    "description": "Modül/bileşen adı (opsiyonel)",
                },
                "risk_level": {
                    "type": "string",
                    "enum": ["low", "medium", "high", "critical"],
                    "description": "Risk seviyesi (default: medium)",
                },
                "include_negative": {
                    "type": "boolean",
                    "description": "Negatif senaryolar dahil mi (default: true)",
                },
                "include_boundary": {
                    "type": "boolean",
                    "description": "Boundary test önerileri dahil mi (default: true)",
                },
            },
            "required": ["feature", "acceptance_criteria"],
        },
    ),
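The schema's `required` list and `risk_level` enum are the only hard constraints an agent must satisfy. A minimal stdlib sketch of how a caller could pre-validate arguments before invoking the tool (illustrative only; real MCP clients typically validate with a JSON Schema library):

```python
# Constraints copied from the tool's inputSchema.
REQUIRED = ["feature", "acceptance_criteria"]
RISK_LEVELS = {"low", "medium", "high", "critical"}

def validate_args(args: dict) -> list[str]:
    """Return a list of validation error messages (empty if valid)."""
    errors = [f"missing required field: {k}" for k in REQUIRED if k not in args]
    risk = args.get("risk_level")
    if risk is not None and risk not in RISK_LEVELS:
        errors.append(f"invalid risk_level: {risk!r}")
    return errors

print(validate_args({"feature": "Login"}))
# → ['missing required field: acceptance_criteria']
```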
  • Dispatcher in call_tool() that invokes the generate_testcase function with parsed arguments and handles audit logging.
    if name == "testcase.generate":
        result = generate_testcase(
            feature=arguments["feature"],
            acceptance_criteria=arguments["acceptance_criteria"],
            module=arguments.get("module"),
            risk_level=arguments.get("risk_level", "medium"),
            include_negative=arguments.get("include_negative", True),
            include_boundary=arguments.get("include_boundary", True),
        )
        audit_log(name, arguments, f"Generated {result.get('total_generated', 0)} test cases")
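The dispatcher resolves defaults for omitted optional arguments via `dict.get()`. A standalone sketch of that resolution, using made-up argument values for illustration:

```python
# Hypothetical agent-supplied arguments: only the required fields are set.
arguments = {
    "feature": "Password reset",
    "acceptance_criteria": ["A reset email is sent to a registered address"],
}

# Default resolution mirroring the dispatcher's .get() calls.
resolved = {
    "module": arguments.get("module"),                    # None when omitted
    "risk_level": arguments.get("risk_level", "medium"),  # schema default
    "include_negative": arguments.get("include_negative", True),
    "include_boundary": arguments.get("include_boundary", True),
}
print(resolved)
# → {'module': None, 'risk_level': 'medium', 'include_negative': True, 'include_boundary': True}
```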
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but lacks critical details: whether it's a read-only or mutation operation, what format the generated test cases take, if there are rate limits, or any error conditions. This leaves significant gaps for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Turkish that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what 'standard test cases' means, the output format, or behavioral traits like whether it's idempotent or has side effects. Given the complexity and lack of structured data, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: every parameter is documented in the input schema. The tool description adds no parameter semantics beyond implying the inputs are a "feature description and acceptance criteria", which aligns with the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'generates standard test cases from feature description and acceptance criteria.' It specifies the verb ('generates') and resource ('test cases'), but doesn't differentiate from sibling tools like testcase.lint or testcase.normalize, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like testcase.lint or suite.compose. It mentions the input sources (feature description and acceptance criteria) but offers no context about appropriate scenarios or exclusions, leaving usage decisions unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
