Skip to main content
Glama

get_pipeline_test_summary

Retrieve a lightweight test summary for GitLab merge requests, showing pass/fail counts per test suite to quickly check pipeline status.

Instructions

Get test summary for a merge request - a lightweight overview showing pass/fail counts per test suite. Faster than full test report. Great for quick status checks.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
merge_request_iidYesInternal ID of the merge request

Implementation Reference

  • Core handler function implementing the tool logic: retrieves MR pipeline, fetches test summary via GitLab API helpers, handles errors, and returns formatted Markdown text content with test stats and suite details.
    async def get_pipeline_test_summary(gitlab_url, project_id, access_token, args):
        """Get the test summary for a merge request's latest pipeline"""
        logging.info(f"get_pipeline_test_summary called with args: {args}")
        mr_iid = args["merge_request_iid"]
    
        # First, get the latest pipeline for this MR
        try:
            pipeline_status, pipeline_data, pipeline_error = await get_merge_request_pipeline(
                gitlab_url, project_id, access_token, mr_iid
            )
        except Exception as e:
            logging.error(f"Error fetching pipeline: {e}")
            raise Exception(f"Error fetching pipeline for MR: {e}")
    
        if pipeline_status != 200 or not pipeline_data:
            result = f"# 📊 Test Summary for Merge Request !{mr_iid}\n\n"
            result += "â„šī¸ No pipeline found for this merge request.\n\n"
            result += "Cannot fetch test summary without a pipeline.\n"
            return [TextContent(type="text", text=result)]
    
        pipeline_id = pipeline_data.get("id")
        logging.info(f"Fetching test summary for pipeline {pipeline_id}")
    
        # Now get the test summary for this pipeline
        try:
            status, summary_data, error = await get_pipeline_test_report_summary(
                gitlab_url, project_id, access_token, pipeline_id
            )
        except Exception as e:
            logging.error(f"Error fetching test summary: {e}")
            raise Exception(f"Error fetching test summary: {e}")
    
        if status != 200:
            logging.error(f"Error fetching test summary: {status} - {error}")
            if status == 404:
                result = f"# 📊 Test Summary for Merge Request !{mr_iid}\n\n"
                result += "â„šī¸ No test summary available for this pipeline.\n\n"
                result += "This could mean:\n"
                result += "â€ĸ No CI/CD pipeline has run tests\n"
                result += "â€ĸ Tests don't upload JUnit XML or similar reports\n"
                result += "â€ĸ The pipeline is configured but no test "
                result += "artifacts were generated\n\n"
                result += "**💡 Tip:** To generate test reports, your CI jobs "
                result += "need to:\n"
                result += "1. Run tests that output JUnit XML format\n"
                result += "2. Use `artifacts:reports:junit` in .gitlab-ci.yml\n"
                return [TextContent(type="text", text=result)]
            raise Exception(f"Error fetching test summary: {status} - {error}")
    
        # Format the test summary
        result = f"# 📊 Test Summary for Merge Request !{mr_iid}\n\n"
        result += f"**Pipeline**: #{pipeline_id}"
        if pipeline_data.get("web_url"):
            result += f" - [View Pipeline]({pipeline_data['web_url']})\n\n"
        else:
            result += "\n\n"
    
        # Get summary data
        total_time = summary_data.get("total", {}).get("time", 0)
        total_count = summary_data.get("total", {}).get("count", 0)
        success_count = summary_data.get("total", {}).get("success", 0)
        failed_count = summary_data.get("total", {}).get("failed", 0)
        skipped_count = summary_data.get("total", {}).get("skipped", 0)
        error_count = summary_data.get("total", {}).get("error", 0)
    
        # Summary
        result += "## 📋 Summary\n\n"
        result += f"**Total Tests**: {total_count}\n"
        result += f"**✅ Passed**: {success_count}\n"
        result += f"**❌ Failed**: {failed_count}\n"
        result += f"**âš ī¸ Errors**: {error_count}\n"
        result += f"**â­ī¸ Skipped**: {skipped_count}\n"
        result += f"**âąī¸ Total Time**: {total_time:.2f}s\n\n"
    
        if total_count == 0:
            result += "â„šī¸ No tests were found in the test summary.\n"
            return [TextContent(type="text", text=result)]
    
        # Pass rate
        if total_count > 0:
            pass_rate = (success_count / total_count) * 100
            if pass_rate == 100:
                result += f"**🎉 Pass Rate**: {pass_rate:.1f}% - "
                result += "All tests passed!\n\n"
            elif pass_rate >= 80:
                result += f"**✅ Pass Rate**: {pass_rate:.1f}%\n\n"
            elif pass_rate >= 50:
                result += f"**âš ī¸ Pass Rate**: {pass_rate:.1f}%\n\n"
            else:
                result += f"**❌ Pass Rate**: {pass_rate:.1f}%\n\n"
    
        # Test suites breakdown
        test_suites = summary_data.get("test_suites", [])
        if test_suites:
            result += "## đŸ“Ļ Test Suites\n\n"
            for suite in test_suites:
                suite_name = suite.get("name", "Unknown Suite")
                suite_total = suite.get("total_count", 0)
                suite_success = suite.get("success_count", 0)
                suite_failed = suite.get("failed_count", 0)
                suite_skipped = suite.get("skipped_count", 0)
                suite_error = suite.get("error_count", 0)
                suite_time = suite.get("total_time", 0)
    
                # Determine status icon
                if suite_failed == 0 and suite_error == 0:
                    status_icon = "✅"
                elif suite_failed > 0 or suite_error > 0:
                    status_icon = "❌"
                else:
                    status_icon = "âšĒ"
    
                result += f"### {status_icon} {suite_name}\n\n"
                result += f"- **Total**: {suite_total} tests\n"
                result += f"- **✅ Passed**: {suite_success}\n"
    
                if suite_failed > 0:
                    result += f"- **❌ Failed**: {suite_failed}\n"
    
                if suite_error > 0:
                    result += f"- **âš ī¸ Errors**: {suite_error}\n"
    
                if suite_skipped > 0:
                    result += f"- **â­ī¸ Skipped**: {suite_skipped}\n"
    
                result += f"- **âąī¸ Duration**: {suite_time:.2f}s\n\n"
    
        # Add helpful tips if there are failures
        if failed_count > 0 or error_count > 0:
            result += "## 💡 Next Steps\n\n"
            result += "1. Use `get_merge_request_test_report` to see "
            result += "detailed error messages\n"
            result += "2. Check specific failed test names and stack traces\n"
            result += "3. Use `get_job_log` to see full CI output if needed\n"
    
        return [TextContent(type="text", text=result)]
  • Tool schema definition including name, description, and inputSchema requiring 'merge_request_iid' as integer.
    Tool(
        name="get_pipeline_test_summary",
        description=(
            "Get test summary for a merge request - a "
            "lightweight overview showing pass/fail counts "
            "per test suite. Faster than full test report. "
            "Great for quick status checks."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "merge_request_iid": {
                    "type": "integer",
                    "minimum": 1,
                    "description": ("Internal ID of the merge request"),
                }
            },
            "required": ["merge_request_iid"],
            "additionalProperties": False,
        },
  • main.py:320-323 (registration)
    Dispatch/registration in the call_tool handler that routes requests for this tool name to the handler function, passing config and arguments.
    elif name == "get_pipeline_test_summary":
        return await get_pipeline_test_summary(
            self.config["gitlab_url"], self.config["project_id"], self.config["access_token"], arguments
        )
  • Import and re-export of the handler function in the tools package __init__ for easy access.
    from .get_pipeline_test_summary import get_pipeline_test_summary
    from .list_merge_requests import list_merge_requests
    from .reply_to_review_comment import create_review_comment, reply_to_review_comment, resolve_review_discussion
    
    __all__ = [
        "list_merge_requests",
        "get_merge_request_reviews",
        "get_merge_request_details",
        "get_merge_request_pipeline",
        "get_merge_request_test_report",
        "get_pipeline_test_summary",
  • main.py:23-23 (registration)
    Import of the tool handler from tools package into main.py.
    get_pipeline_test_summary,
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a read operation ('Get'), provides a 'lightweight overview' with 'pass/fail counts per test suite', and is performance-optimized ('Faster than full test report'). It doesn't mention rate limits or authentication needs, but covers core functionality adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: three sentences that each earn their place by defining purpose, differentiating from alternatives, and providing usage context with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with one parameter and no output schema, the description is nearly complete: it explains what the tool returns ('pass/fail counts per test suite') and its performance characteristics. It could mention the return format more explicitly, but given the low complexity, it's sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single parameter. The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get test summary') and resource ('for a merge request'), distinguishing it from siblings like 'get_merge_request_test_report' by emphasizing it's a 'lightweight overview' and 'faster than full test report'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly provides usage guidance by stating when to use this tool ('Great for quick status checks') and when to use an alternative ('Faster than full test report'), clearly differentiating it from 'get_merge_request_test_report' without needing to name it directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/amirsina-mandegari/gitlab-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server