get_merge_request_test_report

Retrieve structured test reports for GitLab merge requests to identify test failures, error messages, and stack traces for debugging.

Instructions

Get structured test report for a merge request with specific test failures, error messages, and stack traces. Shows the same test data visible on the GitLab MR page. Best for debugging test failures.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| merge_request_iid | Yes | Internal ID of the merge request | |
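
For instance, the only entry the handler's `args` dict needs is the merge request's internal ID; the value below is illustrative:

```python
# Minimal arguments accepted by this tool; 42 is an illustrative MR IID.
args = {"merge_request_iid": 42}
```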

Implementation Reference

  • The primary handler function that implements the tool logic: it extracts the MR IID, fetches the latest pipeline, retrieves the test report data using helpers, and generates a comprehensive Markdown report covering the test summary, failed tests with errors, skipped tests, and suite overviews. A usage sketch appears after this reference list.
    async def get_merge_request_test_report(gitlab_url, project_id, access_token, args):
        """Get the test report for a merge request's latest pipeline"""
        logging.info(f"get_merge_request_test_report called with args: {args}")
        mr_iid = args["merge_request_iid"]
    
        # First, get the latest pipeline for this MR
        try:
            pipeline_status, pipeline_data, pipeline_error = await get_merge_request_pipeline(
                gitlab_url, project_id, access_token, mr_iid
            )
        except Exception as e:
            logging.error(f"Error fetching pipeline: {e}")
            raise Exception(f"Error fetching pipeline for MR: {e}")
    
        if pipeline_status != 200 or not pipeline_data:
            result = f"# 📊 Test Report for Merge Request !{mr_iid}\n\n"
            result += "â„šī¸ No pipeline found for this merge request.\n\n"
            result += "Cannot fetch test report without a pipeline.\n"
            return [TextContent(type="text", text=result)]
    
        pipeline_id = pipeline_data.get("id")
        logging.info(f"Fetching test report for pipeline {pipeline_id}")
    
        # Now get the test report for this pipeline
        try:
            status, report_data, error = await get_pipeline_test_report(gitlab_url, project_id, access_token, pipeline_id)
        except Exception as e:
            logging.error(f"Error fetching test report: {e}")
            raise Exception(f"Error fetching test report: {e}")
    
        if status != 200:
            logging.error(f"Error fetching test report: {status} - {error}")
            if status == 404:
                result = f"# 📊 Test Report for Merge Request !{mr_iid}\n\n"
                result += "â„šī¸ No test report available for this merge request.\n\n"
                result += "This could mean:\n"
                result += "â€ĸ No CI/CD pipeline has run tests\n"
                result += "â€ĸ Tests don't upload JUnit XML or similar reports\n"
                result += "â€ĸ The pipeline is configured but no test "
                result += "artifacts were generated\n\n"
                result += "**💡 Tip:** To generate test reports, your CI jobs "
                result += "need to:\n"
                result += "1. Run tests that output JUnit XML format\n"
                result += "2. Use `artifacts:reports:junit` in .gitlab-ci.yml\n"
                return [TextContent(type="text", text=result)]
            raise Exception(f"Error fetching test report: {status} - {error}")
    
        # Format the test report
        result = f"# 📊 Test Report for Merge Request !{mr_iid}\n\n"
        result += f"**Pipeline**: #{pipeline_id}"
        if pipeline_data.get("web_url"):
            result += f" - [View Pipeline]({pipeline_data['web_url']})\n\n"
        else:
            result += "\n\n"
    
        total_time = report_data.get("total_time", 0)
        total_count = report_data.get("total_count", 0)
        success_count = report_data.get("success_count", 0)
        failed_count = report_data.get("failed_count", 0)
        skipped_count = report_data.get("skipped_count", 0)
        error_count = report_data.get("error_count", 0)
    
        # Summary
        result += "## 📋 Summary\n\n"
        result += f"**Total Tests**: {total_count}\n"
        result += f"**✅ Passed**: {success_count}\n"
        result += f"**❌ Failed**: {failed_count}\n"
        result += f"**âš ī¸ Errors**: {error_count}\n"
        result += f"**â­ī¸ Skipped**: {skipped_count}\n"
        result += f"**âąī¸ Total Time**: {total_time:.2f}s\n\n"
    
        if total_count == 0:
            result += "â„šī¸ No tests were found in the test report.\n"
            return [TextContent(type="text", text=result)]
    
        # Pass rate
        if total_count > 0:
            pass_rate = (success_count / total_count) * 100
            if pass_rate == 100:
                result += f"**🎉 Pass Rate**: {pass_rate:.1f}% - "
                result += "All tests passed!\n\n"
            else:
                result += f"**📊 Pass Rate**: {pass_rate:.1f}%\n\n"
    
        # Show failed tests first
        test_suites = report_data.get("test_suites", [])
    
        if failed_count > 0 or error_count > 0:
            result += "## ❌ Failed Tests\n\n"
    
            for suite in test_suites:
                suite_name = suite.get("name", "Unknown Suite")
                test_cases = suite.get("test_cases", [])
    
                failed_cases = [tc for tc in test_cases if tc.get("status") in ["failed", "error"]]
    
                if failed_cases:
                    result += f"### đŸ“Ļ {suite_name}\n\n"
    
                    for test_case in failed_cases:
                        test_name = test_case.get("name", "Unknown Test")
                        status = test_case.get("status", "unknown")
                        execution_time = test_case.get("execution_time", 0)
    
                        status_icon = "❌" if status == "failed" else "âš ī¸"
                        result += f"#### {status_icon} {test_name}\n\n"
                        result += f"**Status**: {status}\n"
                        result += f"**Duration**: {execution_time:.3f}s\n"
    
                        if test_case.get("classname"):
                            result += f"**Class**: `{test_case['classname']}`\n"
    
                        if test_case.get("file"):
                            result += f"**File**: `{test_case['file']}`\n"
    
                        # System output (error message)
                        if test_case.get("system_output"):
                            result += "\n**Error Output:**\n\n"
                            result += "```\n"
                            # Limit error output to reasonable size
                            error_output = test_case["system_output"]
                            if len(error_output) > 2000:
                                result += error_output[:2000]
                                result += "\n... (truncated)\n"
                            else:
                                result += error_output
                            result += "\n```\n"
    
                        result += "\n"
    
        # Show skipped tests if any
        if skipped_count > 0:
            result += "## â­ī¸ Skipped Tests\n\n"
    
            for suite in test_suites:
                suite_name = suite.get("name", "Unknown Suite")
                test_cases = suite.get("test_cases", [])
    
                skipped_cases = [tc for tc in test_cases if tc.get("status") == "skipped"]
    
                if skipped_cases:
                    result += f"### đŸ“Ļ {suite_name}\n\n"
                    for test_case in skipped_cases:
                        test_name = test_case.get("name", "Unknown Test")
                        result += f"- â­ī¸ {test_name}"
                        if test_case.get("classname"):
                            result += f" (`{test_case['classname']}`)"
                        result += "\n"
                    result += "\n"
    
        # Show test suites summary
        if len(test_suites) > 0:
            result += "## đŸ“Ļ Test Suites Overview\n\n"
            for suite in test_suites:
                suite_name = suite.get("name", "Unknown Suite")
                total = suite.get("total_count", 0)
                success = suite.get("success_count", 0)
                failed = suite.get("failed_count", 0)
                skipped = suite.get("skipped_count", 0)
                errors = suite.get("error_count", 0)
                suite_time = suite.get("total_time", 0)
    
                status_icon = "✅" if failed == 0 and errors == 0 else "❌"
                result += f"- {status_icon} **{suite_name}**: "
                result += f"{success}/{total} passed"
                if failed > 0:
                    result += f", {failed} failed"
                if errors > 0:
                    result += f", {errors} errors"
                if skipped > 0:
                    result += f", {skipped} skipped"
                result += f" ({suite_time:.2f}s)\n"
    
        # Add helpful tips if there are failures
        if failed_count > 0 or error_count > 0:
            result += "\n## 💡 Next Steps\n\n"
            result += "1. Review the error messages above\n"
            result += "2. Check the specific test files mentioned\n"
            result += "3. Use `get_job_log` to see full CI output if needed\n"
            result += "4. Run tests locally to reproduce the failures\n"
    
        return [TextContent(type="text", text=result)]
  • Defines the tool schema, including its name, description, and an input schema that requires 'merge_request_iid' as an integer.
    Tool(
        name="get_merge_request_test_report",
        description=(
            "Get structured test report for a merge request "
            "with specific test failures, error messages, and "
            "stack traces. Shows the same test data visible on "
            "the GitLab MR page. Best for debugging test failures."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "merge_request_iid": {
                    "type": "integer",
                    "minimum": 1,
                    "description": ("Internal ID of the merge request"),
                }
            },
            "required": ["merge_request_iid"],
            "additionalProperties": False,
        },
    ),
  • main.py:316-319 (registration)
    Registers the tool dispatch in the call_tool handler, mapping the tool name to the handler function call with config parameters.
    elif name == "get_merge_request_test_report":
        return await get_merge_request_test_report(
            self.config["gitlab_url"], self.config["project_id"], self.config["access_token"], arguments
        )
  • main.py:122-142 (registration)
    Tool object definition in list_tools() for MCP server registration, exposing the tool with its schema.
    Tool(
        name="get_merge_request_test_report",
        description=(
            "Get structured test report for a merge request "
            "with specific test failures, error messages, and "
            "stack traces. Shows the same test data visible on "
            "the GitLab MR page. Best for debugging test failures."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "merge_request_iid": {
                    "type": "integer",
                    "minimum": 1,
                    "description": ("Internal ID of the merge request"),
                }
            },
            "required": ["merge_request_iid"],
            "additionalProperties": False,
        },
    ),
  • Imports the helper functions from gitlab_api used to fetch the pipeline and test report data; a sketch of their assumed call shape follows this reference list.
    from gitlab_api import get_merge_request_pipeline, get_pipeline_test_report
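
The handler's parsing logic implies a contract for these helpers: each appears to be an async function returning an (HTTP status, data, error) tuple, with the test report payload shaped like GitLab's pipeline test report endpoint. The sketch below is inferred from the keys the handler reads, not taken from the helpers' source; all field values are illustrative.

```python
# Assumed contract of the gitlab_api helpers, inferred only from how the handler
# uses them: both are async and return an (http_status, data, error) tuple.

async def get_merge_request_pipeline(gitlab_url, project_id, access_token, mr_iid):
    """Hypothetical sketch: returns e.g. (200, {"id": 1234, "web_url": "..."}, None)."""
    ...

async def get_pipeline_test_report(gitlab_url, project_id, access_token, pipeline_id):
    """Hypothetical sketch: returns e.g. (200, example_report, None)."""
    ...

# Illustrative shape of the test report payload the handler consumes;
# these are exactly the keys read by the formatting code above.
example_report = {
    "total_time": 12.3,
    "total_count": 10,
    "success_count": 8,
    "failed_count": 1,
    "skipped_count": 1,
    "error_count": 0,
    "test_suites": [
        {
            "name": "pytest",
            "total_time": 12.3,
            "total_count": 10,
            "success_count": 8,
            "failed_count": 1,
            "skipped_count": 1,
            "error_count": 0,
            "test_cases": [
                {
                    "status": "failed",
                    "name": "test_addition",
                    "classname": "tests.test_math",
                    "file": "tests/test_math.py",
                    "execution_time": 0.012,
                    "system_output": "AssertionError: assert 2 == 3",
                },
            ],
        },
    ],
}
```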
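As noted above, here is a minimal sketch of invoking the handler directly. The GitLab URL, project ID, token, and MR IID are placeholders; the handler returns a list containing a single TextContent whose text is the Markdown report.

```python
# Minimal usage sketch; all configuration values are placeholders.
import asyncio

async def main():
    contents = await get_merge_request_test_report(
        gitlab_url="https://gitlab.example.com",
        project_id="group/project",
        access_token="<personal-access-token>",
        args={"merge_request_iid": 42},
    )
    print(contents[0].text)  # the Markdown test report built above

asyncio.run(main())
```
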
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool retrieves test data (implying read-only behavior) and specifies the content includes failures, errors, and stack traces. However, it lacks details on permissions, rate limits, or response format, which are important for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific details and usage guidance in just two sentences. Every sentence adds value: the first defines the tool's function and scope, and the second provides clear usage context without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations and no output schema, the description is moderately complete. It covers purpose and usage well but lacks details on behavioral aspects like authentication or response structure, which are important for debugging tools. It compensates somewhat with specific content details but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter (merge_request_iid) with its type and description. The description does not add any additional meaning or context about the parameter beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get structured test report') and resource ('for a merge request'), distinguishing it from siblings like get_merge_request_details or get_pipeline_test_summary by focusing on test failures, error messages, and stack traces. It explicitly mentions the data is 'the same test data visible on the GitLab MR page,' which adds specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Best for debugging test failures.' This clearly indicates its primary use case and distinguishes it from alternatives like get_pipeline_test_summary (which might show summaries) or get_merge_request_details (which covers general info).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/amirsina-mandegari/gitlab-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.