get_job_log

Retrieve pipeline job trace output to debug failed tests and analyze CI/CD failures in GitLab.

Instructions

Get the trace/log output for a specific pipeline job. Perfect for debugging failed tests and understanding CI/CD failures.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | ID of the pipeline job (obtained from get_merge_request_pipeline) | (none) |
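
For illustration (the job ID below is made up), the arguments object for a call is minimal; in practice the ID comes from a prior get_merge_request_pipeline call:

    args = {"job_id": 12345}  # hypothetical job ID taken from get_merge_request_pipeline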

Implementation Reference

  • The primary handler function for the 'get_job_log' tool. It fetches the job trace from GitLab via the get_job_trace helper, formats the log output as Markdown (including size info and truncation when necessary), and returns it as TextContent. A sketch of the helper appears after this list.
    async def get_job_log(gitlab_url, project_id, access_token, args):
        """Get the trace/log output for a specific pipeline job"""
        logging.info(f"get_job_log called with args: {args}")
        job_id = args["job_id"]
    
        try:
            status, log_data, error = await get_job_trace(gitlab_url, project_id, access_token, job_id)
        except Exception as e:
            logging.error(f"Error fetching job log: {e}")
            raise Exception(f"Error fetching job log: {e}")
    
        if status != 200:
            logging.error(f"Error fetching job log: {status} - {error}")
            raise Exception(f"Error fetching job log: {status} - {error}")
    
        if not log_data or len(log_data.strip()) == 0:
            result = f"# 📋 Job Log (Job ID: {job_id})\n\n"
            result += "â„šī¸ No log output available for this job.\n\n"
            result += "This could mean:\n"
            result += "â€ĸ The job hasn't started yet\n"
            result += "â€ĸ The job was skipped\n"
            result += "â€ĸ The log has been archived or deleted\n"
            return [TextContent(type="text", text=result)]
    
        # Format the output
        result = f"# 📋 Job Log (Job ID: {job_id})\n\n"
    
        # Add log size info
        log_size_kb = len(log_data) / 1024
        result += f"**📊 Log Size**: {log_size_kb:.2f} KB\n"
        result += f"**📄 Lines**: {log_data.count(chr(10)) + 1}\n\n"
    
        # Check if we need to truncate
        max_chars = 15000  # Keep logs reasonable for context
        if len(log_data) > max_chars:
            result += "## 📝 Job Output (Last 15,000 characters)\n\n"
            result += "```\n"
            result += log_data[-max_chars:]
            result += "\n```\n\n"
            result += f"*âš ī¸ Note: Log truncated from {len(log_data):,} to "
            result += f"{max_chars:,} characters (showing last portion)*\n"
        else:
            result += "## 📝 Job Output\n\n"
            result += "```\n"
            result += log_data
            result += "\n```\n"
    
        return [TextContent(type="text", text=result)]
  • Input schema definition for the get_job_log tool within the list_tools response. Requires a 'job_id' integer parameter; a validation example appears after this list.
    Tool(
        name="get_job_log",
        description=(
            "Get the trace/log output for a specific pipeline "
            "job. Perfect for debugging failed tests and "
            "understanding CI/CD failures."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "job_id": {
                    "type": "integer",
                    "minimum": 1,
                    "description": ("ID of the pipeline job (obtained from " "get_merge_request_pipeline)"),
                }
            },
            "required": ["job_id"],
            "additionalProperties": False,
        },
    ),
  • Tool dispatch logic in the call_tool method (main.py:324-327) that routes 'get_job_log' requests to the handler, passing config parameters.
    elif name == "get_job_log":
        return await get_job_log(
            self.config["gitlab_url"], self.config["project_id"], self.config["access_token"], arguments
        )
  • Re-export of the get_job_log handler from its module for convenient import in main.py.
    from .get_job_log import get_job_log
    from .get_merge_request_details import get_merge_request_details
    from .get_merge_request_pipeline import get_merge_request_pipeline
    from .get_merge_request_reviews import get_merge_request_reviews
    from .get_merge_request_test_report import get_merge_request_test_report
    from .get_pipeline_test_summary import get_pipeline_test_summary
    from .list_merge_requests import list_merge_requests
    from .reply_to_review_comment import create_review_comment, reply_to_review_comment, resolve_review_discussion
    
    __all__ = [
        "list_merge_requests",
        "get_merge_request_reviews",
        "get_merge_request_details",
        "get_merge_request_pipeline",
        "get_merge_request_test_report",
        "get_pipeline_test_summary",
        "get_job_log",
        "get_branch_merge_requests",
        "reply_to_review_comment",
        "create_review_comment",
        "resolve_review_discussion",
        "get_commit_discussions",
    ]
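
The get_job_trace helper referenced by the handler is not shown on this page. As a rough sketch only, assuming aiohttp and GitLab's GET /projects/:id/jobs/:job_id/trace endpoint, it could look like this (the server's actual helper may differ):

    # Hypothetical sketch, not the server's actual code. Assumes aiohttp and
    # GitLab's raw job trace endpoint.
    import aiohttp

    async def get_job_trace(gitlab_url, project_id, access_token, job_id):
        """Return (status, log_text, error) for a job's raw trace."""
        url = f"{gitlab_url}/api/v4/projects/{project_id}/jobs/{job_id}/trace"
        headers = {"PRIVATE-TOKEN": access_token}
        async with aiohttp.ClientSession() as session:
            async with session.get(url, headers=headers) as resp:
                body = await resp.text()
                if resp.status == 200:
                    return resp.status, body, None
                return resp.status, None, body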
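Because the input schema sets "minimum": 1 and "additionalProperties": false, malformed arguments can be rejected before the handler runs. A quick illustration with the jsonschema package (the server's own validation layer may differ):

    # Illustrative only; the MCP server may validate arguments differently.
    from jsonschema import ValidationError, validate

    schema = {
        "type": "object",
        "properties": {"job_id": {"type": "integer", "minimum": 1}},
        "required": ["job_id"],
        "additionalProperties": False,
    }

    validate({"job_id": 42}, schema)  # passes
    try:
        validate({"job_id": 0}, schema)  # raises: 0 is below the minimum of 1
    except ValidationError as e:
        print(e.message)
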
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It conveys the tool's purpose (retrieving logs for debugging), which implies a read-only operation, but it doesn't explicitly state whether authentication is required, whether rate limits apply, or what format the output takes. The description adds some context but leaves important behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with just two sentences that each earn their place. The first sentence states the core functionality, and the second provides valuable context about when to use it. There's no wasted language or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation with no output schema, the description provides adequate but minimal context. It explains what the tool does and when to use it, but doesn't address limitations, error conditions, or output format details that would help an agent invoke the tool correctly and interpret its results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the single parameter 'job_id' is well documented in the schema itself. The tool description adds no parameter information beyond the schema, so it meets the baseline expectation without providing extra value on parameter usage or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the trace/log output') and resource ('for a specific pipeline job'), distinguishing it from siblings like get_merge_request_test_report or get_pipeline_test_summary. It provides a concrete use case ('debugging failed tests and understanding CI/CD failures') that makes the purpose immediately understandable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Perfect for debugging failed tests and understanding CI/CD failures'), which helps the agent understand the appropriate scenarios. However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools, which would be needed for a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
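
As an illustration only, a revised description that addressed the gaps noted above (read-only behavior, output format, truncation, and alternatives) might read:

    # Hypothetical revision, not the server's actual description.
    description = (
        "Get the trace/log output for a specific pipeline job. Read-only; "
        "uses the server's configured GitLab access token. Output is "
        "Markdown; logs longer than 15,000 characters are truncated to the "
        "last portion. Use after get_merge_request_pipeline to debug failed "
        "jobs; for aggregated test results, prefer "
        "get_merge_request_test_report or get_pipeline_test_summary."
    )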

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/amirsina-mandegari/gitlab-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.