get_job_log
Retrieve the trace output of a pipeline job by its job ID, to debug failed tests and analyze CI/CD failures.
Instructions
Get the trace/log output for a specific pipeline job. Perfect for debugging failed tests and understanding CI/CD failures.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | ID of the pipeline job (obtained from get_merge_request_pipeline) | (none) |
Input Schema (JSON Schema)
```json
{
  "type": "object",
  "properties": {
    "job_id": {
      "type": "integer",
      "minimum": 1,
      "description": "ID of the pipeline job (obtained from get_merge_request_pipeline)"
    }
  },
  "required": [
    "job_id"
  ]
}
```
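A minimal sketch of how arguments could be validated against this schema without a JSON Schema library (the function name `validate_args` is hypothetical and not part of the tool's code; the checks mirror the schema's `required`, `integer`, `minimum: 1`, and `additionalProperties: false` constraints):

```python
def validate_args(args: dict) -> int:
    """Validate a get_job_log argument dict against the published schema."""
    extra = set(args) - {"job_id"}
    if extra:
        raise ValueError(f"unexpected properties: {extra}")  # additionalProperties: false
    if "job_id" not in args:
        raise ValueError("job_id is required")
    job_id = args["job_id"]
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(job_id, int) or isinstance(job_id, bool):
        raise ValueError("job_id must be an integer")
    if job_id < 1:
        raise ValueError("job_id must be >= 1")  # minimum: 1
    return job_id

print(validate_args({"job_id": 12345}))  # → 12345
```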
Implementation Reference
- `tools/get_job_log.py:8-55` (handler) — The core handler function implementing the get_job_log tool. It extracts `job_id` from `args`, fetches the job trace using `gitlab_api.get_job_trace`, handles errors, formats the log output with size info and truncation if necessary, and returns formatted `TextContent`.

````python
async def get_job_log(gitlab_url, project_id, access_token, args):
    """Get the trace/log output for a specific pipeline job"""
    logging.info(f"get_job_log called with args: {args}")

    job_id = args["job_id"]

    try:
        status, log_data, error = await get_job_trace(
            gitlab_url, project_id, access_token, job_id
        )
    except Exception as e:
        logging.error(f"Error fetching job log: {e}")
        raise Exception(f"Error fetching job log: {e}")

    if status != 200:
        logging.error(f"Error fetching job log: {status} - {error}")
        raise Exception(f"Error fetching job log: {status} - {error}")

    if not log_data or len(log_data.strip()) == 0:
        result = f"# 📋 Job Log (Job ID: {job_id})\n\n"
        result += "ℹ️ No log output available for this job.\n\n"
        result += "This could mean:\n"
        result += "• The job hasn't started yet\n"
        result += "• The job was skipped\n"
        result += "• The log has been archived or deleted\n"
        return [TextContent(type="text", text=result)]

    # Format the output
    result = f"# 📋 Job Log (Job ID: {job_id})\n\n"

    # Add log size info
    log_size_kb = len(log_data) / 1024
    result += f"**📊 Log Size**: {log_size_kb:.2f} KB\n"
    result += f"**📝 Lines**: {log_data.count(chr(10)) + 1}\n\n"

    # Check if we need to truncate
    max_chars = 15000  # Keep logs reasonable for context
    if len(log_data) > max_chars:
        result += "## 📄 Job Output (Last 15,000 characters)\n\n"
        result += "```\n"
        result += log_data[-max_chars:]
        result += "\n```\n\n"
        result += f"*⚠️ Note: Log truncated from {len(log_data):,} to "
        result += f"{max_chars:,} characters (showing last portion)*\n"
    else:
        result += "## 📄 Job Output\n\n"
        result += "```\n"
        result += log_data
        result += "\n```\n"

    return [TextContent(type="text", text=result)]
````
- `main.py:163-182` (schema) — The `Tool` schema definition in `list_tools()`, including name, description, and `inputSchema` requiring the `job_id` integer.

```python
Tool(
    name="get_job_log",
    description=(
        "Get the trace/log output for a specific pipeline "
        "job. Perfect for debugging failed tests and "
        "understanding CI/CD failures."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "job_id": {
                "type": "integer",
                "minimum": 1,
                "description": (
                    "ID of the pipeline job (obtained from "
                    "get_merge_request_pipeline)"
                ),
            }
        },
        "required": ["job_id"],
        "additionalProperties": False,
    },
),
```
- `main.py:323-326` (registration) — Registration in the `call_tool` dispatcher that routes calls to `get_job_log` to the handler function, injecting config values.

```python
elif name == "get_job_log":
    return await get_job_log(
        self.config["gitlab_url"],
        self.config["project_id"],
        self.config["access_token"],
        arguments,
    )
```
- `tools/__init__.py:10` (registration) — Import of the get_job_log handler in the tools package `__init__`.

```python
from .get_job_log import get_job_log
```

- `tools/__init__.py:26` (registration) — Inclusion of get_job_log in the tools `__all__` export list.

```python
"get_job_log",
```