
gitlab_get_job_log

Read-only · Idempotent

Fetch GitLab CI job logs, showing last lines for failure diagnosis or filtering by regex to locate errors with surrounding context.

Instructions

Fetch the trace/log of a job, with an optional regex filter.

Two modes:

  • Default: return the last tail lines (token-efficient, good for "why did this just fail?").

  • With grep_pattern: return only matching lines with grep_context surrounding lines on each side — ideal for finding "ERROR" / "Traceback" in megabyte-scale CI logs without pulling the whole trace into context.

Examples:

  • "Why did job 789 fail" → default tail=100, look at the end of the log

  • "Show me the first stage output of job 789" → tail=5000 and scan for the stage separator

  • "Find every Traceback in job 789" → grep_pattern='Traceback', grep_context=5

  • "All ERROR lines from job 789" → grep_pattern='ERROR|FAIL'

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | Numeric job ID (from ``gitlab_get_pipeline_jobs``). | |
| tail | No | Return only the last N lines (1–5000). | 100 |
| grep_pattern | No | Optional regex — when set, returns only lines matching the pattern (with ``grep_context`` surrounding lines) instead of the tail. Great for finding errors in huge logs without downloading everything. Invalid regex falls back to literal substring match. | |
| grep_context | No | Surrounding lines to include around each grep match (0–20). | |
| project_path | No | GitLab project path (e.g. 'my-org/my-repo'). | GITLAB_PROJECT_PATH env var |
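The schema notes that an invalid regex falls back to a literal substring match. A minimal sketch of that fallback (my own illustration, assuming Python-style regex semantics on the server):

```python
import re

def match_line(line: str, grep_pattern: str) -> bool:
    """Try the pattern as a regex; fall back to substring on a bad regex."""
    try:
        return re.search(grep_pattern, line) is not None
    except re.error:
        # e.g. 'value[' is not a valid regex -> plain substring test
        return grep_pattern in line
```

This lets an agent pass raw error snippets like `value[` as `grep_pattern` without the call failing.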

Output Schema

| Name | Required |
| --- | --- |
| job_id | No |
| total_lines | No |
| showing_last | No |
| log | No |
| grep_pattern | No |
| grep_matches | No |
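Based on the field names above, a grep-mode response might look like the following. The values and field shapes are my assumption from the schema, not documented output:

```python
# Hypothetical grep-mode response for job 789
example_response = {
    "job_id": 789,
    "total_lines": 41233,        # lines in the full trace
    "showing_last": None,        # only set in tail mode
    "grep_pattern": "ERROR|FAIL",
    "grep_matches": 3,           # number of matching lines
    "log": "...matching lines with surrounding context...",
}
```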
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already indicate readOnly and idempotent. The description adds behavioral detail: how the tail and grep modes work, including the fallback to a literal substring match for an invalid regex. There are no contradictions with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise: it opens with a clear one-liner, splits the behavior into two modes, and closes with practical examples. Every sentence adds value, and the structure is logically organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters, a rich input schema, and an output schema, the description covers all necessary aspects: purpose, modes, parameter hints, and examples. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing clear parameter descriptions. The description adds value by explaining how parameters interact (e.g., grep_pattern and grep_context) and providing concrete usage examples that illustrate parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool fetches job logs, with two distinct modes (default tail and grep). It uses a specific verb ('Fetch the trace/log') and resource ('of a job'). While it does not explicitly distinguish itself from sibling tools, the purpose is unique and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with examples for each mode (e.g., 'Why did job 789 fail' → default tail, 'Find every Traceback' → grep). It explains when to use each mode but does not mention when not to use the tool or alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mshegolev/gitlab-ci-mcp'
