
MCP Background Job Server

by dylan-gluck

tail_job_output

Retrieve the last N lines of stdout and stderr output from a background job to monitor execution progress and debug issues.

Instructions

Get the last N lines of stdout and stderr from a job.

Args:

  • job_id: The UUID of the job to tail
  • lines: Number of lines to return (1-1000, default 50)

Returns: ProcessOutput containing the last N lines of stdout and stderr
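For illustration, an MCP tools/call request for this tool might look like the following sketch; the UUID and line count are hypothetical values, not taken from the source:

```python
# Hypothetical MCP tools/call payload for tail_job_output.
# The job_id UUID and the lines value are illustrative only.
request = {
    "method": "tools/call",
    "params": {
        "name": "tail_job_output",
        "arguments": {
            "job_id": "123e4567-e89b-12d3-a456-426614174000",
            "lines": 100,
        },
    },
}
```

The server responds with a ProcessOutput object containing the tailed stdout and stderr strings.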

Input Schema

Name     Required   Description                 Default
job_id   Yes        Job ID to tail              —
lines    No         Number of lines to return   50

Output Schema

Name     Required   Description                 Default
stderr   Yes        Standard error content      —
stdout   Yes        Standard output content     —

Implementation Reference

  • The primary MCP tool handler for 'tail_job_output', decorated with @mcp.tool() for automatic registration. It validates input via Pydantic Fields, maps internal exceptions to ToolError, and delegates the core logic to JobManager.
    @mcp.tool()
    async def tail_job_output(
        job_id: str = Field(..., description="Job ID to tail"),
        lines: int = Field(50, description="Number of lines to return", ge=1, le=1000),
    ) -> ProcessOutput:
        """Get the last N lines of stdout and stderr from a job.
    
        Args:
            job_id: The UUID of the job to tail
            lines: Number of lines to return (1-1000, default 50)
    
        Returns:
            ProcessOutput containing the last N lines of stdout and stderr
        """
        try:
            job_manager = get_job_manager()
            job_output = await job_manager.tail_job_output(job_id, lines)
            return job_output
        except KeyError:
            raise ToolError(f"Job {job_id} not found")
        except ValueError as e:
            raise ToolError(f"Invalid parameter: {str(e)}")
        except Exception as e:
            logger.error(f"Error tailing job output for {job_id}: {e}")
            raise ToolError(f"Failed to tail job output: {str(e)}")
  • Pydantic model defining the output schema for tail_job_output, containing stdout and stderr strings.
    class ProcessOutput(BaseModel):
        """Structured stdout/stderr output from a process."""
    
        stdout: str = Field(..., description="Standard output content")
        stderr: str = Field(..., description="Standard error content")
  • JobManager.tail_job_output helper method that validates job existence and lines parameter, then delegates to ProcessWrapper.tail_output.
    async def tail_job_output(self, job_id: str, lines: int) -> ProcessOutput:
        """Get last N lines of output.
    
        Args:
            job_id: Job identifier
            lines: Number of lines to return
    
        Returns:
            ProcessOutput with last N lines of stdout and stderr
    
        Raises:
            KeyError: If job_id doesn't exist
            ValueError: If lines is not positive
        """
        if job_id not in self._jobs:
            raise KeyError(f"Job {job_id} not found")
    
        if lines <= 0:
            raise ValueError("Number of lines must be positive")
    
        process_wrapper = self._processes.get(job_id)
        if process_wrapper is None:
            return ProcessOutput(stdout="", stderr="")
    
        return process_wrapper.tail_output(lines)
  • ProcessWrapper.tail_output core implementation that thread-safely extracts the last N lines from stdout and stderr ring buffers (deques).
    def tail_output(self, lines: int) -> ProcessOutput:
        """Get last N lines of output.
    
        Args:
            lines: Number of lines to return from the end
    
        Returns:
            ProcessOutput containing last N lines of stdout and stderr
        """
        with self._buffer_lock:
            # Get last N lines from each buffer
            stdout_lines = list(self.stdout_buffer)[-lines:] if lines > 0 else []
            stderr_lines = list(self.stderr_buffer)[-lines:] if lines > 0 else []
    
        return ProcessOutput(
            stdout="\n".join(stdout_lines), stderr="\n".join(stderr_lines)
        )
  • Pydantic model defining the input schema for tail_job_output tool (though inline Fields are used in handler).
    class TailInput(BaseModel):
        """Input for tail tool."""
    
        job_id: str = Field(..., description="Job ID to tail")
        lines: int = Field(50, description="Number of lines to return", ge=1, le=1000)
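As a minimal sketch (assuming Pydantic v2), the ge/le constraints on lines mean out-of-range values are rejected before the handler ever runs, while omitting lines applies the default of 50; the UUID below is illustrative:

```python
from pydantic import BaseModel, Field, ValidationError

class TailInput(BaseModel):
    """Input for tail tool."""

    job_id: str = Field(..., description="Job ID to tail")
    lines: int = Field(50, description="Number of lines to return", ge=1, le=1000)

# Omitting lines applies the default
ok = TailInput(job_id="123e4567-e89b-12d3-a456-426614174000")
print(ok.lines)  # 50

# Out-of-range values fail validation before any job lookup happens
try:
    TailInput(job_id="123e4567-e89b-12d3-a456-426614174000", lines=0)
except ValidationError:
    print("rejected")  # lines must be >= 1
```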
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the tool's read-only nature (implied by 'Get') and the lines parameter constraints (1-1000, default 50), but lacks details on permissions, rate limits, error conditions, or whether the job must be active. It adds some behavioral context but is incomplete for a tool that interacts with job execution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by structured Args and Returns sections. Every sentence earns its place by providing essential information without redundancy. The formatting is clear and efficient, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, job interaction), no annotations, and the presence of an output schema (Returns section), the description is mostly complete. It covers purpose, parameters, and return values, but lacks behavioral details like error handling or job state requirements, which would be beneficial for full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying that 'lines' refers to 'Number of lines to return' for both stdout and stderr, and specifies the range (1-1000) and default (50), which enhances understanding beyond the schema's basic documentation. However, it does not explain the 'job_id' parameter further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the last N lines') and resource ('stdout and stderr from a job'), distinguishing it from siblings like get_job_output (likely full output) and get_job_status (status only). The verb 'tail' is precise and matches the tool name, providing immediate understanding of its function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for viewing recent job output, but does not explicitly state when to use this tool versus alternatives like get_job_output or get_job_status. No guidance is provided on prerequisites, such as needing a running or completed job, or exclusions for when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dylan-gluck/mcp-background-job'
