get_vllm_logs

Retrieve logs from vLLM containers to monitor loading progress and diagnose errors in AI model deployment environments.

Instructions

Get logs from a vLLM container to check loading progress or errors

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| container_name | No | Name of the container | from settings |
| tail | No | Number of log lines to show | 50 |
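
Both parameters are optional; a call might pass arguments like the following (the container name here is a hypothetical example, not a documented default):

```python
# Hypothetical arguments payload for a get_vllm_logs tool call.
# Omitted keys fall back to the defaults listed above.
arguments = {
    "container_name": "vllm-server",  # example name, not from the schema
    "tail": 100,                      # override the default of 50
}
```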

Implementation Reference

  • Main implementation of the get_vllm_logs function, which fetches container logs via the docker/podman runtime. It takes optional container_name and tail parameters, checks that the container exists, runs the logs command, and returns formatted output with a tip for following logs in real time (see the usage sketch after this list).
    async def get_vllm_logs(arguments: dict[str, Any]) -> list[TextContent]:
        """
        Get logs from a vLLM container.
    
        Args:
            arguments: Dictionary containing:
                - container_name: Name of container (default: from settings)
                - tail: Number of lines to show (default: 50)
    
        Returns:
            List of TextContent with container logs.
        """
        settings = get_settings()
        
        platform_info = await get_platform_info()
        if not platform_info.runtime_running:
            runtime_name = platform_info.container_runtime.value.capitalize() if platform_info.container_runtime != ContainerRuntime.NONE else "Container runtime"
            return [TextContent(type="text", text=f"❌ Error: {runtime_name} is not running.")]
    
        runtime_cmd = _get_runtime_cmd(platform_info.container_runtime)
        container_name = arguments.get("container_name", settings.container_name)
        tail = arguments.get("tail", 50)
    
        if not await _is_container_exists(container_name, platform_info.container_runtime):
            return [TextContent(
                type="text",
                text=f"❌ Container '{container_name}' does not exist."
            )]
    
        exit_code, stdout, stderr = await _run_command(
            [runtime_cmd, "logs", "--tail", str(tail), container_name]
        )
        
        if exit_code != 0:
            return [TextContent(type="text", text=f"❌ Error getting logs: {stderr}")]
        
        # Combine stdout and stderr (vLLM logs to stderr)
        logs = stdout + stderr
        
        return [TextContent(
            type="text",
            text=f"## Logs for '{container_name}' (last {tail} lines)\n\n```\n{logs}\n```\n\n"
                 f"💡 **Tip:** Run `{runtime_cmd} logs -f {container_name}` in terminal to follow logs in real-time."
        )]
  • Tool registration defining the get_vllm_logs tool schema with name, description, and inputSchema properties (container_name as string, tail as integer with default 50).
        name="get_vllm_logs",
        description="Get logs from a vLLM container to check loading progress or errors",
        inputSchema={
            "type": "object",
            "properties": {
                "container_name": {
                    "type": "string",
                    "description": "Name of the container",
                },
                "tail": {
                    "type": "integer",
                    "description": "Number of log lines to show",
                    "default": 50,
                },
            },
        },
    ),
  • Handler invocation that calls get_vllm_logs(arguments) when the tool is invoked via the MCP server.
    elif name == "get_vllm_logs":
        return await get_vllm_logs(arguments)
  • Import statement that brings get_vllm_logs from vllm_mcp_server.tools.server_control module.
    from vllm_mcp_server.tools.server_control import (
        get_platform_status,
        get_vllm_logs,
        list_vllm_containers,
        restart_vllm,
        start_vllm,
        stop_vllm,
    )
  • Module export of get_vllm_logs function, making it available for import by other parts of the codebase.
    from vllm_mcp_server.tools.server_control import (
        get_platform_info,
        get_platform_status,
        get_vllm_logs,
        list_vllm_containers,
        restart_vllm,
        start_vllm,
        stop_vllm,
    )
    from vllm_mcp_server.tools.benchmark import run_benchmark
    
    __all__ = [
        "handle_chat",
        "handle_complete",
        "list_models",
        "get_model_info",
        "start_vllm",
        "stop_vllm",
        "restart_vllm",
        "list_vllm_containers",
        "get_vllm_logs",
        "get_platform_info",
        "get_platform_status",
        "run_benchmark",
    ]
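For reference, a minimal sketch of calling the function directly, using the import path shown above; the runtime and container-existence checks inside the function still apply:

```python
import asyncio

from vllm_mcp_server.tools.server_control import get_vllm_logs

async def main() -> None:
    # Fetch the last 20 log lines from the default container configured
    # in settings; pass "container_name" to target a different container.
    results = await get_vllm_logs({"tail": 20})
    for content in results:
        print(content.text)

asyncio.run(main())
```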
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the purpose ('check loading progress or errors') but doesn't describe what the logs contain, their format, whether they're real-time or historical, authentication requirements, rate limits, or error conditions. For a read operation with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
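
One way to close that gap is to attach behavioral hints at registration time. A sketch, assuming the MCP Python SDK's `ToolAnnotations` type (available in recent SDK versions); the expanded description wording is illustrative:

```python
from mcp.types import Tool, ToolAnnotations

# Sketch only: the same registration with behavioral hints attached.
# readOnlyHint marks the tool as non-mutating; idempotentHint marks
# repeated calls as safe.
tool = Tool(
    name="get_vllm_logs",
    description=(
        "Get logs from a vLLM container to check loading progress or errors. "
        "Returns a historical snapshot of log lines, not a live stream."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "container_name": {"type": "string", "description": "Name of the container"},
            "tail": {"type": "integer", "description": "Number of log lines to show", "default": 50},
        },
    },
    annotations=ToolAnnotations(readOnlyHint=True, idempotentHint=True),
)
```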

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a simple tool and front-loads the essential information ('Get logs from a vLLM container') followed by the specific use case.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with full schema coverage but no annotations and no output schema, the description provides basic purpose but lacks behavioral context needed for a logging tool. It doesn't explain what the logs contain, their format, or how to interpret them for 'checking progress or errors.' For a tool that presumably returns textual log data, more guidance would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('container_name' and 'tail') with their types and defaults. The description doesn't add any parameter-specific information beyond what's in the schema, such as example container names or clarification of tail behavior. A baseline score of 3 is appropriate when the schema does the documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
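
For instance, the schema entries themselves could carry that intent. A sketch; the example container name and the `minimum` constraint are assumptions added for illustration, not part of the shipped tool:

```python
# Illustrative richer inputSchema for get_vllm_logs.
input_schema = {
    "type": "object",
    "properties": {
        "container_name": {
            "type": "string",
            "description": (
                "Container to read logs from (e.g. 'vllm-server'); "
                "defaults to the server's configured container name"
            ),
        },
        "tail": {
            "type": "integer",
            "description": "Number of trailing log lines to return",
            "default": 50,
            "minimum": 1,
        },
    },
}
```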

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get logs') and the target resource ('from a vLLM container'), with a specific purpose ('to check loading progress or errors'). It distinguishes itself from siblings like 'vllm_status' or 'get_platform_status' by focusing on container logs rather than status information. However, it doesn't explicitly differentiate itself from other log-related tools that might be added in the future.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('to check loading progress or errors') but doesn't provide explicit guidance on when to use this tool versus alternatives like 'vllm_status' for general status or 'list_vllm_containers' to identify containers first. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
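
A description following that pattern might read as below; the wording is illustrative, and the sibling tool names are taken from this server's own tool list:

```python
# Illustrative description rewrite with explicit tool-selection guidance.
description = (
    "Get logs from a vLLM container to check loading progress or errors. "
    "Use list_vllm_containers first if the container name is unknown; "
    "use get_platform_status for runtime health rather than log output."
)
```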
