get_vllm_logs
Retrieve logs from vLLM containers to monitor loading progress and diagnose errors in AI model deployment environments.
Instructions
Get logs from a vLLM container to check loading progress or errors
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| container_name | No | Name of the container | from server settings |
| tail | No | Number of log lines to show | 50 |
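As a concrete illustration, a client might pass an arguments dictionary like the following (the container name here is a made-up example, not from the source; both keys are optional and fall back to their defaults when omitted):

```python
# Hypothetical arguments payload for the get_vllm_logs tool.
# Both keys are optional: container_name falls back to the server settings,
# tail falls back to 50.
arguments = {
    "container_name": "vllm-server",  # assumed example name
    "tail": 100,
}

# The handler reads values with dict.get, so a missing key uses the default:
tail = arguments.get("tail", 50)
print(tail)  # → 100
```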
Implementation Reference
- Main implementation of get_vllm_logs function that fetches container logs using docker/podman runtime. Takes optional container_name and tail parameters, checks if container exists, runs the logs command, and returns formatted output with tips for following logs in real-time.
```python
async def get_vllm_logs(arguments: dict[str, Any]) -> list[TextContent]:
    """
    Get logs from a vLLM container.

    Args:
        arguments: Dictionary containing:
            - container_name: Name of container (default: from settings)
            - tail: Number of lines to show (default: 50)

    Returns:
        List of TextContent with container logs.
    """
    settings = get_settings()
    platform_info = await get_platform_info()

    if not platform_info.runtime_running:
        runtime_name = (
            platform_info.container_runtime.value.capitalize()
            if platform_info.container_runtime != ContainerRuntime.NONE
            else "Container runtime"
        )
        return [TextContent(type="text", text=f"❌ Error: {runtime_name} is not running.")]

    runtime_cmd = _get_runtime_cmd(platform_info.container_runtime)
    container_name = arguments.get("container_name", settings.container_name)
    tail = arguments.get("tail", 50)

    if not await _is_container_exists(container_name, platform_info.container_runtime):
        return [TextContent(
            type="text",
            text=f"❌ Container '{container_name}' does not exist."
        )]

    exit_code, stdout, stderr = await _run_command(
        [runtime_cmd, "logs", "--tail", str(tail), container_name]
    )

    if exit_code != 0:
        return [TextContent(type="text", text=f"❌ Error getting logs: {stderr}")]

    # Combine stdout and stderr (vLLM logs to stderr)
    logs = stdout + stderr

    return [TextContent(
        type="text",
        text=f"## Logs for '{container_name}' (last {tail} lines)\n\n```\n{logs}\n```\n\n"
             f"💡 **Tip:** Run `{runtime_cmd} logs -f {container_name}` in terminal to follow logs in real-time."
    )]
```

- src/vllm_mcp_server/server.py:267-283 (registration): Tool registration defining the get_vllm_logs tool schema with name, description, and inputSchema properties (container_name as string, tail as integer with a default of 50).
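The implementation delegates process execution to a `_run_command` helper that is not shown on this page. A minimal sketch of what such a helper might look like, using `asyncio` subprocesses and exercised here with `sys.executable` in place of a real container runtime, is:

```python
import asyncio
import sys

async def run_command(cmd: list[str]) -> tuple[int, str, str]:
    # Sketch of the assumed _run_command helper: spawn the process,
    # capture stdout/stderr, and return (exit_code, stdout, stderr).
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    return proc.returncode, stdout.decode(), stderr.decode()

async def main() -> str:
    # vLLM writes its logs to stderr, so the tool combines both streams,
    # mirroring the `logs = stdout + stderr` step in get_vllm_logs.
    code, out, err = await run_command(
        [sys.executable, "-c", "import sys; sys.stderr.write('loading model...')"]
    )
    return out + err if code == 0 else ""

logs = asyncio.run(main())
print(logs)
```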
```python
name="get_vllm_logs",
description="Get logs from a vLLM container to check loading progress or errors",
inputSchema={
    "type": "object",
    "properties": {
        "container_name": {
            "type": "string",
            "description": "Name of the container",
        },
        "tail": {
            "type": "integer",
            "description": "Number of log lines to show",
            "default": 50,
        },
    },
},
),
```

- src/vllm_mcp_server/server.py:361-362 (registration): Handler invocation that calls get_vllm_logs(arguments) when the tool is invoked via the MCP server.
```python
elif name == "get_vllm_logs":
    return await get_vllm_logs(arguments)
```

- src/vllm_mcp_server/server.py:26-33 (registration): Import statement that brings get_vllm_logs in from the vllm_mcp_server.tools.server_control module.
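The elif branch above sits inside a larger tool-dispatch function. A stripped-down sketch of that pattern, with the real handler calls replaced by placeholder strings, might look like:

```python
import asyncio

# Stripped-down sketch of the MCP tool-dispatch pattern; the real handlers
# (start_vllm, get_vllm_logs, ...) are replaced with placeholders here.
async def call_tool(name: str, arguments: dict) -> str:
    if name == "start_vllm":
        return "started"  # placeholder for: await start_vllm(arguments)
    elif name == "get_vllm_logs":
        # placeholder for: await get_vllm_logs(arguments)
        return f"logs (tail={arguments.get('tail', 50)})"
    raise ValueError(f"Unknown tool: {name}")

result = asyncio.run(call_tool("get_vllm_logs", {}))
print(result)  # → logs (tail=50)
```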
```python
from vllm_mcp_server.tools.server_control import (
    get_platform_status,
    get_vllm_logs,
    list_vllm_containers,
    restart_vllm,
    start_vllm,
    stop_vllm,
)
```

- src/vllm_mcp_server/tools/__init__.py:5-29 (registration): Module export of the get_vllm_logs function, making it available for import by other parts of the codebase.
```python
from vllm_mcp_server.tools.server_control import (
    get_platform_info,
    get_platform_status,
    get_vllm_logs,
    list_vllm_containers,
    restart_vllm,
    start_vllm,
    stop_vllm,
)
from vllm_mcp_server.tools.benchmark import run_benchmark

__all__ = [
    "handle_chat",
    "handle_complete",
    "list_models",
    "get_model_info",
    "start_vllm",
    "stop_vllm",
    "restart_vllm",
    "list_vllm_containers",
    "get_vllm_logs",
    "get_platform_info",
    "get_platform_status",
    "run_benchmark",
]
```