
list_vllm_containers

Lists all vLLM Docker containers to monitor running instances and manage container status across platforms.

Instructions

List all vLLM Docker containers

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `all` | No | Show all containers, including stopped ones | `false` |
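
For illustration, the optional `all` flag is read with plain dictionary defaulting. A minimal sketch (the argument dictionaries here are hypothetical example payloads, not from the source):

```python
# Hypothetical example payloads for the tool's single optional parameter.
args_default = {}                  # omit "all": only running containers
args_with_stopped = {"all": True}  # include stopped containers

# The handler falls back to False when the key is absent:
print(args_default.get("all", False))       # False
print(args_with_stopped.get("all", False))  # True
```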

Implementation Reference

  • The main handler function that implements `list_vllm_containers`. It gathers platform info, checks that the container runtime is running, builds and executes the `docker ps` or `podman ps` command (with an optional `-a` flag), and returns the formatted container listing as `TextContent`.
````python
async def list_vllm_containers(arguments: dict[str, Any]) -> list[TextContent]:
    """
    List all vLLM-related containers.

    Args:
        arguments: Dictionary containing:
            - all: Show all containers including stopped (default: False)

    Returns:
        List of TextContent with container information.
    """
    platform_info = await get_platform_info()
    if not platform_info.runtime_running:
        runtime_name = (
            platform_info.container_runtime.value.capitalize()
            if platform_info.container_runtime != ContainerRuntime.NONE
            else "Container runtime"
        )
        return [TextContent(type="text", text=f"❌ Error: {runtime_name} is not running.")]

    runtime_cmd = _get_runtime_cmd(platform_info.container_runtime)
    show_all = arguments.get("all", False)

    cmd = [runtime_cmd, "ps"]
    if show_all:
        cmd.append("-a")
    cmd.extend([
        "--format", "table {{.Names}}\t{{.Status}}\t{{.Ports}}\t{{.Image}}"
    ])

    exit_code, stdout, stderr = await _run_command(cmd)

    if exit_code != 0:
        return [TextContent(type="text", text=f"❌ Error listing containers: {stderr}")]

    if not stdout.strip() or stdout.strip() == "NAMES\tSTATUS\tPORTS\tIMAGE":
        return [TextContent(
            type="text",
            text="ℹ️ No containers found.\n\nUse `start_vllm` to create one."
        )]

    runtime_name = platform_info.container_runtime.value.capitalize()
    return [TextContent(
        type="text",
        text=f"## {runtime_name} Containers\n\n```\n{stdout}\n```"
    )]
````
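
The command construction above can be exercised on its own. A standalone sketch (the function name `build_ps_command` is ours, not from the source):

```python
def build_ps_command(runtime_cmd: str, show_all: bool) -> list[str]:
    """Mirror of the handler's command-building logic."""
    cmd = [runtime_cmd, "ps"]
    if show_all:
        cmd.append("-a")  # docker/podman flag: include stopped containers
    cmd.extend([
        "--format", "table {{.Names}}\t{{.Status}}\t{{.Ports}}\t{{.Image}}",
    ])
    return cmd

print(build_ps_command("docker", True))
```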
  • Tool registration with name, description, and input schema defining the 'all' boolean parameter that controls whether stopped containers are shown.
```python
Tool(
    name="list_vllm_containers",
    description="List all vLLM Docker containers",
    inputSchema={
        "type": "object",
        "properties": {
            "all": {
                "type": "boolean",
                "description": "Show all containers including stopped ones",
                "default": False,
            },
        },
    },
),
```
  • The tool invocation handler that routes calls to list_vllm_containers to the actual handler function.
```python
elif name == "list_vllm_containers":
    return await list_vllm_containers(arguments)
```
  • Helper function _get_runtime_cmd that returns the appropriate container runtime command ('podman' or 'docker') based on the ContainerRuntime enum.
```python
def _get_runtime_cmd(runtime: ContainerRuntime) -> str:
    """Get the command for the container runtime."""
    if runtime == ContainerRuntime.PODMAN:
        return "podman"
    return "docker"
```
  • Helper function _run_command that executes shell commands asynchronously using asyncio subprocess, handling timeouts and exceptions, and returning exit code, stdout, and stderr.
```python
async def _run_command(cmd: list[str], timeout: float = 30.0) -> tuple[int, str, str]:
    """Run a shell command and return exit code, stdout, stderr."""
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, stderr = await asyncio.wait_for(
            process.communicate(),
            timeout=timeout,
        )
        return (
            process.returncode or 0,
            stdout.decode("utf-8"),
            stderr.decode("utf-8"),
        )
    except asyncio.TimeoutError:
        return (1, "", "Command timed out")
    except Exception as e:
        return (1, "", str(e))
```
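
The helper can be tried in isolation. The sketch below restates it in self-contained form and adds a `process.kill()` on timeout to avoid leaking the child process, an adjustment that is not in the original:

```python
import asyncio


async def run_command(cmd: list[str], timeout: float = 30.0) -> tuple[int, str, str]:
    """Self-contained restatement of _run_command for experimentation."""
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout)
        return (process.returncode or 0, stdout.decode("utf-8"), stderr.decode("utf-8"))
    except asyncio.TimeoutError:
        process.kill()  # not in the original: avoid leaking the child on timeout
        return (1, "", "Command timed out")
    except Exception as e:
        return (1, "", str(e))


exit_code, out, err = asyncio.run(run_command(["echo", "hello"]))
print(exit_code, out.strip())  # 0 hello
```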
