
stop_vllm

Stop a running vLLM Docker container, with options to remove it after stopping and to set a timeout before force killing.

Instructions

Stop a running vLLM Docker container

Input Schema

Name            Required  Description                                     Default
container_name  No        Name of the container to stop                   (configured default)
remove          No        Whether to remove the container after stopping  True
timeout         No        Seconds to wait before force killing            10
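
As a concrete illustration, a call to this tool might supply arguments like the following; the container name is a hypothetical example, and any field may be omitted to fall back to its default.

    arguments = {
        "container_name": "vllm-server",  # hypothetical; falls back to the configured name
        "remove": True,                   # schema default: remove after stopping
        "timeout": 10,                    # schema default: seconds before force kill
    }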

Implementation Reference

  • Main handler implementation for the stop_vllm tool. This async function stops and optionally removes a vLLM Docker/Podman container: it validates that the container runtime is running, checks whether the container exists and is running, executes the stop command with the requested timeout, and optionally removes the container. A direct-invocation example follows the listing.
    async def stop_vllm(arguments: dict[str, Any]) -> list[TextContent]:
        """
        Stop a running vLLM container.
    
        Args:
            arguments: Dictionary containing:
                - container_name: Name of container to stop (default: from settings)
                - remove: Whether to remove the container (default: True)
                - timeout: Seconds to wait before killing (default: 10)
    
        Returns:
            List of TextContent with the result.
        """
        settings = get_settings()
        
        platform_info = await get_platform_info()
        if not platform_info.runtime_running:
            runtime_name = platform_info.container_runtime.value.capitalize() if platform_info.container_runtime != ContainerRuntime.NONE else "Container runtime"
            return [TextContent(type="text", text=f"❌ Error: {runtime_name} is not running.")]
    
        runtime_cmd = _get_runtime_cmd(platform_info.container_runtime)
        container_name = arguments.get("container_name", settings.container_name)
        remove = arguments.get("remove", True)
        timeout = arguments.get("timeout", 10)
    
        # Check if running
        is_running = await _is_container_running(container_name, platform_info.container_runtime)
        exists = await _is_container_exists(container_name, platform_info.container_runtime)
        
        if not exists:
            return [TextContent(
                type="text",
                text=f"ℹ️ Container '{container_name}' does not exist."
            )]
        
        result_parts = []
        
        if is_running:
            # Stop container with timeout
            exit_code, _, stderr = await _run_command(
                [runtime_cmd, "stop", "-t", str(timeout), container_name]
            )
            if exit_code != 0:
                return [TextContent(
                    type="text",
                    text=f"❌ Failed to stop container: {stderr}"
                )]
            result_parts.append(f"✅ Container '{container_name}' stopped.")
        else:
            result_parts.append(f"ℹ️ Container '{container_name}' was not running.")
    
        # Remove container if requested
        if remove:
            exit_code, _, stderr = await _run_command([runtime_cmd, "rm", container_name])
            if exit_code == 0:
                result_parts.append(f"✅ Container '{container_name}' removed.")
            else:
                result_parts.append(f"⚠️ Failed to remove container: {stderr}")
    
        return [TextContent(type="text", text="\n".join(result_parts))]
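  • For illustration, the handler above could also be exercised directly, e.g. from a test; the container name below is an assumption, and in production the MCP dispatcher (shown in a later item) routes tool calls here.
    import asyncio

    async def main() -> None:
        # Hypothetical direct call: stop the container but keep it around for inspection.
        result = await stop_vllm({"container_name": "vllm-server", "remove": False, "timeout": 20})
        print(result[0].text)

    asyncio.run(main())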
  • Tool registration and schema definition for stop_vllm. Defines the Tool object with name, description, and inputSchema specifying container_name (string), remove (boolean, default True), and timeout (integer, default 10) parameters.
    Tool(
        name="stop_vllm",
        description="Stop a running vLLM Docker container",
        inputSchema={
            "type": "object",
            "properties": {
                "container_name": {
                    "type": "string",
                    "description": "Name of the container to stop",
                },
                "remove": {
                    "type": "boolean",
                    "description": "Whether to remove the container after stopping",
                    "default": True,
                },
                "timeout": {
                    "type": "integer",
                    "description": "Seconds to wait before force killing",
                    "default": 10,
                },
            },
        },
    ),
  • Handler invocation in the tool dispatch logic. Routes stop_vllm calls to the imported handler function with the provided arguments; a sketch of the surrounding dispatcher follows the snippet.
    elif name == "stop_vllm":
        return await stop_vllm(arguments)
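  • For orientation, a minimal sketch of how this dispatch might sit inside an MCP call_tool handler, assuming the MCP Python SDK's decorator style; the server name here is an illustrative assumption.
    from mcp.server import Server
    from mcp.types import TextContent

    server = Server("vllm-mcp-server")  # assumed server name

    @server.call_tool()
    async def handle_call_tool(name: str, arguments: dict) -> list[TextContent]:
        if name == "stop_vllm":
            return await stop_vllm(arguments)
        raise ValueError(f"Unknown tool: {name}")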
  • Helper functions _is_container_running and _is_container_exists, used by stop_vllm to check container status before attempting to stop it; a usage sketch follows the listing.
    async def _is_container_running(container_name: str, runtime: ContainerRuntime) -> bool:
        """Check if a container is running."""
        cmd = _get_runtime_cmd(runtime)
        exit_code, stdout, _ = await _run_command([
            cmd, "ps", "--filter", f"name=^{container_name}$", "--format", "{{.Names}}"
        ])
        return exit_code == 0 and container_name in stdout.strip().split("\n")
    
    
    async def _is_container_exists(container_name: str, runtime: ContainerRuntime) -> bool:
        """Check if a container exists (running or stopped)."""
        cmd = _get_runtime_cmd(runtime)
        exit_code, stdout, _ = await _run_command([
            cmd, "ps", "-a", "--filter", f"name=^{container_name}$", "--format", "{{.Names}}"
        ])
        return exit_code == 0 and container_name in stdout.strip().split("\n")
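  • Note that Docker and Podman treat the name filter as a regex, so the helpers anchor it (^...$) and additionally check for exact membership in the output. A hypothetical pre-flight check built on them (ContainerRuntime.DOCKER is an assumed enum member):
    async def container_state(name: str) -> str:
        """Classify a container as absent, stopped, or running."""
        if not await _is_container_exists(name, ContainerRuntime.DOCKER):
            return "absent"
        if await _is_container_running(name, ContainerRuntime.DOCKER):
            return "running"
        return "stopped"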
  • Helper function _run_command, which executes shell commands asynchronously; stop_vllm uses it to run docker/podman commands for stopping and removing containers. A note on its timeout behavior follows the listing.
    async def _run_command(cmd: list[str], timeout: float = 30.0) -> tuple[int, str, str]:
        """Run a shell command and return exit code, stdout, stderr."""
        try:
            process = await asyncio.create_subprocess_exec(
                *cmd,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
            )
            stdout, stderr = await asyncio.wait_for(
                process.communicate(),
                timeout=timeout,
            )
            return (
                process.returncode or 0,
                stdout.decode("utf-8"),
                stderr.decode("utf-8"),
            )
        except asyncio.TimeoutError:
            return (1, "", "Command timed out")
        except Exception as e:
            return (1, "", str(e))
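  • One subtlety worth noting: _run_command itself defaults to a 30-second wall clock, and stop_vllm does not pass a larger value when stopping, so a stop timeout near or above 30 seconds can surface as "Command timed out" rather than a graceful stop. A hypothetical standalone invocation showing the result tuple:
    async def list_containers() -> None:
        # "docker" is assumed to be on PATH; any CLI would work here.
        exit_code, stdout, stderr = await _run_command(["docker", "ps", "-a"], timeout=10.0)
        if exit_code == 0:
            print(stdout)
        else:
            print(f"command failed: {stderr}")
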
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but states only the basic action. It doesn't disclose critical behavioral traits: whether this is destructive (it likely is), what happens if the container doesn't exist, whether it requires specific permissions, or what the response looks like. For a tool that stops containers, this is a significant gap in safety and operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without any wasted words. It's perfectly front-loaded with the essential information. Every word earns its place in this minimal but complete statement of function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a potentially destructive operation with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what 'stop' entails (graceful shutdown vs force kill), what happens by default (the remove parameter defaults to true), or what the tool returns. For a container management tool, more operational context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no parameter information beyond what's in the schema. The baseline score of 3 reflects adequate coverage through the schema alone, but the description provides no additional semantic context about parameter interactions or implications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Stop') and target resource ('a running vLLM Docker container'), distinguishing it from sibling tools like 'restart_vllm' or 'list_vllm_containers'. It uses precise technical terminology that leaves no ambiguity about what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'restart_vllm' or 'list_vllm_containers'. It doesn't mention prerequisites (e.g., that a container must be running) or typical use cases (e.g., cleanup, resource management). The agent must infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
