restart_vllm

Restart a vLLM Docker container to recover from errors or apply configuration changes, so model serving can resume without recreating the container.

Instructions

Restart a vLLM Docker container

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| container_name | No | Name of the container to restart | from server settings |
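Since `container_name` is optional, the handler falls back to the name configured in the server's settings when it is omitted. A minimal sketch of that lookup, where `"vllm-server"` is a hypothetical stand-in for the settings default:

```python
# Stand-in for settings.container_name; the real value comes from
# the server's configuration, not this constant.
DEFAULT_CONTAINER = "vllm-server"

def resolve_container_name(arguments: dict) -> str:
    # Mirrors the handler's lookup: an explicit name wins,
    # otherwise the configured default is used.
    return arguments.get("container_name", DEFAULT_CONTAINER)
```

So a client may pass `{}` to restart the default container, or `{"container_name": "my-vllm"}` to target a specific one.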

Implementation Reference

  • The main handler function that restarts a vLLM container. It checks if the runtime is running, gets the container name, verifies the container exists, executes the restart command, and returns appropriate success/error messages.
    async def restart_vllm(arguments: dict[str, Any]) -> list[TextContent]:
        """
        Restart a vLLM container.
    
        Args:
            arguments: Dictionary containing:
                - container_name: Name of container to restart (default: from settings)
    
        Returns:
            List of TextContent with the result.
        """
        settings = get_settings()
        
        platform_info = await get_platform_info()
        if not platform_info.runtime_running:
            runtime_name = platform_info.container_runtime.value.capitalize() if platform_info.container_runtime != ContainerRuntime.NONE else "Container runtime"
            return [TextContent(type="text", text=f"❌ Error: {runtime_name} is not running.")]
    
        runtime_cmd = _get_runtime_cmd(platform_info.container_runtime)
        container_name = arguments.get("container_name", settings.container_name)
    
        if not await _is_container_exists(container_name, platform_info.container_runtime):
            return [TextContent(
                type="text",
                text=f"❌ Container '{container_name}' does not exist.\n"
                     f"Use `start_vllm` to create a new container."
            )]
    
        exit_code, _, stderr = await _run_command([runtime_cmd, "restart", container_name])
        
        if exit_code != 0:
            return [TextContent(
                type="text",
                text=f"❌ Failed to restart container: {stderr}"
            )]
    
        return [TextContent(
            type="text",
            text=f"✅ Container '{container_name}' restarted.\n\n"
                 f"⏳ The model may take a minute to reload. Use `vllm_status` to check."
        )]
  • Registration of the restart_vllm tool with its input schema (container_name optional string parameter) and description.
        name="restart_vllm",
        description="Restart a vLLM Docker container",
        inputSchema={
            "type": "object",
            "properties": {
                "container_name": {
                    "type": "string",
                    "description": "Name of the container to restart",
                },
            },
        },
    ),
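Given that registration, an MCP client invokes the tool via a standard `tools/call` request. A hedged sketch of such a payload, where `"my-vllm"` is a hypothetical container name:

```python
import json

# Example JSON-RPC tools/call request for restart_vllm.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "restart_vllm",
        # Optional per the schema; omit to restart the default container.
        "arguments": {"container_name": "my-vllm"},
    },
}
payload = json.dumps(request)
```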
  • Routing logic that maps the 'restart_vllm' tool name to the handler function call.
    elif name == "restart_vllm":
        return await restart_vllm(arguments)
  • Import statement that imports restart_vllm from the server_control module.
    from vllm_mcp_server.tools.server_control import (
        get_platform_status,
        get_vllm_logs,
        list_vllm_containers,
        restart_vllm,
        start_vllm,
        stop_vllm,
    )
  • Export of restart_vllm from the tools package's public API.
    restart_vllm,
