
list_models

Retrieve available AI models from specific ComfyUI folders like checkpoints, LoRAs, or VAEs to identify files for workflow automation.

Instructions

List available models in a folder.

    Args:
        folder: Model folder name. Options:
            - checkpoints: Full model checkpoints
            - loras: LoRA fine-tuning files
            - vae: VAE decoders
            - embeddings: Text embeddings
            - controlnet: ControlNet models
            - upscale_models: Upscaling models
            - clip_vision: CLIP vision encoders

    Returns list of model filenames in the folder.
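The folder names above correspond to subdirectories of ComfyUI's models/ directory, which the tool resolves to a REST path. A minimal sketch of that mapping (the `MODEL_FOLDERS` dict and `models_endpoint` helper are illustrative, not part of the server):

```python
# Folder options accepted by list_models, as documented above.
MODEL_FOLDERS = {
    "checkpoints": "Full model checkpoints",
    "loras": "LoRA fine-tuning files",
    "vae": "VAE decoders",
    "embeddings": "Text embeddings",
    "controlnet": "ControlNet models",
    "upscale_models": "Upscaling models",
    "clip_vision": "CLIP vision encoders",
}

def models_endpoint(folder: str = "checkpoints") -> str:
    """Build the ComfyUI REST path the tool queries for a given folder."""
    if folder not in MODEL_FOLDERS:
        raise ValueError(f"unknown model folder: {folder}")
    return f"/models/{folder}"

print(models_endpoint("loras"))  # → /models/loras
```

Note that the real handler (shown under Implementation Reference) passes the folder through unchecked and maps a 404 response to an empty list rather than raising.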
    

Input Schema

Name     Required  Description                                        Default
folder   No        Model folder: checkpoints, loras, vae, embeddings  checkpoints

Implementation Reference

  • The handler function for the 'list_models' tool. It accepts a folder parameter (default 'checkpoints') and returns the list of models from ComfyUI's /models/{folder} endpoint, handling errors appropriately.
    # Assumed imports for this snippet: Field from pydantic, HTTPError from
    # urllib.error; Context and comfy_get come from the server's own modules.
    from urllib.error import HTTPError

    from pydantic import Field

    def list_models(
        folder: str = Field(
            default="checkpoints",
            description="Model folder: checkpoints, loras, vae, embeddings",
        ),
        ctx: Context = None,
    ) -> list:
        """List available models in a folder.
    
        Args:
            folder: Model folder name. Options:
                - checkpoints: Full model checkpoints
                - loras: LoRA fine-tuning files
                - vae: VAE decoders
                - embeddings: Text embeddings
                - controlnet: ControlNet models
                - upscale_models: Upscaling models
                - clip_vision: CLIP vision encoders
    
        Returns list of model filenames in the folder.
        """
        if ctx:
            ctx.info(f"Listing models in: {folder}")
        try:
            return comfy_get(f"/models/{folder}")
        except HTTPError as e:
            if e.code == 404:
                return []
            return [f"Error: {e}"]
        except Exception as e:
            return [f"Error: {e}"]
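The handler's error contract can be exercised without a running ComfyUI instance by stubbing `comfy_get`. This sketch (the stub names are hypothetical) mirrors the behavior above: a 404 yields an empty list, and any other failure yields a single "Error: ..." entry:

```python
from urllib.error import HTTPError

def comfy_get_stub(path: str, fail_with=None) -> list:
    """Stand-in for comfy_get: return a fake listing, or raise on demand."""
    if fail_with is not None:
        raise fail_with
    return ["sd_xl_base_1.0.safetensors"]

def list_models_sketch(folder: str = "checkpoints", fail_with=None) -> list:
    # Same try/except shape as the handler above.
    try:
        return comfy_get_stub(f"/models/{folder}", fail_with=fail_with)
    except HTTPError as e:
        if e.code == 404:
            return []  # unknown folder: empty listing, not an error
        return [f"Error: {e}"]
    except Exception as e:
        return [f"Error: {e}"]

print(list_models_sketch())  # → ['sd_xl_base_1.0.safetensors']
print(list_models_sketch(fail_with=HTTPError("http://127.0.0.1:8188", 404, "Not Found", None, None)))  # → []
```

Returning error strings in the list (rather than raising) keeps the tool's return type stable for MCP clients, at the cost of callers needing to inspect the entries.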
  • Registers the discovery tools (including list_models) by calling register_discovery_tools(mcp) as part of all tools registration.
    from .discovery import register_discovery_tools
    from .execution import register_execution_tools
    from .system import register_system_tools
    from .workflow import register_workflow_tools
    
    __all__ = [
        "register_system_tools",
        "register_discovery_tools",
        "register_workflow_tools",
        "register_execution_tools",
    ]
    
    
    def register_all_tools(mcp):
        """Register all tools with the MCP server."""
        register_system_tools(mcp)
        register_discovery_tools(mcp)
        register_workflow_tools(mcp)
        register_execution_tools(mcp)
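The registration bodies themselves are not shown here. Assuming the common FastMCP pattern, each register_*_tools function wraps its handlers with `@mcp.tool()`; a self-contained sketch (`FakeMCP` is a stand-in for the real server object, and the handler body is a placeholder):

```python
class FakeMCP:
    """Minimal stand-in for an MCP server that records registered tools."""
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_discovery_tools(mcp):
    @mcp.tool()
    def list_models(folder: str = "checkpoints") -> list:
        return []  # the real handler queries ComfyUI's /models/{folder}

mcp = FakeMCP()
register_discovery_tools(mcp)
print(sorted(mcp.tools))  # → ['list_models']
```

Grouping registration per module (system, discovery, workflow, execution) keeps each tool family independently importable while `register_all_tools` wires them up in one call.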
  • Top-level registration call to register_all_tools(mcp), which includes the discovery tools containing list_models.
    import logging

    from mcp.server.fastmcp import FastMCP  # assumption: may instead be `from fastmcp import FastMCP`

    from .tools import register_all_tools
    
    # Configure logging
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    )
    logger = logging.getLogger(__name__)
    
    # Server instructions for Claude Code
    SERVER_INSTRUCTIONS = """
    ## ComfyUI MCP Server - Workflow Guide
    
    ### Workflow Formats (CRITICAL)
    - **API format**: `{"node_id": {"class_type": "...", "inputs": {...}}}` - For MCP execution
    - **UI format**: `{"nodes": [...], "links": [...], "version": ...}` - For ComfyUI editor only
    - **IMPORTANT**: Only API format can be executed. UI format will be rejected with an error.
    
    ### Creating Workflows (Step-by-Step)
    
    1. **CREATE** - Start empty or from template:
       ```
       wf = create_workflow()
       # Or: wf = get_workflow_template("fal-flux-dev")
       ```
    
    2. **DISCOVER** - Find nodes and parameters:
       ```
       list_nodes(filter="Luma")     # Find node names
       get_node_info("LumaImageToVideoNode")  # Get required inputs
       ```
    
    3. **BUILD** - Add nodes with connections:
       ```
       wf = add_node(wf, "1", "LoadImage", {"image": "input.jpg"})
       wf = add_node(wf, "2", "SomeNode", {
           "param": "value",
           "input_image": ["1", 0]  # Connect to node "1", output 0
       })
       ```
    
    4. **VALIDATE** - Check before saving:
       ```
       validation = validate_workflow(wf)
       # Check validation["valid"] and validation["errors"]
       ```
    
    5. **SAVE** - Choose format by purpose:
       ```
       save_workflow(wf, "name", format="api")  # → workflows-api/ (execution)
       save_workflow(wf, "name", format="ui")   # → workflows-ui/ (editor)
       ```
    
    ### Execution
    - `run_workflow("name.json", inputs={...})` - Run saved API workflow
    - `execute_workflow(wf, output_node_id="9")` - Run workflow dict directly
    - `generate_image("prompt")` - Simple interface with default workflow
    
    ### Common Errors
    - "UI format detected": Use API format for execution
    - "Unknown node type": Check with list_nodes()
    - "Missing required input": Check with get_node_info()
    
    ### Node Connections Format
    Connections are `["source_node_id", output_index]`:
    - `"image": ["1", 0]` connects to node "1", first output (index 0)
    """
    
    # Initialize MCP server with instructions
    mcp = FastMCP("Comfy MCP Server", instructions=SERVER_INSTRUCTIONS)
    
    # Register all tools
    register_all_tools(mcp)
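The API-format and connection conventions described in SERVER_INSTRUCTIONS can be illustrated with plain dicts. This sketch re-implements `add_node` locally (it is not the server's actual helper) and checks the `["source_node_id", output_index]` connection shape:

```python
def add_node(wf: dict, node_id: str, class_type: str, inputs: dict) -> dict:
    """Return a new API-format workflow with one node added."""
    out = dict(wf)
    out[node_id] = {"class_type": class_type, "inputs": inputs}
    return out

def is_connection(value) -> bool:
    """A connection is ["source_node_id", output_index]."""
    return (isinstance(value, list) and len(value) == 2
            and isinstance(value[0], str) and isinstance(value[1], int))

wf = {}
wf = add_node(wf, "1", "LoadImage", {"image": "input.jpg"})
wf = add_node(wf, "2", "SomeNode", {"param": "value", "input_image": ["1", 0]})

print(is_connection(wf["2"]["inputs"]["input_image"]))  # → True
```

Because node IDs are dict keys, the API format has no explicit link list; the UI format's `links` array is the main structural difference, which is why only API-format workflows are executable here.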
