Glama

run_workflow

Execute saved ComfyUI workflows to generate images with optional input overrides and output node selection.

Instructions

Execute a saved workflow file.

    Args:
        workflow_name: Workflow filename (e.g., 'flux-dev.json')
        inputs: Optional input overrides, e.g., {"6": {"text": "new prompt"}}
        output_node_id: Node ID to get output from (uses default if not set)

    Returns the generated image or error message.
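For example, the arguments for a call that overrides one node's prompt might look like the following (the node IDs "6" and "9" and the "text" field are illustrative and depend on the specific workflow file):

```python
# Illustrative run_workflow arguments; node IDs "6"/"9" and the "text"
# field are assumptions that depend on the saved workflow.
args = {
    "workflow_name": "flux-dev.json",
    "inputs": {"6": {"text": "a watercolor fox in a misty forest"}},
    "output_node_id": "9",
}
```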
    

Input Schema

Name            Required  Description           Default
workflow_name   Yes       Workflow filename     —
inputs          No        Node input overrides  None
output_node_id  No        Output node ID        None

Implementation Reference

  • The @mcp.tool()-decorated run_workflow function implements the core tool logic: it loads the workflow from file, rejects UI-format workflows, applies input overrides, determines the output node, and delegates to _execute_workflow.
    @mcp.tool()
    def run_workflow(
        workflow_name: str = Field(description="Workflow filename"),
        inputs: dict = Field(default=None, description="Node input overrides"),
        output_node_id: str = Field(default=None, description="Output node ID"),
        ctx: Context = None,
    ):
        """Execute a saved workflow file.
    
        Args:
            workflow_name: Workflow filename (e.g., 'flux-dev.json')
            inputs: Optional input overrides, e.g., {"6": {"text": "new prompt"}}
            output_node_id: Node ID to get output from (uses default if not set)
    
        Returns the generated image or error message.
        """
        if not settings.workflows_dir:
            return "Error: COMFY_WORKFLOWS_DIR not configured"
    
        wf_path = Path(settings.workflows_dir) / workflow_name
        if not wf_path.exists():
            return f"Error: Workflow '{workflow_name}' not found"
    
        if ctx:
            ctx.info(f"Loading workflow: {workflow_name}")
    
        with open(wf_path) as f:
            workflow = json.load(f)
    
        # Check for UI format workflows
        if is_ui_format(workflow):
            return (
                f"Error: Workflow '{workflow_name}' is in UI format (has nodes/widgets_values). "
                "UI format uses positional arrays that can cause parameter misalignment errors. "
                "Please re-export the workflow from ComfyUI using 'Export (API Format)' or use "
                "convert_workflow_to_ui() to create a UI version from an API format workflow."
            )
    
        # Apply input overrides
        if inputs:
            for node_id, values in inputs.items():
                if node_id in workflow:
                    if isinstance(values, dict):
                        workflow[node_id]["inputs"].update(values)
                    else:
                        # Simple value - try to set text input
                        if "text" in workflow[node_id]["inputs"]:
                            workflow[node_id]["inputs"]["text"] = values
    
        out_node = output_node_id or settings.output_node_id
        if not out_node:
            return "Error: No output_node_id specified"
    
        return _execute_workflow(workflow, out_node, ctx)
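    The override loop above merges a per-node dict into that node's inputs, and treats a bare (non-dict) value as a replacement for the node's text field. A standalone sketch of that merge, using a made-up one-node workflow:

    ```python
    def apply_overrides(workflow: dict, inputs: dict) -> dict:
        """Merge per-node input overrides into an API-format workflow (sketch)."""
        for node_id, values in inputs.items():
            if node_id not in workflow:
                continue  # unknown node IDs are silently ignored, as in run_workflow
            if isinstance(values, dict):
                workflow[node_id]["inputs"].update(values)
            elif "text" in workflow[node_id]["inputs"]:
                workflow[node_id]["inputs"]["text"] = values  # bare value -> prompt text
        return workflow

    wf = {"6": {"class_type": "CLIPTextEncode",
                "inputs": {"text": "old prompt", "clip": ["4", 1]}}}
    apply_overrides(wf, {"6": "new prompt"})       # bare value form
    apply_overrides(wf, {"6": {"text": "final"}})  # dict form
    ```

    Note that other keys of the node (here the "clip" link) are preserved; only the overridden fields change.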
  • Calls register_execution_tools(mcp) within register_all_tools; running it applies the @mcp.tool() decorator and thereby registers run_workflow.
    register_execution_tools(mcp)
  • Top-level call to register_all_tools(mcp) that initiates the registration chain for all tools including run_workflow.
    register_all_tools(mcp)
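    The chain register_all_tools → register_execution_tools → @mcp.tool() can be illustrated with a toy registry standing in for FastMCP (ToyMCP and the stub run_workflow body are inventions for illustration only):

    ```python
    class ToyMCP:
        """Stand-in for FastMCP's tool registry (illustration only)."""
        def __init__(self):
            self.tools = {}

        def tool(self):
            def decorate(fn):
                self.tools[fn.__name__] = fn  # register under the function name
                return fn
            return decorate

    def register_execution_tools(mcp):
        @mcp.tool()
        def run_workflow(workflow_name: str):
            return f"would run {workflow_name}"

    def register_all_tools(mcp):
        register_execution_tools(mcp)

    mcp = ToyMCP()
    register_all_tools(mcp)  # "run_workflow" now appears in mcp.tools
    ```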
  • Internal helper _execute_workflow that submits the prompt to ComfyUI API, polls for completion, handles output modes, and returns the generated Image.
    def _execute_workflow(workflow: dict, output_node_id: str, ctx: Context | None):
        """Internal function to execute workflow and return result."""
        # Submit workflow
        status, resp_data = comfy_post("/prompt", {"prompt": workflow})
    
        if status != 200:
            error_msg = resp_data.get("error", f"status {status}")
            return f"Failed to submit workflow: {error_msg}"
    
        prompt_id = resp_data.get("prompt_id")
        if not prompt_id:
            node_errors = resp_data.get("node_errors", {})
            if node_errors:
                return f"Workflow validation failed:\n{json.dumps(node_errors, indent=2)}"
            return "Failed to get prompt_id from response"
    
        if ctx:
            ctx.info(f"Submitted: {prompt_id}")
    
        # Poll callback for progress logging
        def on_poll(attempt: int, max_attempts: int):
            if ctx and attempt % 5 == 0:
                ctx.info(f"Waiting... ({attempt}/{max_attempts})")
    
        # Poll for result
        image_data = poll_for_result(prompt_id, output_node_id, on_poll=on_poll)
    
        if image_data:
            if ctx:
                ctx.info("Image generated successfully")
    
            if settings.output_mode.lower() == "url":
                # Return URL instead of image data
                history = comfy_get(f"/history/{prompt_id}")
                if prompt_id in history:
                    outputs = history[prompt_id].get("outputs", {})
                    if output_node_id in outputs:
                        images = outputs[output_node_id].get("images", [])
                        if images:
                            url_values = urllib.parse.urlencode(images[0])
                            return get_file_url(settings.comfy_url_external, url_values)
    
            return Image(data=image_data, format="png")
    
        return "Failed to generate image. Use get_queue_status() and get_history() to debug."
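    poll_for_result itself is not shown in this reference. Based on its call site, a plausible sketch is below; the HTTP helpers are injected as parameters because their real names and endpoints (GET /history/{prompt_id}, GET /view?...) are assumptions, not confirmed by the source:

    ```python
    import time

    def poll_for_result(prompt_id, output_node_id, *, fetch_history, fetch_image,
                        on_poll=None, max_attempts=60, interval=1.0):
        """Sketch: poll history until the prompt finishes, then download the image.

        fetch_history/fetch_image stand in for the ComfyUI HTTP helpers
        (e.g. comfy_get); their exact names and signatures are assumptions.
        """
        for attempt in range(1, max_attempts + 1):
            if on_poll:
                on_poll(attempt, max_attempts)
            history = fetch_history(prompt_id)  # GET /history/{prompt_id}
            outputs = history.get(prompt_id, {}).get("outputs", {})
            images = outputs.get(output_node_id, {}).get("images", [])
            if images:
                return fetch_image(images[0])   # GET /view?filename=...
            time.sleep(interval)
        return None  # timed out
    ```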
  • Helper function is_ui_format used by run_workflow to detect and reject UI-format workflows.
    def is_ui_format(workflow: dict) -> bool:
        """Detect if workflow is in UI format (has nodes/links) vs API format (has class_type/inputs)."""
        return "nodes" in workflow or "version" in workflow
