The ComfyUI MCP Server enables comprehensive automation and management of ComfyUI workflows through Claude and other MCP clients.
Image & Video Generation
Generate images using simple text prompts or execute saved workflow files with custom inputs
Submit workflows asynchronously with status tracking and result retrieval
Support for cloud GPU inference via fal.ai integration
Workflow Management
Create, load, modify, validate, and save workflows in both API and UI formats
Add, remove, and update nodes programmatically with automatic graph-based positioning
Access pre-built templates (Flux Dev, Flux Schnell) for common use cases
Convert between API format (execution) and UI format (editor)
Schema validation against ComfyUI v0.4 workflow schema
Discovery & Documentation
List and search available nodes, models (checkpoints, LoRAs, VAEs, embeddings, ControlNet, upscale models), and extensions
Get detailed node information including inputs, outputs, parameters, and types
Browse model folders and refresh node cache after installing new nodes
System Monitoring & Control
Check server health, version, memory usage, GPU info, and queue status
View generation history with configurable limits
Cancel running jobs (all or specific prompt IDs) and clear pending queue items
Integration Features
Compatible with ComfyUI 0.3.x - 0.4.x and custom node extensions
Supports both local and remote ComfyUI servers
File and URL-based output modes for flexibility
Comfy MCP Server
MCP server for comprehensive ComfyUI workflow automation, management, and image generation.
Overview
This server provides Claude Code (and other MCP clients) with full access to ComfyUI capabilities:
System monitoring: Check server health, queue status, and history
Workflow execution: Run saved workflows or execute custom workflow dicts
Workflow management: Create, modify, save, and validate workflows
Node discovery: List available nodes, models, and their parameters
Image generation: Simple prompt-based generation or full workflow control
Key Features
✅ Schema-Validated Workflows
All generated workflows are validated against the ComfyUI v0.4 workflow schema, ensuring compatibility and correct execution. No more invalid workflow errors.
🎨 Beautiful Visual Layout
Programmatically generated workflows have coherent node positions with proper spacing and alignment. The built-in layout engine produces untangled, professional-looking workflows that are easy to read and modify in the ComfyUI editor.
Technical: Uses graph-based layout algorithms (NetworkX) to automatically position nodes, eliminating the typical spaghetti diagrams from programmatic workflow generation.
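As an illustration of the idea (not the server's actual code), a layered layout over an API-format graph can be sketched in a few lines of plain Python; the `layout` function, its gap parameters, and the depth heuristic are assumptions standing in for the NetworkX-based engine:

```python
# Minimal layered-layout sketch: assign each node a column by its
# longest distance from any source node (graph depth), then stack
# nodes within a column. A dependency-free stand-in for the
# NetworkX-based engine described above.

def layout(workflow, x_gap=300, y_gap=150):
    # Collect edges from API-format inputs: ["source_id", output_index]
    deps = {
        nid: [v[0] for v in node.get("inputs", {}).values()
              if isinstance(v, list) and len(v) == 2 and isinstance(v[0], str)]
        for nid, node in workflow.items()
    }

    depth_cache = {}

    def depth(nid):
        if nid not in depth_cache:
            parents = [p for p in deps.get(nid, []) if p in workflow]
            depth_cache[nid] = 1 + max((depth(p) for p in parents), default=-1)
        return depth_cache[nid]

    columns = {}   # column index -> number of nodes already placed in it
    positions = {}
    for nid in sorted(workflow, key=depth):
        col = depth(nid)
        row = columns.get(col, 0)
        columns[col] = row + 1
        positions[nid] = (col * x_gap, row * y_gap)
    return positions
```

Because columns follow dependency depth, a linear chain lays out left to right and parallel branches stack vertically, which is the untangled look described above.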
Prerequisites
uv - Python package manager
Running ComfyUI server (local or remote)
Workflow files exported from ComfyUI (API format)
Installation
Via PyPI (Recommended)
# Run directly with uvx
uvx comfyui-easy-mcp
# Or install globally
uv pip install comfyui-easy-mcp
From Source
git clone https://github.com/IO-AtelierTech/comfyui-mcp.git
cd comfyui-mcp
uv sync
Configuration
Environment Variables
Variable | Required | Default | Description |
COMFY_URL | No | http://localhost:8188 | ComfyUI server URL |
| No | Same as COMFY_URL | External URL for image retrieval |
COMFY_WORKFLOWS_DIR | No | - | Directory for API format workflows (execution) |
COMFY_WORKFLOWS_UI_DIR | No | - | Directory for UI format workflows (editor) |
COMFY_WORKFLOW_JSON_FILE | No | - | Default workflow file for generate_image |
PROMPT_NODE_ID | No | - | Default prompt node ID for generate_image |
OUTPUT_NODE_ID | No | - | Default output node ID |
OUTPUT_MODE | No | file | Output mode: file or url |
| No | | Max seconds to wait for workflow (1-300) |
| No | | Seconds between status polls (0.1-10.0) |
Example Configuration
export COMFY_URL=http://localhost:8188
export COMFY_WORKFLOWS_DIR=/path/to/workflows-api # API format for execution
export COMFY_WORKFLOWS_UI_DIR=/path/to/workflows-ui # UI format for editor
export COMFY_WORKFLOW_JSON_FILE=/path/to/workflows-api/default.json
export PROMPT_NODE_ID=6
export OUTPUT_NODE_ID=9
export OUTPUT_MODE=file
Claude Desktop Config
{
"mcpServers": {
"ComfyUI": {
"command": "uvx",
"args": ["comfyui-easy-mcp"],
"env": {
"COMFY_URL": "http://localhost:8188",
"COMFY_WORKFLOWS_DIR": "/path/to/workflows-api",
"COMFY_WORKFLOWS_UI_DIR": "/path/to/workflows-ui",
"COMFY_WORKFLOW_JSON_FILE": "/path/to/workflows-api/default.json",
"PROMPT_NODE_ID": "6",
"OUTPUT_NODE_ID": "9"
}
}
}
}
Or use the pre-configured .mcp.json from comfyui-template.
Available Tools
System Tools
Tool | Description |
| Get ComfyUI server health: version, memory, device info |
| Get current queue: running and pending jobs |
| Get recent generation history (1-100 entries) |
| Interrupt current generation |
| Clear queue or delete specific items |
Discovery Tools
Tool | Description |
list_nodes | List available ComfyUI nodes (optional filter) |
get_node_info | Get detailed node info: inputs, outputs, parameters |
| Search nodes by name, type, or category |
| List models in a folder |
| List available model folder types |
| List available embeddings |
list_extensions | List loaded extensions (custom node packs) |
| Refresh cached node list from ComfyUI |
Workflow Management Tools
Tool | Description |
| List available workflow files |
| Load a workflow from file |
save_workflow | Save workflow (format: "api" or "ui") |
create_workflow | Create an empty workflow structure |
add_node | Add a node to a workflow |
| Remove a node from a workflow |
| Update a node's input |
validate_workflow | Validate workflow structure and node types |
| Convert API format to UI/Litegraph format |
| List available workflow templates |
get_workflow_template | Get a pre-built workflow template |
| Generate random funny workflow name |
Execution Tools
Tool | Description |
generate_image | Generate image using default workflow (simple interface) |
run_workflow | Execute a saved workflow file |
execute_workflow | Execute an arbitrary workflow dict |
submit_workflow | Submit workflow without waiting (returns prompt_id) |
get_prompt_status | Get status of a submitted prompt |
get_result_image | Get result image from completed prompt |
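The last three tools support a submit-then-poll pattern. A minimal sketch of such a polling helper (the `get_status` callable here is a hypothetical wrapper around the status tool, not part of this server's API):

```python
import time

# Generic poll-until-complete helper for async workflow submission.
# `get_status` is any callable returning a dict with a "completed" key,
# e.g. a thin wrapper around get_prompt_status for one prompt_id.

def wait_for_prompt(get_status, timeout=120.0, poll_interval=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("completed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("workflow did not complete in time")
```

The `timeout` and `poll_interval` arguments mirror the timeout (1-300 s) and poll-interval (0.1-10.0 s) settings in the environment-variable table above.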
Workflow Formats
ComfyUI uses two workflow formats. Understanding the difference is critical:
Format | Structure | Use Case | Directory |
API | Node IDs mapped to dicts with "class_type" and "inputs" | MCP execution, automation | COMFY_WORKFLOWS_DIR |
UI | Top-level "nodes", "links", and "version" keys | ComfyUI editor only | COMFY_WORKFLOWS_UI_DIR |
IMPORTANT:
Only API format workflows can be executed via MCP (run_workflow(), execute_workflow())
UI format workflows are for loading/editing in the ComfyUI web editor
The MCP server will reject UI format workflows with an error if you try to execute them
Format Detection
The server automatically detects format by checking for "nodes" or "version" keys (UI format) vs "class_type" keys (API format).
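That heuristic can be sketched as a small standalone function (a simplified stand-in, not the server's exact implementation):

```python
# Format detection heuristic described above: UI exports carry
# top-level "nodes"/"version" keys, while API-format files map
# node IDs to dicts containing "class_type".

def detect_format(workflow: dict) -> str:
    if "nodes" in workflow or "version" in workflow:
        return "ui"
    if workflow and all(
        isinstance(v, dict) and "class_type" in v for v in workflow.values()
    ):
        return "api"
    return "unknown"
```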
Creating, Validating, and Saving Workflows
Recommended Workflow Process
# 1. CREATE: Start with empty workflow or template
wf = create_workflow()
# Or use a template:
wf = get_workflow_template("fal-flux-dev")
# 2. BUILD: Add nodes with explicit connections
wf = add_node(wf, "1", "LoadImage", {"image": "input.jpg"})
wf = add_node(wf, "2", "LumaImageToVideoNode", {
"prompt": "smooth camera motion",
"model": "ray-2",
"first_image": ["1", 0], # Connect to node "1", output index 0
"resolution": "1080p",
"duration": "5s"
})
wf = add_node(wf, "3", "SaveVideo_fal", {
"videos": ["2", 0],
"filename_prefix": "output"
})
# 3. VALIDATE: Check structure and node types before saving
validation = validate_workflow(wf)
# Returns: {"valid": true/false, "errors": [...], "warnings": [...]}
# 4. SAVE: Choose format based on purpose
# For MCP execution (API format):
save_workflow(wf, "my-workflow", format="api")  # → workflows-api/my-workflow.json
# For ComfyUI editor (UI format):
save_workflow(wf, "my-workflow", format="ui")   # → workflows-ui/my-workflow.json
Node Connections
Connections use the format ["source_node_id", output_index]:
# Connect node "1" output slot 0 to this input
wf = add_node(wf, "2", "CLIPTextEncode", {
"text": "my prompt",
    "clip": ["1", 0]  # ← Connection to node "1", first output
})
Discovering Node Parameters
Before adding a node, check its required inputs:
# Find available nodes
nodes = list_nodes(filter="Luma")  # → ["LumaImageToVideoNode", ...]
# Get node details
info = get_node_info("LumaImageToVideoNode")
# Returns:
# {
# "input": {
# "required": {
# "prompt": ["STRING", {...}],
# "model": [["ray-2", "ray-flash-2"], {...}],
# "first_image": ["IMAGE"],
# ...
# },
# "optional": {...}
# },
# "output": ["VIDEO"],
# ...
# }
Validation Errors
Common validation issues:
Error | Cause | Fix |
| Node class doesn't exist | Use list_nodes() to find valid class names |
| Required input not provided | Check get_node_info() for required inputs |
| Source node/output doesn't exist | Verify node IDs and output indices |
| Trying to execute UI format | Use API format or re-export from ComfyUI |
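A rough sketch of how such checks might work, assuming a `node_specs` mapping from class names to required input names (both the function and the mapping are illustrative, not the server's actual validator):

```python
# Illustrative validator covering the three structural checks above:
# unknown node class, missing required input, and dangling connection.

def validate(workflow, node_specs):
    errors = []
    for nid, node in workflow.items():
        cls = node.get("class_type")
        if cls not in node_specs:
            errors.append(f"node {nid}: unknown class {cls!r}")
            continue
        inputs = node.get("inputs", {})
        for name in node_specs[cls]:
            if name not in inputs:
                errors.append(f"node {nid}: missing required input {name!r}")
        for name, value in inputs.items():
            # Connections look like ["source_node_id", output_index]
            if isinstance(value, list) and len(value) == 2:
                src, _ = value
                if src not in workflow:
                    errors.append(
                        f"node {nid}: input {name!r} references missing node {src!r}"
                    )
    return {"valid": not errors, "errors": errors}
```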
Usage Examples
Simple Image Generation
# Using default workflow configuration
result = generate_image("a cyberpunk city at sunset")
Run a Named Workflow
# Execute saved workflow with custom inputs
result = run_workflow(
"flux-dev.json",
inputs={"6": {"text": "a forest landscape"}},
output_node_id="9"
)
Build and Execute Custom Workflow
# 1. Create empty workflow
wf = create_workflow()
# 2. Add nodes
wf = add_node(wf, "1", "CheckpointLoaderSimple", {
"ckpt_name": "flux-dev.safetensors"
})
wf = add_node(wf, "2", "CLIPTextEncode", {
"text": "beautiful sunset over mountains",
"clip": ["1", 1] # Connect to checkpoint's CLIP output
})
wf = add_node(wf, "3", "KSampler", {
"model": ["1", 0],
"positive": ["2", 0],
# ... other inputs
})
# 3. Validate before execution
validation = validate_workflow(wf)
if not validation.get("valid"):
print(f"Errors: {validation.get('errors')}")
# 4. Execute
result = execute_workflow(wf, output_node_id="9")
Async Workflow Submission
# Submit without waiting
submission = submit_workflow(workflow)
prompt_id = submission["prompt_id"]
# Check status later
status = get_prompt_status(prompt_id)
# Get result when ready
if status["completed"]:
    image = get_result_image(prompt_id, "9")
Discover Available Nodes
# List all fal.ai nodes
nodes = list_nodes(filter="fal")
# Get details about a specific node
info = get_node_info("RemoteCheckpointLoader_fal")
Using fal.ai Connector
The ComfyUI-fal-Connector enables cloud GPU inference via fal.ai.
# 1. Verify fal.ai extension is loaded
extensions = list_extensions()
# Should include "ComfyUI-fal-Connector"
# 2. List available fal.ai nodes
fal_nodes = list_nodes(filter="fal")
# Returns: ["RemoteCheckpointLoader_fal", "StringInput_fal", "SaveImage_fal", ...]
# 3. Get node details
info = get_node_info("RemoteCheckpointLoader_fal")
# Shows available checkpoints: flux-dev, flux-schnell, sd3.5-large, etc.
# 4. Build a fal.ai workflow
wf = create_workflow()
wf = add_node(wf, "1", "RemoteCheckpointLoader_fal", {
"ckpt_name": "flux-dev"
})
wf = add_node(wf, "2", "StringInput_fal", {
"text": "a futuristic city at sunset, cyberpunk style"
})
wf = add_node(wf, "3", "CLIPTextEncode", {
"text": ["2", 0],
"clip": ["1", 1]
})
# ... add sampler, VAE decode, save image nodes
# 5. Execute on fal.ai cloud GPUs
result = execute_workflow(wf, output_node_id="9")
Environment Setup:
export FAL_KEY=your-fal-api-key
Architecture
comfy_mcp_server/
├── __init__.py      # Entry point, FastMCP server
├── api.py           # HTTP helpers, ComfyUI API functions
├── models.py        # Pydantic models for type safety
├── settings.py      # Configuration with pydantic-settings
├── compat.py        # Version compatibility layer
└── tools/           # MCP tool implementations
    ├── system.py    # System monitoring tools
    ├── discovery.py # Node/model discovery
    ├── workflow.py  # Workflow management
    └── execution.py # Workflow execution
Development
# Install dev dependencies
uv sync --all-extras
# Run linting
uv run ruff check src/
# Run tests (requires running ComfyUI)
uv run pytest tests/
# Format code
uv run ruff format src/
ComfyUI API Coverage
Target: ComfyUI 0.3.x - 0.4.x
API Endpoint | Method | Description | Status |
/prompt | POST | Submit workflow for execution | ✅ Implemented |
/queue | GET | View running/pending jobs | ✅ Implemented |
/queue | POST | Clear queue / delete items | ✅ Implemented |
/history | GET | View generation history | ✅ Implemented |
/history/{prompt_id} | GET | Get specific job result | ✅ Implemented |
/history | POST | Delete history entries | ⚠️ Partial |
/interrupt | POST | Stop current generation | ✅ Implemented |
/object_info | GET | List all available nodes | ✅ Implemented |
/object_info/{node_class} | GET | Get node parameters/inputs | ✅ Implemented |
/models | GET | List model folders | ✅ Implemented |
/models/{folder} | GET | List models in folder | ✅ Implemented |
/system_stats | GET | Server health/resources | ✅ Implemented |
/view | GET | Retrieve generated images | ✅ Implemented |
/embeddings | GET | List available embeddings | ✅ Implemented |
/extensions | GET | List loaded extensions | ✅ Implemented |
/upload/image | POST | Upload image for workflows | ❌ Not implemented |
/upload/mask | POST | Upload mask for inpainting | ❌ Not implemented |
/free | POST | Free VRAM/memory | ❌ Not implemented |
/userdata | * | User data management | ❌ Not implemented |
Implementation Notes
Partial: History deletion is not fully exposed as a tool
Not implemented: Image/mask upload, memory management, and user data APIs are planned for future releases
Compatibility
MCP Server Version: 0.2.0
Minimum ComfyUI: 0.3.0
Maximum Tested ComfyUI: 0.4.x
Python: 3.10+
The server includes a compatibility layer that handles API differences between ComfyUI versions and provides graceful degradation.
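One common way such a layer gates features is by comparing parsed version tuples; a hypothetical sketch (not the server's actual compat.py):

```python
# Illustrative version gating: parse the version string reported by
# ComfyUI and compare tuples. Non-numeric segments like the "x" in
# "0.4.x" are simply dropped.

def parse_version(v: str) -> tuple:
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def supports(version: str, minimum: str) -> bool:
    return parse_version(version) >= parse_version(minimum)
```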
License
MIT License - see LICENSE file.
Credits
Originally forked from @lalanikarim/comfyui-mcp.