Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@ComfyUI MCP Server generate a cinematic portrait of a neon cyberpunk city at night".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
ComfyUI MCP Server
Generate and refine AI images/audio/video through natural conversation
A lightweight MCP (Model Context Protocol) server that lets AI agents generate and iteratively refine images, audio, and video using a local ComfyUI instance.
You run the server, connect a client, and issue tool calls. Everything else is optional depth.
Quick Start (2–3 minutes)
This proves everything is working.
1) Clone and set up
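The exact commands aren't reproduced in this excerpt; a typical setup, assuming a placeholder clone URL and the requirements.txt mentioned in the Troubleshooting section, would look roughly like:

```bash
# Clone the repository (replace <repository-url> with the actual URL)
git clone <repository-url> comfyui-mcp-server
cd comfyui-mcp-server

# Optional: create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the Python dependencies
pip install -r requirements.txt
```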
2) Start ComfyUI
Make sure ComfyUI is installed and running locally.
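For a source install of ComfyUI, starting it usually looks something like the following (the path is a placeholder; 8188 is ComfyUI's default port, and the Desktop app starts the server for you):

```bash
# From your ComfyUI checkout; the server listens on port 8188 by default
cd /path/to/ComfyUI
python main.py --port 8188
```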
3) Run the MCP server
From the repository directory:
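The exact entry point isn't shown in this excerpt, so the command below is only an assumption; check the repository for the actual script or module to run:

```bash
# Hypothetical entry-point name; the actual script may differ
python server.py
```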
The server listens at http://127.0.0.1:9000/mcp by default.
4) Verify it works (no AI client required)
Run the included test client:
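Assuming the path tests/clients/test_client.py mentioned later in this README, and that a prompt can be passed on the command line (otherwise a built-in default is used):

```bash
# Run the bundled test client, optionally with your own prompt
python tests/clients/test_client.py "a watercolor fox in a misty forest"
```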
test_client.py will:
connect to the MCP server
list available tools
fetch and display server defaults (width, height, steps, model, etc.)
run generate_image with your prompt (or a default)
automatically use server defaults for all other parameters
print the resulting asset information
If this step succeeds, the system is working.
Note: The test client respects server defaults configured via config files, environment variables, or set_defaults calls. Only the prompt parameter is required; all other parameters use server defaults automatically.
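If you want to script the same loop yourself, a minimal sketch using the MCP Python SDK could look like this; the tool name generate_image and its prompt parameter come from this README, while the SDK calls are generic and not taken from this project's code:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "http://127.0.0.1:9000/mcp"  # default address from this README


async def main() -> None:
    # Open a streamable-http connection to the MCP server
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # List the tools the server exposes (generate_image, regenerate, ...)
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])

            # Only the prompt is required; server defaults cover the rest
            result = await session.call_tool(
                "generate_image",
                {"prompt": "a cinematic portrait of a neon cyberpunk city at night"},
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```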
That’s it.
Use with an AI Agent (Cursor / Claude / n8n)
Once the server is running, you can connect it to an AI client.
Create a project-scoped .mcp.json file:
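The file contents aren't reproduced in this excerpt; a plausible minimal configuration, assuming the default server address http://127.0.0.1:9000/mcp used elsewhere in this README (the "comfyui" key is an arbitrary name), would be something like:

```json
{
  "mcpServers": {
    "comfyui": {
      "type": "streamable-http",
      "url": "http://127.0.0.1:9000/mcp"
    }
  }
}
```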
Note for WSL Users: If you are running Cursor on Windows and the server inside WSL, use the streamable-http method above with the server running in your WSL terminal. Alternatively, for stdio mode, use wsl as the command.
Note: Some clients use "type": "http" instead of "streamable-http". Both work with this server. If auto-discovery doesn't work, try changing the type field.
Restart your AI client. You can now call tools such as:
generate_image, view_image, regenerate, get_job, and list_assets
This is the primary intended usage mode.
What You Can Do After It Works
Once you’ve confirmed the server runs and a client can connect, the system supports:
Iterative refinement via regenerate (no re-prompting; see the sketch after this list)
Explicit asset identity for reliable follow-ups
Job polling and cancellation for long-running generations
Optional image injection into the AI’s context (view_image)
Auto-discovered ComfyUI workflows with parameter exposure
Configurable defaults to avoid repeating common settings
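To make the loop concrete, here is a hedged sketch of a follow-up helper, assuming an already-initialized ClientSession like the one in the Quick Start sketch above; the asset_id argument and the tool names come from this README, while the steps override is only an illustrative guess:

```python
from mcp import ClientSession


async def refine_image(session: ClientSession, asset_id: str) -> None:
    """Sketch of the iterate-and-refine loop using tools named in this README."""
    # Optionally let the agent inspect the current result (images only)
    await session.call_tool("view_image", {"asset_id": asset_id})

    # Replay the stored workflow with overrides instead of re-prompting.
    # The "steps" parameter name is an assumption for illustration.
    result = await session.call_tool("regenerate", {"asset_id": asset_id, "steps": 40})
    print(result.content)

    # Poll the ComfyUI queue to see running/pending jobs
    status = await session.call_tool("get_queue_status", {})
    print(status.content)
```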
Everything below builds on the same basic loop you just tested.
Migration Notes (Previous Versions)
If you’ve used earlier versions of this project, a few things have changed.
What’s the Same
You still run a local MCP server that delegates execution to ComfyUI
Workflows are still JSON files placed in the workflows/ directory
Image generation behavior is unchanged at its core
What’s New
Streamable HTTP transport replaces the older WebSocket-based approach
Explicit job management (get_job, get_queue_status, cancel_job)
Asset identity instead of ad-hoc URLs (stable across hostname changes)
Iteration support via regenerate (replay with parameter overrides)
Optional visual feedback for agents via view_image
Configurable defaults to avoid repeating common parameters
What Changed Conceptually
Earlier versions were a thin request/response bridge. The current version is built around iteration and stateful control loops.
You can still generate an image with a single call, but you now have the option to:
refer back to specific outputs
refine results without re-specifying everything
poll and cancel long-running jobs
let AI agents inspect generated images directly
Looking for the Old Behavior?
If you want the minimal, single-shot behavior from earlier versions:
run tests/clients/test_client.py (this mirrors the original usage pattern)
call generate_image with just a prompt (server defaults handle the rest)
ignore the additional tools
No migration is required unless you want the new capabilities.
Available Tools
Generation Tools
generate_image: Generate images (requires prompt)
generate_song: Generate audio (requires tags and lyrics)
regenerate: Regenerate an existing asset with optional parameter overrides (requires asset_id)
Viewing Tools
view_image: View generated images inline (images only, not audio/video)
Job Management Tools
get_queue_status: Check ComfyUI queue state (running/pending jobs) - provides async awareness
get_job: Poll job completion status by prompt_id - check if a job has finished
list_assets: Browse recently generated assets - enables AI memory and iteration
get_asset_metadata: Get full provenance and parameters for an asset - includes workflow history
cancel_job: Cancel a queued or running job
Configuration Tools
list_models: List available ComfyUI models
get_defaults: Get current default values
set_defaults: Set default values (with optional persistence)
Node Introspection Tools
Purpose: Enable AI agents to understand, troubleshoot, and build ComfyUI workflows. When a user provides a workflow JSON, agents can use these tools to validate node configurations, discover compatible nodes, and intelligently extend or fix workflows.
list_available_nodes: List all ComfyUI nodes with categories and metadata - discover what's available in the environment
get_node_definition: Get detailed schema for a specific node (inputs, outputs, types) - validate and understand node requirements
search_nodes: Search for nodes by name, category, or output type - find compatible nodes for workflow construction
get_node_inputs: Get input parameter specifications for a node - troubleshoot missing or incorrect parameters
Common Use Cases:
Troubleshooting: User provides a broken workflow → agent uses get_node_definition to validate node inputs and identify issues
Extension: User wants to add upscaling → agent uses search_nodes(output_type="IMAGE") to find compatible nodes
Building: User describes the desired workflow → agent uses list_available_nodes and search_nodes to construct a workflow from scratch
Workflow Tools
list_workflows: List all available workflows
run_workflow: Run any workflow with custom parameters
Publish Tools
get_publish_info: Show publish status (detected project root, publish dir, ComfyUI output root, and any missing setup)
set_comfyui_output_root: Set ComfyUI output directory (recommended for Comfy Desktop / nonstandard installs; persisted across restarts)
publish_asset: Publish a generated asset into the project's web directory with deterministic compression (default 600KB)
Publish Notes:
Session-scoped: asset_ids are valid only for the current server session; a restart invalidates them.
Zero-config in common cases: the publish dir is auto-detected (public/gen, static/gen, or assets/gen); if the ComfyUI output directory can't be detected, set it once via set_comfyui_output_root.
Two modes: Demo (explicit filename) and Library (auto filename + manifest update). In library mode, manifest_key is required.
Manifest: updated only when manifest_key is provided.
Compression: deterministic ladder to meet size limits; fails with a clear error if it can't.
Quick Start:
Example agent conversation flow:
User: "Generate a hero image for my website and publish it as hero.webp"
Agent: Checks publish configuration
Calls get_publish_info() → sees status "ready"
Agent: Generates image
Calls generate_image(prompt="a hero image for a website") → gets asset_id
Agent: Publishes asset
Calls publish_asset(asset_id="...", target_filename="hero.webp") → success
User: "Now generate a logo and add it to the manifest as 'site-logo'"
Agent: Generates and publishes with manifest
Calls generate_image(prompt="a modern logo") → gets asset_id
Calls publish_asset(asset_id="...", manifest_key="site-logo") → auto-generates filename, updates manifest
See docs/HOW_TO_TEST_PUBLISH.md for detailed usage and testing instructions.
Custom Workflows
Add custom workflows by placing JSON files in the workflows/ directory. Workflows are automatically discovered and exposed as MCP tools.
Workflow Placeholders
Use PARAM_* placeholders in workflow JSON to expose parameters:
PARAM_PROMPT → prompt: str (required)
PARAM_INT_STEPS → steps: int (optional)
PARAM_FLOAT_CFG → cfg: float (optional)
Example:
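The original example isn't reproduced in this excerpt; the fragment below is a minimal sketch in ComfyUI's API JSON format, with illustrative node ids and only the relevant inputs shown (a real workflow needs the complete node graph):

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "steps": "PARAM_INT_STEPS",
      "cfg": "PARAM_FLOAT_CFG",
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "PARAM_PROMPT",
      "clip": ["4", 1]
    }
  }
}
```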
The tool name is derived from the filename (e.g., my_workflow.json → my_workflow tool).
Configuration
The server supports configurable defaults to avoid repeating common parameters. Defaults can be set via:
Runtime defaults: Use the set_defaults tool (ephemeral, lost on restart)
Config file: ~/.config/comfy-mcp/config.json (persistent; see the sketch below)
Environment variables: COMFY_MCP_DEFAULT_*-prefixed variables
Defaults are resolved in priority order: per-call values → runtime defaults → config file → environment variables → hardcoded defaults.
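As an illustration only, a persistent config file at ~/.config/comfy-mcp/config.json might look like the sketch below. The top-level key and accepted fields are assumptions; the fields shown (width, height, steps, model) mirror the defaults mentioned in the Quick Start, and docs/REFERENCE.md has the authoritative schema.

```json
{
  "defaults": {
    "width": 1024,
    "height": 1024,
    "steps": 20,
    "model": "sd_xl_base_1.0.safetensors"
  }
}
```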
For complete configuration details, see docs/REFERENCE.md.
Detailed Reference
Complete parameter lists, return schemas, configuration options, and advanced workflow metadata are documented in:
API Reference - Complete tool reference, parameters, return values, and configuration
Architecture - Design decisions and system overview
Project Structure
Notes
The server binds to localhost by default. Do not expose it publicly without authentication or a reverse proxy.
Ensure your models exist in <ComfyUI_dir>/models/checkpoints/
Server uses streamable-http transport (HTTP-based, not WebSocket)
Workflows are auto-discovered - no code changes needed
Assets expire after 24 hours (configurable)
view_image only supports images (PNG, JPEG, WebP, GIF)
Asset identity uses (filename, subfolder, type) instead of a URL for robustness
Full workflow history is stored for provenance and reproducibility
regenerate uses stored workflow data to recreate assets with parameter overrides
Session isolation: list_assets can filter by session for clean AI agent context
Troubleshooting
Server won't start:
Check ComfyUI is running on port 8188 (default)
Verify Python 3.8+ is installed (python --version)
Check all dependencies are installed: pip install -r requirements.txt
Check server logs for specific error messages
Client can't connect:
Verify server shows "Server running at http://127.0.0.1:9000/mcp" in the console
Test the server directly: curl http://127.0.0.1:9000/mcp (should return an MCP response)
Check that .mcp.json is in the project root (or the correct location for your client)
Try both "type": "streamable-http" and "type": "http" - both are supported
For Cursor-specific issues, see docs/MCP_CONFIG_README.md
Tools not appearing:
Check the workflows/ directory has JSON files with PARAM_* placeholders
Check server logs for workflow parsing errors
Verify ComfyUI has required custom nodes installed (if using custom workflows)
Restart the MCP server after adding new workflows
Asset not found errors:
Assets expire after 24 hours by default (configurable via COMFY_MCP_ASSET_TTL_HOURS)
Assets are lost on server restart (ephemeral by design)
Use get_asset_metadata to verify an asset exists before using regenerate
Check server logs to see if the asset was registered successfully
Known Limitations (v1.0)
Ephemeral asset registry: asset_id references are only valid while the MCP server is running (and until TTL expiry). After a restart, previously issued asset_ids can’t be resolved, and regenerate will fail for those assets.
Contributing
Issues and pull requests are welcome! See CONTRIBUTING.md for development guidelines.
Acknowledgements
@venetanji - streamable-http foundation & PARAM_* system
Maintainer
License
Apache License 2.0