- Community platform integration for user support and updates related to the MCP server
- Containerized deployment of the MCP server services for easier setup and management
- Integration with Flux models for advanced image generation through ComfyUI workflows
- Integration with Ollama for local LLM support, letting AI agents use locally hosted models to drive ComfyUI workflows and AIGC operations
- Support for OpenAI models through the LiteLLM framework, letting AI agents use OpenAI's capabilities for multimodal content generation via ComfyUI workflows
https://github.com/user-attachments/assets/65422cef-96f9-44fe-a82b-6a124674c417
## Recent Updates
- 2025-09-29: Added RunningHub cloud ComfyUI support, enabling workflow execution without a local GPU or ComfyUI environment
- 2025-09-03: Refactored the architecture from three services into a unified application; added CLI tool support; published to PyPI
- 2025-08-12: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more
## Features
- **Full-modal Support**: TISV (Text, Image, Sound/Speech, Video) full-modal conversion and generation
- **Dual Execution Modes**: Local self-hosted ComfyUI or the RunningHub cloud ComfyUI service; choose flexibly based on your needs
- **ComfyUI Ecosystem**: Built on ComfyUI, inheriting all capabilities of the open ComfyUI ecosystem
- **Zero-code Development**: Defines and implements the Workflow-as-MCP-Tool solution, enabling zero-code development and dynamic addition of new MCP Tools
- **MCP Server**: Based on the MCP protocol, supporting integration with any MCP client (including but not limited to Cursor, Claude Desktop, etc.)
- **Web Interface**: Built on the Chainlit framework, inheriting Chainlit's UI controls and supporting integration with additional MCP Servers
- **One-click Deployment**: PyPI installation, CLI commands, Docker, and other deployment methods; ready to use out of the box
- **Simplified Configuration**: Environment-variable-based configuration, simple and intuitive
- **Multi-LLM Support**: Works with mainstream LLMs, including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more
## Project Architecture
Pixelle MCP adopts a unified architecture that integrates the MCP server, web interface, and file services into one application, providing:
- **Web Interface**: Chainlit-based chat interface supporting multimodal interaction
- **MCP Endpoint**: For external MCP clients (such as Cursor and Claude Desktop) to connect
- **File Service**: Handles file upload, download, and storage
- **Workflow Engine**: Supports both local ComfyUI and cloud ComfyUI (RunningHub) workflows, and automatically converts workflows into MCP tools
## Quick Start
Choose the deployment method that best suits your needs, from simple to complex:

### Method 1: One-click Experience
> Zero-configuration startup, perfect for a quick trial and testing
#### Temporary Run
View the uvx CLI Reference →

#### Persistent Installation
View the pip CLI Reference →

After startup, a configuration wizard automatically guides you through execution-engine selection (ComfyUI/RunningHub) and LLM configuration.
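The quick-start commands referenced above are collapsed in the original README. A minimal sketch, assuming the package is published on PyPI under the name `pixelle` (check the project's PyPI listing for the actual package and command names):

```shell
# Temporary run: uvx fetches and runs the CLI without a permanent install
# (the package/command name `pixelle` is an assumption)
uvx pixelle

# Persistent installation via pip, then start the service
pip install pixelle
pixelle
```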
### Method 2: Local Development Deployment
> Supports custom workflows and secondary development

#### 1. Get the Source Code
#### 2. Start the Service
View the Complete CLI Reference →
#### 3. Add Custom Workflows (Optional)
> **Important**: Make sure to test workflows in ComfyUI first to ensure they run properly; otherwise execution will fail.
### Method 3: Docker Deployment
> Suitable for production environments and containerized deployment

#### 1. Prepare the Configuration
#### 2. Start the Container
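The Docker commands are collapsed in the original README. A hypothetical invocation, in which the image name and the `.env.example` file are assumptions (use whatever the repository actually provides):

```shell
# Prepare configuration (see the repository for the real template file)
cp .env.example .env

# Start the container, exposing the default port 9004
# (the image name below is hypothetical)
docker run -d --name pixelle-mcp -p 9004:9004 --env-file .env pixelle/pixelle-mcp
```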
### Access the Services
Regardless of which method you use, after startup you can access:
- **Web Interface**: http://localhost:9004 (the default username and password are the same)
- **MCP Endpoint**: http://localhost:9004/pixelle/mcp (for MCP clients such as Cursor and Claude Desktop to connect)

> **Port Configuration**: The default port is 9004 and can be customized via the environment variable `PORT=your_port`.
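As a sketch of connecting an MCP client, a Cursor-style `mcpServers` entry pointing at the endpoint above might look like the following; the exact schema varies by client, so treat the shape as an assumption and consult your client's documentation:

```json
{
  "mcpServers": {
    "pixelle": {
      "url": "http://localhost:9004/pixelle/mcp"
    }
  }
}
```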
## Initial Configuration
On first startup, the system automatically detects the configuration status:
- **Execution Engine Selection**: Choose between local ComfyUI and the RunningHub cloud service
- **LLM Configuration**: Configure at least one LLM provider (OpenAI, Ollama, etc.)
- **Workflow Directory**: The system automatically creates the necessary directory structure
### RunningHub Cloud Mode Advantages
- **Zero Hardware Requirements**: No local GPU or high-performance hardware needed
- **No Environment Setup**: No need to install and configure ComfyUI locally
- **Ready to Use**: Register, get an API key, and start immediately
- **Stable Performance**: Professional cloud infrastructure ensures stable execution
- **Auto Scaling**: Automatically handles concurrent requests and resource allocation
### Local ComfyUI Mode Advantages
- **Full Control**: Complete control over the execution environment and model versions
- **Privacy Protection**: All data processing happens locally, ensuring data privacy
- **Custom Models**: Supports custom models and nodes not available in the cloud
- **No Network Dependency**: Works offline without an internet connection
- **Cost Control**: No cloud service fees for high-frequency usage

> Need help? Join the community groups for support (see the Community section below).
## Add Your Own MCP Tool
One workflow = one MCP Tool. Two addition methods are supported:
- **Method 1: Local ComfyUI Workflow**: export API-format workflow files
- **Method 2: RunningHub Workflow ID**: use cloud workflow IDs directly
### 1. Add the Simplest MCP Tool
1. Build a workflow in ComfyUI for image Gaussian blur (get it here), then set the `LoadImage` node's title to `$image.image!` as shown below.
2. Export it as an API-format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version (get it here).
3. Copy the exported API workflow file (it must be API format), paste it on the web page, and let the LLM add this Tool.

After sending, the LLM will automatically convert this workflow into an MCP Tool.

Now refresh the page and send any image to perform Gaussian blur processing via the LLM.

### 2. Add a Complex MCP Tool
The steps are the same as above; only the workflow differs (download the workflow: UI format and API format).

> Note: When using RunningHub, you only need to enter the corresponding workflow ID; there is no need to download and upload workflow files.
## ComfyUI Workflow Custom Specification
### Workflow Format
The system supports standard ComfyUI workflows. Design your workflow on the canvas, export it in API format, and use the special syntax in node titles to define parameters and outputs.
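For orientation, ComfyUI's API-format export is a flat JSON object keyed by node id, and recent exports keep each node's title under `_meta`, which is where the parameter syntax described below lives. An abbreviated, hypothetical two-node example:

```json
{
  "1": {
    "class_type": "LoadImage",
    "inputs": { "image": "example.png" },
    "_meta": { "title": "$image.image!:Input image URL" }
  },
  "2": {
    "class_type": "SaveImage",
    "inputs": { "images": ["1", 0] }
  }
}
```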
### Parameter Definition Specification
In the ComfyUI canvas, double-click a node's title to edit it, and use the following DSL syntax to define parameters (as the examples below show, the general pattern is `$param_name.field_name!:description`, with an optional `~` before the field name):

- `param_name`: the parameter name of the generated MCP tool function
- `~`: optional; marks the parameter for URL upload processing (the URL is downloaded and a relative path is returned)
- `field_name`: the corresponding input field in the node
- `!`: marks this parameter as required
- `description`: a description of the parameter
**Examples:**

- **Required parameter**: Set the `LoadImage` node title to `$image.image!:Input image URL`. This creates a required parameter named `image`, mapped to the node's `image` field.
- **URL upload processing**: Set any node title to `$image.~image!:Input image URL`. This creates a required parameter named `image`; the system automatically downloads the URL, uploads it to ComfyUI, and returns a relative path.
  > Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo`, and other such nodes have this functionality built in, so the `~` marker is not needed for them.
- **Optional parameter**: Set the `EmptyLatentImage` node title to `$width.width:Image width, default 512`. This creates an optional parameter named `width`, mapped to the node's `width` field, with a default value of 512.
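The examples above can be tied together with a small sketch of how such node titles could be parsed. This is an illustrative reimplementation under the stated grammar, not the project's actual parser:

```python
import re

# Hypothetical parser for the node-title DSL described above.
# Grammar: $param_name.[~]field_name[!][:description]
TITLE_RE = re.compile(
    r"^\$(?P<param>\w+)\.(?P<url>~)?(?P<field>\w+)(?P<required>!)?(?::(?P<desc>.*))?$"
)

def parse_title(title: str):
    """Return the parameter spec encoded in a node title, or None if the
    title does not use the DSL."""
    m = TITLE_RE.match(title.strip())
    if not m:
        return None
    return {
        "param": m.group("param"),            # MCP tool parameter name
        "field": m.group("field"),            # node input field to bind
        "required": m.group("required") == "!",
        "url_upload": m.group("url") == "~",  # download URL, upload to ComfyUI
        "description": (m.group("desc") or "").strip(),
    }

print(parse_title("$image.image!:Input image URL"))
print(parse_title("$width.width:Image width, default 512"))
print(parse_title("Just a regular title"))  # not DSL: None
```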
### Type Inference Rules
The system automatically infers a parameter's type from the current value of the node field:
- `int`: integer values (e.g. 512, 1024)
- `float`: floating-point values (e.g. 1.5, 3.14)
- `bool`: boolean values (true, false)
- `str`: string values (the default type)
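A minimal sketch of this inference rule, assuming field values arrive as native JSON types; note that in Python, `bool` must be checked before `int`, since `bool` is a subclass of `int`:

```python
def infer_type(value) -> str:
    """Map a node field's current value to the documented parameter types."""
    if isinstance(value, bool):   # check bool first: bool is a subclass of int
        return "bool"
    if isinstance(value, int):
        return "int"
    if isinstance(value, float):
        return "float"
    return "str"                  # everything else falls back to string

print(infer_type(512), infer_type(1.5), infer_type(True), infer_type("hello"))
# → int float bool str
```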
### Output Definition Specification
#### Method 1: Auto-detect Output Nodes
The system automatically detects the following common output nodes:
- `SaveImage`: image save node
- `SaveVideo`: video save node
- `SaveAudio`: audio save node
- `VHS_SaveVideo`: VHS video save node
- `VHS_SaveAudio`: VHS audio save node
#### Method 2: Manual Output Marking
Usually used when a workflow has multiple outputs. Use `$output.var_name` in any node's title to mark it as an output:
- Set the node title to `$output.result`
- The system will use this node's output as the tool's return value
### Tool Description Configuration (Optional)
You can add a node titled `MCP` to the workflow to provide a tool description:
1. Add a `String (Multiline)` or similar text node (it must have a single string property, and the node field should be one of: `value`, `text`, `string`)
2. Set the node title to `MCP`
3. Enter a detailed tool description in the value field
### Important Notes
- **Parameter Validation**: Optional parameters (without `!`) must have default values set in the node
- **Node Connections**: Fields already connected to other nodes will not be parsed as parameters
- **Tool Naming**: The exported file name is used as the tool name, so use meaningful English names
- **Detailed Descriptions**: Provide detailed parameter descriptions for a better user experience
- **Export Format**: Export in API format only; do not export the UI format
## Community
Scan the QR codes below to join our communities for the latest updates and technical support:

Discord Community | WeChat Group
## How to Contribute
We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

### Report Issues
- Submit bug reports on the Issues page
- Search for similar issues before submitting
- Describe the reproduction steps and environment in detail

### Feature Suggestions
- Submit feature requests in Issues
- Describe the feature you want and its use case
- Explain how it improves the user experience
### Code Contributions
#### Contribution Process
1. Fork this repo to your GitHub account
2. Create a feature branch: `git checkout -b feature/your-feature-name`
3. Develop and add corresponding tests
4. Commit your changes: `git commit -m "feat: add your feature"`
5. Push to your repo: `git push origin feature/your-feature-name`
6. Create a Pull Request to the main repo
#### Code Style
- Python code follows the PEP 8 style guide
- Add appropriate documentation and comments for new features

### Contribute Workflows
- Share your ComfyUI workflows with the community
- Submit tested workflow files
- Add usage instructions and examples for your workflows
## Acknowledgements
Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project.
## License
This project is released under the MIT License (see LICENSE; SPDX-License-Identifier: MIT).
An AIGC solution based on the MCP protocol that seamlessly turns ComfyUI workflows into MCP Tools with zero code, letting LLMs and ComfyUI join forces on image generation tasks. Models such as Qwen-Image, FLUX, and FLUX-Krea can be used for image generation.