Enables AI image generation using FLUX models (schnell and dev) via ComfyUI, with support for fp8 quantization, batch processing, 4x upscaling, and background removal.
Downloads FLUX models and related AI models from Hugging Face repositories for image generation workflows.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type "@" followed by the MCP server name and your instructions, e.g., "@MCP ComfyUI Flux generate a cyberpunk cityscape with neon lights".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MCP ComfyUI Flux - Optimized Docker Solution
A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.
Features
Optimized Performance: PyTorch 2.5.1 with native RMSNorm support
Efficient Images: 25% smaller Docker images (10.9GB vs 14.6GB)
Fast Rebuilds: BuildKit cache mounts for rapid iterations
FLUX Models: Supports schnell (4-step) and dev models with fp8 quantization
MCP Integration: Works seamlessly with Claude Desktop
GPU Acceleration: Automatic NVIDIA GPU detection and CUDA 12.1
Background Removal: Built-in RMBG-2.0 for transparent backgrounds
Image Upscaling: 4x upscaling with UltraSharp/AnimeSharp models
Production Ready: Health checks, auto-recovery, extensive logging
Quick Start
# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux
# Run the automated installer
./install.sh
# Or build manually with the optimized build script
./build.sh --start
# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services
System Requirements
Minimum Requirements
OS: Linux, macOS, Windows 10+ (WSL2)
CPU: 4 cores
RAM: 16GB (20GB for WSL2)
Storage: 50GB free space
Docker: 20.10+
Docker Compose: 2.0+ or 1.29+ (legacy)
Recommended Requirements
CPU: 8+ cores
RAM: 32GB
GPU: NVIDIA RTX 3090/4090 (12GB+ VRAM)
Storage: 100GB free space
CUDA: 12.1+ with NVIDIA Container Toolkit
WSL2 Specific (Windows)
# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
Installation
Prerequisites
Install Docker:
# Ubuntu/Debian
curl -fsSL https://get.docker.com | bash
# macOS
brew install docker docker-compose
# Windows - Install Docker Desktop
Install NVIDIA Container Toolkit (for GPU):
# Ubuntu/Debian
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Automated Installation
# Standard installation
./install.sh
# Non-interactive installation
./install.sh --yes
# CPU-only mode
./install.sh --cpu-only
# With specific models
./install.sh --models minimal # or all/none/auto
# Debug mode
./install.sh --debug
Build Script Options
# Build only
./build.sh
# Build and start
./build.sh --start
# Build with cleanup
./build.sh --start --cleanup
# Rebuild without cache
./build.sh --no-cache
MCP Tools
Available Tools in Claude Desktop
1. generate_image
Generate images using the FLUX schnell fp8 model (optimized defaults).
// Parameters
{
"prompt": "a majestic mountain landscape, golden hour", // Required
"negative_prompt": "blurry, low quality", // Optional
"width": 1024, // Default: 1024
"height": 1024, // Default: 1024
"steps": 4, // Default: 4 (schnell optimized)
"cfg_scale": 1.0, // Default: 1.0 (schnell optimized)
"seed": -1, // Default: -1 (random)
"batch_size": 1 // Default: 1 (max: 8)
}
// Example usage
generate_image({
prompt: "cyberpunk city at night, neon lights, detailed",
steps: 4,
seed: 42
})
2. upscale_image
Upscale images to 4x resolution using AI models.
// Parameters
{
"image_path": "flux_output_00001_.png", // Required
"model": "ultrasharp", // Options: "ultrasharp", "animesharp"
"scale_factor": 1.0, // Additional scaling (0.5-2.0)
"content_type": "general" // Auto-select model based on content
}
// Example usage
upscale_image({
image_path: "output/my_image.png",
model: "ultrasharp"
})
3. remove_background
Remove background using RMBG-2.0 AI model.
// Parameters
{
"image_path": "output/image.png", // Required
"alpha_matting": true, // Better edge quality (default: true)
"output_format": "png" // Options: "png", "webp"
}
// Example usage
remove_background({
image_path: "flux_output_00001_.png"
})
4. check_models
Verify available models in ComfyUI.
// No parameters required
check_models()
5. connect_comfyui / disconnect_comfyui
Manage ComfyUI connection (usually auto-connects).
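Neither tool documents any parameters; assuming they follow the same no-argument call style as check_models, usage would look like:
// Example usage (no-argument calls assumed)
connect_comfyui()
// ... generate images ...
disconnect_comfyui()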
MCP Configuration
Add to Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows):
{
"mcpServers": {
"comfyui-flux": {
"command": "wsl.exe",
"args": [
"bash", "-c",
"cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
]
}
}
}
For macOS/Linux:
{
"mcpServers": {
"comfyui-flux": {
"command": "docker",
"args": [
"exec", "-i", "mcp-comfyui-flux-mcp-server-1",
"node", "/app/src/index.js"
]
}
}
}
Docker Management
Service Commands
# Start services
docker-compose -p mcp-comfyui-flux up -d
# Stop services
docker-compose -p mcp-comfyui-flux down
# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui
# Check status
docker-compose -p mcp-comfyui-flux ps
# Restart services
docker-compose -p mcp-comfyui-flux restart
Container Access
# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash
# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh
# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"
Health Monitoring
# Full health check
./scripts/health-check.sh
# Check ComfyUI API
curl http://localhost:8188/system_stats
# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
Advanced Features
Performance Optimizations
The optimized build includes:
PyTorch 2.5.1: Latest stable with native RMSNorm support
BuildKit Cache Mounts: Reduces I/O operations in WSL2
FP8 Quantization: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)
Multi-stage Builds: Separates build and runtime dependencies
Compiled Python: Pre-compiled bytecode for faster startup
FLUX Model Configurations
Schnell (Default - Fast)
Steps: 4 (optimized for schnell)
CFG Scale: 1.0 (works best with low guidance)
Scheduler: simple
Generation Time: ~2-4 seconds per image
VRAM Usage: ~10GB base + 1GB per batch
Dev (High Quality)
Steps: 20-50
CFG Scale: 7.0
Scheduler: normal/karras
Requires: Hugging Face authentication
VRAM Usage: ~12-16GB
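These dev settings map directly onto the generate_image parameters listed above. As an illustration only (selecting the dev checkpoint itself is not covered by those parameters), a dev-style call might look like:
// Dev-style settings (illustrative; assumes the dev model is active)
generate_image({
  prompt: "portrait of an old sailor, dramatic lighting, highly detailed",
  steps: 20,       // dev range: 20-50
  cfg_scale: 7.0,  // dev guidance, vs 1.0 for schnell
  seed: 1234
})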
Batch Generation
Generate multiple images efficiently:
generate_image({
prompt: "fantasy landscape",
batch_size: 4 // Generates 4 variations in parallel
})
Custom Nodes
Included custom nodes:
ComfyUI-Manager: Node management and updates
ComfyUI-KJNodes: Advanced processing nodes
ComfyUI-RMBG: Background removal (31 nodes)
Troubleshooting
Common Issues
GPU Not Detected
# Verify NVIDIA driver
nvidia-smi
# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Out of Memory
# Reduce batch size
batch_size: 1
# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1
# Adjust PyTorch memory
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
WSL2 Specific Issues
# If Docker/WSL2 crashes with I/O errors
# Avoid recursive chown on large directories
# Use the optimized Dockerfile which handles this
# Increase WSL2 memory in .wslconfig
memory=20GB
# Reset WSL2 if needed
wsl --shutdown
Port Conflicts
# Check what's using port 8188
lsof -i :8188 # macOS/Linux
netstat -ano | findstr :8188 # Windows
# Use different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d
Log Locations
Installation: install.log
Docker builds: docker-compose logs
ComfyUI: inside the container at /app/ComfyUI/user/comfyui.log
MCP Server: docker logs mcp-comfyui-flux-mcp-server-1
Architecture
System Overview
┌─────────────────────────────────────────┐
│       Claude Desktop (MCP Client)       │
└─────────────┬───────────────────────────┘
              │ docker exec stdio
┌─────────────▼───────────────────────────┐
│          MCP Server Container           │
│  • Node.js 20 Alpine (581MB)            │
│  • MCP Protocol Implementation          │
│  • Auto-connects to ComfyUI             │
└─────────────┬───────────────────────────┘
              │ WebSocket (port 8188)
┌─────────────▼───────────────────────────┐
│            ComfyUI Container            │
│  • Ubuntu 22.04 + CUDA 12.1             │
│  • Python 3.11 + PyTorch 2.5.1          │
│  • FLUX schnell fp8 (4.5GB)             │
│  • Custom nodes (KJNodes, RMBG)         │
│  • Optimized image size: 10.9GB         │
└─────────────────────────────────────────┘
Key Improvements
Docker Optimization
Multi-stage builds reduce image size by 25%
BuildKit cache mounts speed up rebuilds
No Python venv (Docker IS the isolation)
Model Configuration
FLUX schnell fp8: 4.5GB (vs 11GB fp16)
T5-XXL fp8: 4.9GB text encoder
CLIP-L: 235MB text encoder
VAE: 320MB decoder
Performance
4-step generation in 2-4 seconds
Batch processing up to 8 images
Native RMSNorm in PyTorch 2.5.1
High VRAM mode for 24GB+ GPUs
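For context on the WebSocket link shown in the system overview, here is a minimal illustrative sketch of queueing a workflow against ComfyUI's standard HTTP/WebSocket API on port 8188. It is not the project's actual client (that lives in src/comfyui-client.js); it assumes the "ws" npm package, Node 18+ for the built-in fetch, and a hypothetical API-format workflow file name.
// Illustrative sketch only -- the real client is src/comfyui-client.js.
// Assumptions: "ws" npm package installed, Node 18+ fetch available,
// and "flux_schnell_fp8.json" is a hypothetical API-format workflow file.
import WebSocket from "ws";
import { randomUUID } from "node:crypto";
import { readFile } from "node:fs/promises";

const HOST = "127.0.0.1:8188";
const clientId = randomUUID();

async function queueWorkflow(workflowPath) {
  const workflow = JSON.parse(await readFile(workflowPath, "utf8"));

  // Open the WebSocket first so no progress messages are missed.
  const ws = new WebSocket(`ws://${HOST}/ws?clientId=${clientId}`);
  await new Promise((resolve) => ws.once("open", resolve));

  // Queue the workflow graph over HTTP; ComfyUI returns a prompt_id.
  const res = await fetch(`http://${HOST}/prompt`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: workflow, client_id: clientId }),
  });
  const { prompt_id } = await res.json();

  // ComfyUI signals completion with an "executing" message whose node is null.
  await new Promise((resolve) => {
    ws.on("message", (raw, isBinary) => {
      if (isBinary) return; // skip binary preview frames
      const msg = JSON.parse(raw.toString());
      if (msg.type === "executing" && msg.data.prompt_id === prompt_id && msg.data.node === null) {
        resolve();
      }
    });
  });
  ws.close();

  // The history endpoint lists the files written under output/.
  const history = await (await fetch(`http://${HOST}/history/${prompt_id}`)).json();
  return history[prompt_id].outputs;
}

queueWorkflow("./src/workflows/flux_schnell_fp8.json").then(console.log);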
Directory Structure
mcp-comfyui-flux/
├── src/                    # MCP server source
│   ├── index.js            # MCP protocol handler
│   ├── comfyui-client.js   # WebSocket client
│   └── workflows/          # ComfyUI workflows
├── models/                 # Model storage
│   ├── unet/               # FLUX models (fp8)
│   ├── clip/               # Text encoders
│   ├── vae/                # VAE models
│   └── upscale_models/     # Upscaling models
├── output/                 # Generated images
├── scripts/                # Utility scripts
├── docker-compose.yml      # Service orchestration
├── Dockerfile.comfyui      # Optimized ComfyUI
├── Dockerfile.mcp          # MCP server
├── requirements.txt        # Python dependencies
├── build.sh                # Build script
└── install.sh              # Automated installer
Security
Local Execution: All processing happens locally
No External APIs: Except model downloads from Hugging Face
Container Isolation: Services run in isolated containers
Non-root Execution: Containers run as non-root user
Token Security: Stored in .env (gitignored)
Additional Documentation
CLAUDE.md - Claude Code development guide
ARCHITECTURE.md - Technical architecture details
API.md - Complete MCP API reference
TROUBLESHOOTING.md - Detailed troubleshooting
Contributing
Contributions welcome! Please:
Fork the repository
Create a feature branch
Make your changes
Submit a pull request
License
MIT License - see LICENSE file for details.
Acknowledgments
ComfyUI - The workflow engine
Black Forest Labs - FLUX model creators
Anthropic - MCP protocol and Claude
NVIDIA - CUDA and GPU support
Made with ❤️ for efficient AI image generation