Enables AI image generation using FLUX models (schnell and dev) via ComfyUI, with support for fp8 quantization, batch processing, 4x upscaling, and background removal.
Downloads FLUX models and related AI models from Hugging Face repositories for image generation workflows.
# MCP ComfyUI Flux - Optimized Docker Solution

A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.
## Features

- **Optimized Performance:** PyTorch 2.5.1 with native RMSNorm support
- **Efficient Images:** 25% smaller Docker images (10.9GB vs 14.6GB)
- **Fast Rebuilds:** BuildKit cache mounts for rapid iterations
- **FLUX Models:** Supports schnell (4-step) and dev models with fp8 quantization
- **MCP Integration:** Works seamlessly with Claude Desktop
- **GPU Acceleration:** Automatic NVIDIA GPU detection and CUDA 12.1
- **Background Removal:** Built-in RMBG-2.0 for transparent backgrounds
- **Image Upscaling:** 4x upscaling with UltraSharp/AnimeSharp models
- **Production Ready:** Health checks, auto-recovery, extensive logging
## Quick Start
## System Requirements

### Minimum Requirements

- **OS:** Linux, macOS, Windows 10+ (WSL2)
- **CPU:** 4 cores
- **RAM:** 16GB (20GB for WSL2)
- **Storage:** 50GB free space
- **Docker:** 20.10+
- **Docker Compose:** 2.0+ or 1.29+ (legacy)

### Recommended Requirements

- **CPU:** 8+ cores
- **RAM:** 32GB
- **GPU:** NVIDIA RTX 3090/4090 (12GB+ VRAM)
- **Storage:** 100GB free space
- **CUDA:** 12.1+ with NVIDIA Container Toolkit

### WSL2 Specific (Windows)
## Installation

### Prerequisites

1. Install Docker:

```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com | bash

# macOS
brew install docker docker-compose

# Windows - Install Docker Desktop
```

2. Install NVIDIA Container Toolkit (for GPU):

```bash
# Ubuntu/Debian
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
### Automated Installation

### Build Script Options
## MCP Tools

### Available Tools in Claude Desktop

1. **generate_image** - Generate images using the FLUX schnell fp8 model (optimized defaults).
2. **upscale_image** - Upscale images to 4x resolution using AI models.
3. **remove_background** - Remove the background using the RMBG-2.0 AI model.
4. **check_models** - Verify which models are available in ComfyUI.
5. **connect_comfyui / disconnect_comfyui** - Manage the ComfyUI connection (usually auto-connects).
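As a sketch of what a `generate_image` call might carry (the parameter names `prompt`, `width`, `height`, `steps`, and `cfg` are illustrative assumptions, not the server's confirmed schema):

```json
{
  "tool": "generate_image",
  "arguments": {
    "prompt": "a lighthouse at dusk, volumetric fog",
    "width": 1024,
    "height": 1024,
    "steps": 4,
    "cfg": 1.0
  }
}
```

In practice you would simply ask Claude Desktop to generate an image; it constructs the tool call for you.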
### MCP Configuration

Add the server to your Claude Desktop config (`%APPDATA%\Claude\claude_desktop_config.json` on Windows).

For macOS/Linux, edit the equivalent file (on macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`).
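A minimal entry might look like the following; the server name, container name, and launch command are assumptions about this setup, so adjust them to match your deployment:

```json
{
  "mcpServers": {
    "comfyui-flux": {
      "command": "docker",
      "args": ["exec", "-i", "mcp-comfyui-flux-mcp-server-1", "node", "server.js"]
    }
  }
}
```

Restart Claude Desktop after editing the file so the new server is picked up.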
## Docker Management

### Service Commands

### Container Access

### Health Monitoring
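Beyond `docker compose ps`, you can probe ComfyUI's HTTP API directly. A minimal sketch (8188 is ComfyUI's default port; the `/system_stats` endpoint is ComfyUI's, but the port mapping for this project is an assumption):

```bash
#!/bin/sh
# check_comfyui.sh - hypothetical helper: probe the ComfyUI HTTP API
# and report whether the service answers.
check_comfyui() {
  port="${1:-8188}"
  if curl -fsS "http://localhost:${port}/system_stats" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unreachable"
  fi
}
check_comfyui "$@"
```

Once the containers are up, the script prints `healthy`; anything else means the service is still starting or has crashed.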
## Advanced Features

### Performance Optimizations

The optimized build includes:

- **PyTorch 2.5.1:** Latest stable release with native RMSNorm support
- **BuildKit Cache Mounts:** Reduce I/O operations in WSL2
- **FP8 Quantization:** FLUX schnell fp8 uses ~10GB VRAM (vs 24GB for fp16)
- **Multi-stage Builds:** Separate build and runtime dependencies
- **Compiled Python:** Pre-compiled bytecode for faster startup
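The BuildKit cache-mount technique looks like this in a Dockerfile (an illustrative fragment, not this project's actual Dockerfile):

```dockerfile
# syntax=docker/dockerfile:1
# Persist pip's download cache across rebuilds so wheels are not
# re-downloaded every time the layer is invalidated.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu121
```

The cache directory lives outside the image layers, which is why it cuts rebuild time without growing the final image.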
### FLUX Model Configurations

#### Schnell (Default - Fast)

- Steps: 4 (optimized for schnell)
- CFG Scale: 1.0 (works best with low guidance)
- Scheduler: simple
- Generation Time: ~2-4 seconds per image
- VRAM Usage: ~10GB base + 1GB per batch

#### Dev (High Quality)

- Steps: 20-50
- CFG Scale: 7.0
- Scheduler: normal/karras
- Requires: Hugging Face authentication
- VRAM Usage: ~12-16GB
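Since the dev model is gated on Hugging Face, the access token typically goes into `.env` (the variable name `HF_TOKEN` is an assumption; check the project's example env file if one exists):

```
# .env - keep this file out of version control (it is gitignored)
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```

The token value shown is a placeholder; generate a real one from your Hugging Face account settings.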
### Batch Generation

Generate multiple images efficiently in a single request.
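A batched request might look like this (`batch_size` as a parameter name is an assumption; per the figures above, VRAM grows by roughly 1GB per image in the batch):

```json
{
  "tool": "generate_image",
  "arguments": {
    "prompt": "product photo of a ceramic mug, studio lighting",
    "batch_size": 4
  }
}
```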
### Custom Nodes

Included custom nodes:

- **ComfyUI-Manager:** Node management and updates
- **ComfyUI-KJNodes:** Advanced processing nodes
- **ComfyUI-RMBG:** Background removal (31 nodes)
## Troubleshooting

### Common Issues

#### GPU Not Detected
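A quick triage sequence, first checking the host driver and then Docker's GPU passthrough (the helper script itself is hypothetical, and the CUDA image tag is just one valid choice):

```bash
#!/bin/sh
# gpu_check.sh - hypothetical triage helper for "GPU not detected".
gpu_check() {
  if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "no NVIDIA driver on host"
    return 1
  fi
  # Driver is present; confirm Docker can pass the GPU through.
  docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
}
gpu_check || true
```

If the host driver is fine but the container step fails, reinstall the NVIDIA Container Toolkit and restart Docker as shown in the Installation section.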
#### Out of Memory
#### WSL2 Specific Issues
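Under WSL2, memory pressure is the usual culprit. The 20GB figure from the requirements above can be enforced in `%UserProfile%\.wslconfig` (the swap size below is an illustrative choice):

```ini
; %UserProfile%\.wslconfig
[wsl2]
memory=20GB
swap=8GB
```

Run `wsl --shutdown` from Windows afterwards so the new limits take effect.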
#### Port Conflicts
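If ComfyUI's default port 8188 is already taken on the host, remap it in a `docker-compose.override.yml` (the service name `comfyui` is an assumption about this project's compose file):

```yaml
services:
  comfyui:
    ports:
      - "8189:8188"   # host port 8189 -> container port 8188
```

You can confirm what holds the port with `ss -ltnp | grep 8188` (Linux) before deciding to remap.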
### Log Locations

- **Installation:** `install.log`
- **Docker builds:** `docker-compose logs`
- **ComfyUI:** inside the container at `/app/ComfyUI/user/comfyui.log`
- **MCP server:** `docker logs mcp-comfyui-flux-mcp-server-1`
## Architecture

### System Overview

### Key Improvements

#### Docker Optimization

- Multi-stage builds reduce image size by 25%
- BuildKit cache mounts speed up rebuilds
- No Python venv (Docker itself provides the isolation)

#### Model Configuration

- FLUX schnell fp8: 4.5GB (vs 11GB fp16)
- T5-XXL fp8: 4.9GB text encoder
- CLIP-L: 235MB text encoder
- VAE: 320MB decoder

#### Performance

- 4-step generation in 2-4 seconds
- Batch processing of up to 8 images
- Native RMSNorm in PyTorch 2.5.1
- High-VRAM mode for 24GB+ GPUs

### Directory Structure
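An illustrative layout, assembled only from the paths mentioned elsewhere in this README (it is an assumption, not a verified listing):

```
mcp-comfyui-flux/
├── docker-compose.yml
├── .env                  # Hugging Face token (gitignored)
├── install.log           # installation log
└── ComfyUI/
    └── user/
        └── comfyui.log   # ComfyUI runtime log (inside the container at /app/ComfyUI)
```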
## Security

- **Local Execution:** All processing happens locally
- **No External APIs:** Except model downloads from Hugging Face
- **Container Isolation:** Services run in isolated containers
- **Non-root Execution:** Containers run as a non-root user
- **Token Security:** Stored in `.env` (gitignored)
## Additional Documentation

- `CLAUDE.md` - Claude Code development guide
- `ARCHITECTURE.md` - Technical architecture details
- `API.md` - Complete MCP API reference
- `TROUBLESHOOTING.md` - Detailed troubleshooting guide
## Contributing

Contributions welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## License

MIT License - see the LICENSE file for details.
## Acknowledgments

- ComfyUI - the workflow engine
- Black Forest Labs - creators of the FLUX models
- Anthropic - the MCP protocol and Claude
- NVIDIA - CUDA and GPU support

Made with ❤️ for efficient AI image generation