# MCP ComfyUI Flux - Optimized Docker Solution
[MIT License](https://opensource.org/licenses/MIT) · [Docker](https://www.docker.com/) · [PyTorch](https://pytorch.org/) · [CUDA](https://developer.nvidia.com/cuda-toolkit)
A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.
## 🚀 Features
- **🚀 Optimized Performance**: PyTorch 2.5.1 with native RMSNorm support
- **📦 Efficient Images**: 25% smaller Docker images (10.9GB vs 14.6GB)
- **⚡ Fast Rebuilds**: BuildKit cache mounts for rapid iterations
- **🎨 FLUX Models**: Supports schnell (4-step) and dev models with fp8 quantization
- **🤖 MCP Integration**: Works seamlessly with Claude Desktop
- **💪 GPU Acceleration**: Automatic NVIDIA GPU detection and CUDA 12.1
- **🎭 Background Removal**: Built-in RMBG-2.0 for transparent backgrounds
- **🔍 Image Upscaling**: 4x upscaling with UltraSharp/AnimeSharp models
- **🛡️ Production Ready**: Health checks, auto-recovery, extensive logging
## 📋 Table of Contents
- [Quick Start](#-quick-start)
- [System Requirements](#-system-requirements)
- [Installation](#-installation)
- [MCP Tools](#-mcp-tools)
- [Docker Management](#-docker-management)
- [Advanced Features](#-advanced-features)
- [Troubleshooting](#-troubleshooting)
- [Architecture](#️-architecture)
## 🚀 Quick Start
```bash
# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux
# Run the automated installer
./install.sh
# Or build manually with the optimized build script
./build.sh --start
# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services
```
## 💻 System Requirements
### Minimum Requirements
- **OS**: Linux, macOS, Windows 10+ (WSL2)
- **CPU**: 4 cores
- **RAM**: 16GB (20GB for WSL2)
- **Storage**: 50GB free space
- **Docker**: 20.10+
- **Docker Compose**: 2.0+ or 1.29+ (legacy)
### Recommended Requirements
- **CPU**: 8+ cores
- **RAM**: 32GB
- **GPU**: NVIDIA RTX 3090/4090 (12GB+ VRAM)
- **Storage**: 100GB free space
- **CUDA**: 12.1+ with NVIDIA Container Toolkit
### WSL2 Specific (Windows)
```powershell
# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
```
## 📦 Installation
### Prerequisites
1. **Install Docker**:
```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com | bash
# macOS (CLI only; Docker Desktop or Colima provides the engine)
brew install docker docker-compose
# Windows - Install Docker Desktop
```
2. **Install NVIDIA Container Toolkit** (for GPU):
```bash
# Ubuntu/Debian
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
### Automated Installation
```bash
# Standard installation
./install.sh
# Non-interactive installation
./install.sh --yes
# CPU-only mode
./install.sh --cpu-only
# With specific models
./install.sh --models minimal # or all/none/auto
# Debug mode
./install.sh --debug
```
### Build Script Options
```bash
# Build only
./build.sh
# Build and start
./build.sh --start
# Build with cleanup
./build.sh --start --cleanup
# Rebuild without cache
./build.sh --no-cache
```
## 🎨 MCP Tools
### Available Tools in Claude Desktop
#### 1. **generate_image**
Generate images using FLUX schnell fp8 model (optimized defaults).
```javascript
// Parameters
{
  "prompt": "a majestic mountain landscape, golden hour",  // Required
  "negative_prompt": "blurry, low quality",                // Optional
  "width": 1024,      // Default: 1024
  "height": 1024,     // Default: 1024
  "steps": 4,         // Default: 4 (schnell optimized)
  "cfg_scale": 1.0,   // Default: 1.0 (schnell optimized)
  "seed": -1,         // Default: -1 (random)
  "batch_size": 1     // Default: 1 (max: 8)
}

// Example usage
generate_image({
  prompt: "cyberpunk city at night, neon lights, detailed",
  steps: 4,
  seed: 42
})
```
#### 2. **upscale_image**
Upscale images to 4x resolution using AI models.
```javascript
// Parameters
{
  "image_path": "flux_output_00001_.png",  // Required
  "model": "ultrasharp",       // Options: "ultrasharp", "animesharp"
  "scale_factor": 1.0,         // Additional scaling (0.5-2.0)
  "content_type": "general"    // Auto-select model based on content
}

// Example usage
upscale_image({
  image_path: "output/my_image.png",
  model: "ultrasharp"
})
```
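The final resolution combines the model's fixed 4x factor with the optional `scale_factor`. A quick sketch of the arithmetic (`upscaledSize` is a hypothetical helper for planning output sizes, not part of the MCP API):

```javascript
// Final dimensions after upscaling: the model always upscales 4x,
// then scale_factor (0.5-2.0) is applied on top.
// Hypothetical helper; not part of the MCP API.
function upscaledSize(width, height, scaleFactor = 1.0) {
  const factor = 4 * scaleFactor;
  return {
    width: Math.round(width * factor),
    height: Math.round(height * factor),
  };
}

// A 1024x1024 generation with scale_factor 0.5 yields 2048x2048.
console.log(upscaledSize(1024, 1024, 0.5)); // { width: 2048, height: 2048 }
```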
#### 3. **remove_background**
Remove background using RMBG-2.0 AI model.
```javascript
// Parameters
{
  "image_path": "output/image.png",  // Required
  "alpha_matting": true,    // Better edge quality (default: true)
  "output_format": "png"    // Options: "png", "webp"
}

// Example usage
remove_background({
  image_path: "flux_output_00001_.png"
})
```
#### 4. **check_models**
Verify available models in ComfyUI.
```javascript
// No parameters required
check_models()
```
#### 5. **connect_comfyui** / **disconnect_comfyui**
Manage ComfyUI connection (usually auto-connects).
### MCP Configuration
Add to Claude Desktop config (`%APPDATA%\Claude\claude_desktop_config.json` on Windows):
```json
{
  "mcpServers": {
    "comfyui-flux": {
      "command": "wsl.exe",
      "args": [
        "bash", "-c",
        "cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
      ]
    }
  }
}
```
For macOS/Linux:
```json
{
  "mcpServers": {
    "comfyui-flux": {
      "command": "docker",
      "args": [
        "exec", "-i", "mcp-comfyui-flux-mcp-server-1",
        "node", "/app/src/index.js"
      ]
    }
  }
}
```
## 🐳 Docker Management
### Service Commands
```bash
# Start services
docker-compose -p mcp-comfyui-flux up -d
# Stop services
docker-compose -p mcp-comfyui-flux down
# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui
# Check status
docker-compose -p mcp-comfyui-flux ps
# Restart services
docker-compose -p mcp-comfyui-flux restart
```
### Container Access
```bash
# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash
# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh
# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"
```
### Health Monitoring
```bash
# Full health check
./scripts/health-check.sh
# Check ComfyUI API
curl http://localhost:8188/system_stats
# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
```
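To consume `/system_stats` programmatically, a small Node sketch like the following can summarize VRAM usage. The response shape (a `devices` array with `vram_total`/`vram_free` byte counts) is an assumption about ComfyUI's API and may differ across versions:

```javascript
// Summarize GPU memory from a ComfyUI /system_stats payload.
// Field names (devices, vram_total, vram_free) are assumptions; verify
// against your ComfyUI version's actual response.
function summarizeVram(stats) {
  return (stats.devices || []).map((d) => ({
    name: d.name,
    usedGb: ((d.vram_total - d.vram_free) / 1024 ** 3).toFixed(1),
    totalGb: (d.vram_total / 1024 ** 3).toFixed(1),
  }));
}

// Example with a mocked payload:
const sample = {
  devices: [
    { name: "cuda:0", vram_total: 24 * 1024 ** 3, vram_free: 14 * 1024 ** 3 },
  ],
};
console.log(summarizeVram(sample)); // usedGb '10.0' of totalGb '24.0'
```

Fetching the live payload is then a plain `fetch('http://localhost:8188/system_stats')` in Node 18+.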
## 🚀 Advanced Features
### Performance Optimizations
The optimized build includes:
- **PyTorch 2.5.1**: Latest stable with native RMSNorm support
- **BuildKit Cache Mounts**: Reduces I/O operations in WSL2
- **FP8 Quantization**: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)
- **Multi-stage Builds**: Separates build and runtime dependencies
- **Compiled Python**: Pre-compiled bytecode for faster startup
### FLUX Model Configurations
#### Schnell (Default - Fast)
- **Steps**: 4 (optimized for schnell)
- **CFG Scale**: 1.0 (works best with low guidance)
- **Scheduler**: simple
- **Generation Time**: ~2-4 seconds per image
- **VRAM Usage**: ~10GB base + 1GB per batch
#### Dev (High Quality)
- **Steps**: 20-50
- **CFG Scale**: 7.0
- **Scheduler**: normal/karras
- **Requires**: Hugging Face authentication
- **VRAM Usage**: ~12-16GB
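The two configurations above can be captured as parameter presets for `generate_image` (a convenience sketch; `fluxPresets` is a hypothetical object, and the dev values are picks from the ranges listed):

```javascript
// Parameter presets mirroring the schnell/dev configurations above.
// Hypothetical convenience object; dev values are taken from the
// documented ranges (steps 20-50, scheduler normal/karras).
const fluxPresets = {
  schnell: { steps: 4, cfg_scale: 1.0, scheduler: "simple" },
  dev: { steps: 20, cfg_scale: 7.0, scheduler: "karras" },
};

// Merge a preset into a generate_image call:
const request = { prompt: "fantasy landscape", ...fluxPresets.schnell };
console.log(request); // { prompt: 'fantasy landscape', steps: 4, cfg_scale: 1, scheduler: 'simple' }
```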
### Batch Generation
Generate multiple images efficiently:
```javascript
generate_image({
  prompt: "fantasy landscape",
  batch_size: 4  // Generates 4 variations in parallel
})
```
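Using the schnell figures above (~10GB base plus ~1GB per batch image), a rough budget check before picking `batch_size` can be sketched as follows. Linear scaling is an assumption, not a measured guarantee, and actual usage varies with resolution:

```javascript
// Rough schnell VRAM estimate: ~10GB base + ~1GB per image in the batch.
// Linear scaling is an assumption; treat results as a sanity check only.
function estimateVramGb(batchSize) {
  return 10 + 1 * batchSize;
}

// Largest batch_size (clamped to the tool's 1-8 range) that fits
// within a given VRAM budget under the estimate above.
function maxBatchFor(vramGb) {
  return Math.max(1, Math.min(8, Math.floor(vramGb - 10)));
}

console.log(estimateVramGb(4)); // 14
console.log(maxBatchFor(24)); // 8
```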
### Custom Nodes
Included custom nodes:
- **ComfyUI-Manager**: Node management and updates
- **ComfyUI-KJNodes**: Advanced processing nodes
- **ComfyUI-RMBG**: Background removal (31 nodes)
## 🔧 Troubleshooting
### Common Issues
#### GPU Not Detected
```bash
# Verify NVIDIA driver
nvidia-smi
# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
#### Out of Memory
```bash
# Reduce batch size in generate_image calls (batch_size: 1)

# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1

# Adjust PyTorch memory (in .env)
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
```
#### WSL2 Specific Issues
```bash
# If Docker/WSL2 crashes with I/O errors:
# - Avoid recursive chown on large directories
# - Use the optimized Dockerfile, which handles this

# Increase WSL2 memory in .wslconfig (Windows user directory):
# memory=20GB

# Reset WSL2 if needed (from PowerShell)
wsl --shutdown
```
#### Port Conflicts
```bash
# Check what's using port 8188
lsof -i :8188 # macOS/Linux
netstat -ano | findstr :8188 # Windows
# Use different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d
```
### Log Locations
- Installation: `install.log`
- Docker builds: `docker-compose logs`
- ComfyUI: Inside container at `/app/ComfyUI/user/comfyui.log`
- MCP Server: `docker logs mcp-comfyui-flux-mcp-server-1`
## 🏗️ Architecture
### System Overview
```
┌─────────────────────────────────────────┐
│       Claude Desktop (MCP Client)       │
└────────────┬────────────────────────────┘
             │ docker exec stdio
┌────────────▼────────────────────────────┐
│          MCP Server Container           │
│  • Node.js 20 Alpine (581MB)            │
│  • MCP Protocol Implementation          │
│  • Auto-connects to ComfyUI             │
└────────────┬────────────────────────────┘
             │ WebSocket (port 8188)
┌────────────▼────────────────────────────┐
│           ComfyUI Container             │
│  • Ubuntu 22.04 + CUDA 12.1             │
│  • Python 3.11 + PyTorch 2.5.1          │
│  • FLUX schnell fp8 (4.5GB)             │
│  • Custom nodes (KJNodes, RMBG)         │
│  • Optimized image size: 10.9GB         │
└─────────────────────────────────────────┘
```
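The WebSocket link between the MCP server and ComfyUI carries JSON status messages; a minimal sketch of filtering them for finished images is shown below. The `executed` message shape is an assumption about ComfyUI's protocol, and `extractImageFilenames` is a hypothetical helper:

```javascript
// Extract output image filenames from a ComfyUI WebSocket message.
// The "executed" message shape is an assumption about ComfyUI's
// protocol; verify against your version before relying on it.
function extractImageFilenames(message) {
  if (message.type !== "executed") return [];
  const images = message.data?.output?.images || [];
  return images.map((img) => img.filename);
}

// Example with a mocked message:
const msg = {
  type: "executed",
  data: { output: { images: [{ filename: "flux_output_00001_.png", type: "output" }] } },
};
console.log(extractImageFilenames(msg)); // [ 'flux_output_00001_.png' ]
console.log(extractImageFilenames({ type: "progress" })); // []
```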
### Key Improvements
1. **Docker Optimization**
- Multi-stage builds reduce image size by 25%
- BuildKit cache mounts speed up rebuilds
- No Python venv (Docker IS the isolation)
2. **Model Configuration**
- FLUX schnell fp8: 4.5GB (vs 11GB fp16)
- T5-XXL fp8: 4.9GB text encoder
- CLIP-L: 235MB text encoder
- VAE: 320MB decoder
3. **Performance**
- 4-step generation in 2-4 seconds
- Batch processing up to 8 images
- Native RMSNorm in PyTorch 2.5.1
- High VRAM mode for 24GB+ GPUs
### Directory Structure
```
mcp-comfyui-flux/
├── src/                   # MCP server source
│   ├── index.js           # MCP protocol handler
│   ├── comfyui-client.js  # WebSocket client
│   └── workflows/         # ComfyUI workflows
├── models/                # Model storage
│   ├── unet/              # FLUX models (fp8)
│   ├── clip/              # Text encoders
│   ├── vae/               # VAE models
│   └── upscale_models/    # Upscaling models
├── output/                # Generated images
├── scripts/               # Utility scripts
├── docker-compose.yml     # Service orchestration
├── Dockerfile.comfyui     # Optimized ComfyUI
├── Dockerfile.mcp         # MCP server
├── requirements.txt       # Python dependencies
├── build.sh               # Build script
└── install.sh             # Automated installer
```
## 🔒 Security
- **Local Execution**: All processing happens locally
- **No External APIs**: Except model downloads from Hugging Face
- **Container Isolation**: Services run in isolated containers
- **Non-root Execution**: Containers run as non-root user
- **Token Security**: Stored in `.env` (gitignored)
## 📚 Additional Documentation
- [CLAUDE.md](CLAUDE.md) - Claude Code development guide
- [ARCHITECTURE.md](ARCHITECTURE.md) - Technical architecture details
- [API.md](API.md) - Complete MCP API reference
- [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Detailed troubleshooting
## 🤝 Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI) - The workflow engine
- [Black Forest Labs](https://blackforestlabs.ai/) - FLUX model creators
- [Anthropic](https://www.anthropic.com/) - MCP protocol and Claude
- [NVIDIA](https://www.nvidia.com/) - CUDA and GPU support
---
Made with ❤️ for efficient AI image generation