# MCP ComfyUI Flux - Docker Setup Guide
A fully containerized MCP server for generating images with Flux models via ComfyUI. This optimized setup provides automatic environment configuration, GPU acceleration, model management, and seamless integration with MCP clients.
## Quick Start
### One-Command Installation
```bash
# Clone the repository
git clone <your-repo-url> mcp-comfyui-flux
cd mcp-comfyui-flux
# Run the automated installer
./install.sh
```
The installer will:
- Check prerequisites (Docker, Docker Compose)
- Configure environment variables
- Download and configure Flux models
- Build optimized Docker containers with PyTorch 2.5.1
- Start all services with BuildKit optimizations
- Provide MCP client configuration
## Prerequisites
### Required
- Docker (20.10+)
- Docker Compose (1.29+ legacy or 2.0+ plugin)
- 20GB+ RAM (WSL2) or 16GB+ (native Linux)
- 50GB+ free disk space
### Optional (for GPU acceleration)
- NVIDIA GPU with 12GB+ VRAM (24GB recommended)
- NVIDIA Driver (515+)
- NVIDIA Container Toolkit
## Architecture
```
┌─────────────────────────────────────────────┐
│            MCP Client (Claude)              │
└──────────────┬──────────────────────────────┘
               │ stdio/docker exec
┌──────────────▼──────────────────────────────┐
│            MCP Server Container             │
│     (Node.js 20, handles MCP protocol)      │
└──────────────┬──────────────────────────────┘
               │ WebSocket/HTTP
┌──────────────▼──────────────────────────────┐
│        ComfyUI Container (Optimized)        │
│   (Python 3.11, PyTorch 2.5.1, CUDA 12.1)   │
│      (FLUX schnell fp8, KJNodes, RMBG)      │
└─────────────────────────────────────────────┘
```
## Optimized Build Features
The current setup uses an optimized multi-stage Docker build that provides:
- **~25% smaller image size** (10.9GB vs 14.6GB)
- **PyTorch 2.5.1** with native RMSNorm support
- **BuildKit cache mounts** for faster rebuilds
- **FP8 quantized models** for reduced VRAM usage
- **Pre-compiled Python bytecode** for faster startup
- **All custom nodes included** (ComfyUI-Manager, KJNodes, RMBG)
## Manual Setup
### 1. Environment Configuration
```bash
# Copy environment template
cp .env.example .env
# Edit .env with your settings
nano .env
```
Key settings:
- `HF_TOKEN`: Your Hugging Face token (optional, for Flux.1-dev)
- `CUDA_VISIBLE_DEVICES`: GPU configuration (default: all)
- `COMFYUI_HOST`: Docker service name (keep as 'comfyui')
- `MODEL_PRECISION`: fp16 or fp8 (fp8 recommended for FLUX schnell)
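For reference, a filled-in `.env` might look like the following sketch (values are illustrative defaults, not requirements):
```bash
# Illustrative .env; adjust for your hardware
# Leave HF_TOKEN empty unless downloading the gated Flux.1-dev model
HF_TOKEN=
# Set to -1 for CPU-only mode
CUDA_VISIBLE_DEVICES=all
# Docker service name; keep as 'comfyui'
COMFYUI_HOST=comfyui
COMFYUI_PORT=8188
# fp8 recommended for FLUX schnell
MODEL_PRECISION=fp8
```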
### 2. Model Setup
```bash
# Interactive model download
./scripts/download-models.sh
```
Recommended models:
- **Flux.1-schnell fp8**: Fast 4-step generation, 11GB (recommended)
- **Flux.1-schnell fp16**: Standard quality, 23GB
- **Flux.1-dev**: Best quality, requires HF auth, 23GB
### 3. Build and Start Services
```bash
# Build optimized Docker images with BuildKit
./build.sh
# Or build and start immediately
./build.sh --start
# Or use docker-compose directly
docker-compose -p mcp-comfyui-flux build
docker-compose -p mcp-comfyui-flux up -d
```
### 4. Verify Installation
```bash
# Check container status
docker-compose -p mcp-comfyui-flux ps
# View logs
docker-compose -p mcp-comfyui-flux logs -f
# Test ComfyUI
curl http://localhost:8188/system_stats
# Check GPU (if available)
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
```
## MCP Client Configuration
### For Claude Desktop
Add to your Claude Desktop configuration file:
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Linux**: `~/.config/claude/claude_desktop_config.json`
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "docker",
"args": [
"exec", "-i", "mcp-comfyui-flux-mcp-server-1",
"node", "/app/src/index.js"
]
}
}
}
```
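The container name in `args` must match the running container; Compose v1 names containers with underscores (`mcp-comfyui-flux_mcp-server_1`) while v2 uses hyphens. Verify yours with:
```bash
docker ps --format '{{.Names}}' | grep mcp-server
```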
**For WSL2 users**, use this configuration instead:
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "wsl.exe",
"args": [
"bash", "-c",
"cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
]
}
}
}
```
### For Other MCP Clients
Direct connection when containers are running:
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "node",
"args": ["/path/to/mcp-comfyui-flux/src/index.js"],
"env": {
"COMFYUI_HOST": "localhost",
"COMFYUI_PORT": "8188"
}
}
}
}
```
## Claude Code Integration
Both containers include Claude Code pre-installed for AI-assisted development.
### Setting Up Claude Code
1. **Add your API key to `.env`**:
```bash
# Get key from: https://console.anthropic.com
ANTHROPIC_API_KEY=your_api_key_here
```
2. **Run the setup script**:
```bash
./scripts/setup-claude-code.sh
```
3. **Use Claude Code in containers**:
```bash
# In MCP server container
docker exec -it mcp-comfyui-flux-mcp-server-1 claude
# In ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 claude
```
## Available MCP Tools
Once connected, you'll have access to these tools:
1. **generate_image** - Create images with Flux models
2. **upscale_image** - Upscale images to 4K using AI models
3. **remove_background** - Remove backgrounds with RMBG-2.0
4. **connect_comfyui** - Manually connect to ComfyUI (auto-connects on start)
5. **check_models** - Verify installed models
6. **disconnect_comfyui** - Close connection
### Example Usage in Claude
```
"Generate an image of a cyberpunk city at night"
"Upscale that image to 4K resolution"
"Remove the background from the image"
```
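Each of these requests ultimately becomes an MCP `tools/call` over stdio. As a quick smoke test you can speak the protocol to the server directly; this is a sketch using the standard MCP handshake (note that some servers exit as soon as stdin closes, so responses may be cut short):
```bash
# Initialize, complete the handshake, then list the server's tools
docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js <<'EOF'
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list"}
EOF
```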
## Docker Commands
### Service Management
```bash
# Start services
docker-compose -p mcp-comfyui-flux up -d
# Stop services
docker-compose -p mcp-comfyui-flux down
# Restart services
docker-compose -p mcp-comfyui-flux restart
# View logs
docker-compose -p mcp-comfyui-flux logs -f [service-name]
# Execute commands in container
docker exec -it mcp-comfyui-flux-comfyui-1 bash
docker exec -it mcp-comfyui-flux-mcp-server-1 sh
```
### Building
```bash
# Build with optimizations
./build.sh
# Full rebuild without cache
./build.sh --no-cache
# Build and start
./build.sh --start
# Build with cleanup
./build.sh --cleanup
```
### Troubleshooting
```bash
# Check GPU availability
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
# Test ComfyUI connection
curl http://localhost:8188/system_stats
# Check container health
docker-compose -p mcp-comfyui-flux ps
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
# View PyTorch version
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"
# Check custom nodes
docker exec mcp-comfyui-flux-comfyui-1 ls /app/ComfyUI/custom_nodes/
# Rebuild containers
docker-compose -p mcp-comfyui-flux build --no-cache
docker-compose -p mcp-comfyui-flux up -d --force-recreate
```
## Directory Structure
```
mcp-comfyui-flux/
├── models/                 # Flux models (persisted)
│   ├── unet/               # Main Flux models (fp8/fp16)
│   ├── clip/               # Text encoders (CLIP-L, T5-XXL)
│   ├── vae/                # VAE models
│   ├── upscale_models/     # 4x upscaling models
│   └── rmbg/               # Background removal models
├── output/                 # Generated images
├── input/                  # Input images for processing
├── src/                    # MCP server source
├── scripts/                # Setup and utility scripts
├── docker-compose.yml      # Service orchestration
├── Dockerfile.comfyui      # Optimized ComfyUI container
├── Dockerfile.mcp          # MCP server container
├── build.sh                # Build script with optimizations
├── install.sh              # Automated installer
└── .env                    # Configuration
```
## Updating
### Update to Latest Version
```bash
# Pull latest code
git pull
# Rebuild with optimizations
./build.sh
# Restart services
docker-compose -p mcp-comfyui-flux up -d
```
### Update ComfyUI Only
```bash
# Pull latest ComfyUI changes
docker exec mcp-comfyui-flux-comfyui-1 bash -c "cd /app/ComfyUI && git pull"
# Restart container
docker-compose -p mcp-comfyui-flux restart comfyui
```
### Update Custom Nodes
```bash
# Update ComfyUI-Manager
docker exec mcp-comfyui-flux-comfyui-1 bash -c "cd /app/ComfyUI/custom_nodes/ComfyUI-Manager && git pull"
# Restart to apply changes
docker-compose -p mcp-comfyui-flux restart comfyui
```
## Advanced Configuration
### GPU Memory Optimization
Edit `.env`:
```bash
# For limited VRAM (12GB GPUs)
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
MODEL_PRECISION=fp8 # Use fp8 models
# For high VRAM (24GB+ GPUs)
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:1024
MODEL_PRECISION=fp16
```
### WSL2 Optimization
Create/edit `.wslconfig` in Windows user directory:
```ini
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
```
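Changes to `.wslconfig` take effect only after WSL2 restarts:
```bash
# Run from PowerShell or cmd on Windows
wsl --shutdown
```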
### CPU-Only Mode
```bash
# Set in .env
CUDA_VISIBLE_DEVICES=-1
# Or use installer
./install.sh --cpu-only
```
### Custom Models
Place additional models in:
- Checkpoints: `./models/checkpoints/`
- LoRA: `./models/loras/`
- VAE: `./models/vae/`
- Embeddings: `./models/embeddings/`
- ControlNet: `./models/controlnet/`
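For example, to add a LoRA (hypothetical filename) and have ComfyUI pick it up:
```bash
# Copy the model into the mounted directory, then restart ComfyUI
cp ~/Downloads/my-style-lora.safetensors ./models/loras/
docker-compose -p mcp-comfyui-flux restart comfyui
```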
## Common Issues
### WSL2 Docker Crashes
```bash
# If Docker/WSL2 crashes during build
# Check .backup/ for recovery steps
cat .backup/WSL_RECOVERY_STEPS.md
# Restart WSL2
wsl --shutdown
wsl
```
### "Cannot connect to Docker daemon"
```bash
# Start Docker service
sudo systemctl start docker
# Or on WSL2/Windows
# Start Docker Desktop
```
### "GPU not available"
```bash
# Install NVIDIA Container Toolkit
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
### "Out of memory"
- Use fp8 models instead of fp16
- Reduce image resolution (768x768 instead of 1024x1024)
- Reduce batch size to 1
- Enable attention slicing in ComfyUI settings
### "Port 8188 already in use"
```bash
# Find process using port
sudo lsof -i :8188
# Or change port in .env
COMFYUI_PORT=8189
```
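After changing `COMFYUI_PORT`, recreate the service so the new port mapping takes effect (assuming your compose file reads the variable):
```bash
docker-compose -p mcp-comfyui-flux up -d --force-recreate comfyui
```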
### Health Check Errors
If you see "Host 'localhost:8188' cannot contain ':'" errors:
- These are harmless and don't affect functionality
- The service is still accessible and working
## Performance Metrics
### FLUX Schnell FP8 (Optimized)
- **Generation Time**: 2-4 seconds per image
- **VRAM Usage**: ~10GB
- **Steps**: 4 (optimized for schnell)
- **Image Quality**: 95% of fp16 at 50% memory
### System Requirements
#### Minimum (CPU-only)
- CPU: 4 cores
- RAM: 16GB
- Storage: 30GB
#### Recommended (with GPU)
- CPU: 8 cores
- RAM: 20GB (WSL2) / 16GB (Linux)
- GPU: NVIDIA RTX 3060 12GB or better
- VRAM: 12GB minimum, 24GB optimal
- Storage: 100GB (for multiple models)
#### Optimal
- CPU: 12+ cores
- RAM: 32GB
- GPU: NVIDIA RTX 4090 24GB
- Storage: 200GB (all models + workspace)
## Security Notes
- Runs entirely locally - no external API calls except model downloads
- Hugging Face token only used for gated model downloads
- All generated images stay on your local machine
- ComfyUI binds to localhost only by default
- Containers run as non-root user (UID 1000)
- BuildKit cache isolated per build
## Monitoring
```bash
# Resource usage
docker stats
# Disk usage
docker system df
# Clean up unused resources
docker system prune -a
# Monitor GPU usage
watch -n 1 nvidia-smi
```
## License
MIT
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Test with `./build.sh --no-cache`
4. Submit a PR with clear description
## Support
For issues or questions:
1. Check the troubleshooting section above
2. Review logs: `docker-compose -p mcp-comfyui-flux logs`
3. Check container health: `docker-compose -p mcp-comfyui-flux ps`
4. Open an issue on GitHub with:
- System info (OS, Docker version, GPU)
- Error logs
- Steps to reproduce
## Tips for Best Results
1. **Use FP8 models** for faster generation and lower VRAM usage
2. **Keep containers running** for instant generation (startup takes 30-60s)
3. **Use BuildKit** (`DOCKER_BUILDKIT=1`) for faster builds (see the example after this list)
4. **Monitor VRAM** with `nvidia-smi` during generation
5. **Use seeds** for reproducible results
6. **Batch generation** is more efficient than sequential (use batch_size parameter)
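For tip 3, Compose v2 enables BuildKit by default; with legacy docker-compose v1 you can opt in per invocation (a sketch):
```bash
# Enable BuildKit for a legacy docker-compose v1 build
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose -p mcp-comfyui-flux build
```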
---
Built with ❤️ for the AI art community