# Getting Started with MCP ComfyUI Flux - Complete Testing Guide
## 🚀 Quick Start (5 Minutes)
### Step 1: Prerequisites Check
```bash
# Check Docker
docker --version # Need 20.10+
# Check Docker Compose (plugin or legacy)
docker compose version # Plugin (v2.0+)
# OR
docker-compose --version # Legacy (v1.29+)
# Check GPU (optional)
nvidia-smi # For GPU acceleration
# Check disk space
df -h # Need 50GB+ free (optimized build uses ~11GB)
```
### Step 2: Clone and Install
```bash
# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux
# Run the one-command installer (interactive)
./install.sh
# Or run with options for automation
./install.sh --yes --port 8189 # Non-interactive, custom port
./install.sh --cpu-only --models=minimal # CPU mode, minimal models
./install.sh --debug # Verbose logging for troubleshooting
```
#### Installation Options
| Option | Description |
|--------|-------------|
| `--yes`, `--non-interactive` | Auto-confirm all prompts (CI-friendly) |
| `--cpu-only` | Force CPU-only mode (skip GPU checks) |
| `--port N` | Use custom port (default: 8188) |
| `--models {auto\|minimal\|all\|none}` | Control model downloads |
| `--project-name NAME` | Custom Docker project name |
| `--debug` | Enable verbose logging |
The installer will automatically:
1. ✅ Detect Docker Compose version (plugin or legacy)
2. ✅ Check system requirements (with OS-specific guidance)
3. ✅ Configure environment (`.env` file; a sample is sketched below)
4. ✅ Detect GPU and configure accordingly
5. ✅ Download models based on selection
6. ✅ Build optimized Docker containers with PyTorch 2.5.1
7. ✅ Start all services with health checks
8. ✅ Set up Claude Code (optional)
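For reference, the generated `.env` might look roughly like the sketch below. This is a hypothetical example: the variable names come from the options documented elsewhere in this guide (`PORT`, `PROJECT_NAME`, `MODELS_MODE`, `CUDA_VISIBLE_DEVICES`), and your actual file may contain more or different keys.
```bash
# Hypothetical .env sketch; check the file install.sh actually writes
PORT=8188                       # host port for the ComfyUI web interface
PROJECT_NAME=mcp-comfyui-flux   # Docker Compose project name (-p)
MODELS_MODE=minimal             # auto | minimal | all | none
# CUDA_VISIBLE_DEVICES=-1       # uncomment to force CPU-only mode
```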
### Step 3: Verify Installation
```bash
# Run health check
./scripts/health-check.sh
# You should see:
# ✓ Docker installed
# ✓ ComfyUI container running (PyTorch 2.5.1)
# ✓ MCP server container running
# ✓ Network connectivity
# ✓ Models found
# ✓ Port 8188 accessible (or your custom port)
```
## 🎯 What's New in the Optimized Version
- **PyTorch 2.5.1**: Latest stable version with native RMSNorm support
- **~25% smaller image**: Reduced from 14.6GB to 10.9GB
- **BuildKit optimizations**: Faster rebuilds with cache mounts
- **All custom nodes included**: KJNodes, RMBG, ComfyUI-Manager
- **Fixed compatibility issues**: aiohttp/yarl versions aligned
- **Improved build script**: `build.sh` with progress tracking
## 🧪 Testing the System
### Test 1: Basic Container Test
```bash
# Check if services are running (works with both compose versions)
docker compose -p mcp-comfyui-flux ps # If using plugin
# OR
docker-compose -p mcp-comfyui-flux ps # If using legacy
# Expected output:
# NAME IMAGE STATUS PORTS
# mcp-comfyui-flux-comfyui-1 mcp-comfyui-flux-comfyui Up 0.0.0.0:8188->8188/tcp
# mcp-comfyui-flux-mcp-server-1 mcp-comfyui-flux-mcp-server Up
```
### Test 2: ComfyUI Web Interface
```bash
# If using custom port, replace 8188 with your port
PORT=${PORT:-8188}
# Open in browser
open http://localhost:${PORT} # macOS
xdg-open http://localhost:${PORT} # Linux
# Or test via curl
curl http://localhost:${PORT}/system_stats
# Expected: JSON with system statistics
```
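If `jq` is installed, you can also inspect specific fields from that response. The exact schema depends on your ComfyUI version, so treat the field names below as assumptions:
```bash
# Pretty-print the full stats payload
curl -s http://localhost:${PORT}/system_stats | jq .
# List reported devices (assumes the response includes a "devices" array)
curl -s http://localhost:${PORT}/system_stats | jq '.devices[]? | .name'
```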
### Test 3: Test Image Generation Directly
```bash
# Run the example script (using project name for consistency)
docker compose -p mcp-comfyui-flux exec mcp-server node /app/examples/example.js
# OR for legacy compose
docker-compose -p mcp-comfyui-flux exec mcp-server node /app/examples/example.js
# This will:
# 1. Connect to ComfyUI
# 2. Generate test images using FLUX schnell
# 3. Save to output/ directory
```
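If the `output/` directory is mounted to the host (as the Verify Results section suggests), you can confirm the new files right away:
```bash
# Newest generated files first; if empty, use the docker cp step under "Verify Results"
ls -lt output/ | head -n 5
```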
### Test 4: Test MCP Tools
```bash
# Test from inside container (using compose command)
docker compose -p mcp-comfyui-flux exec mcp-server node -e "
const { ComfyUIClient } = require('./src/comfyui-client.js');
const { getFluxDiffusersWorkflow } = require('./src/flux-workflow.js');
async function test() {
const client = new ComfyUIClient('comfyui:8188');
// Connect
await client.connect();
console.log('✓ Connected to ComfyUI');
// Generate image with optimized FLUX schnell
const workflow = getFluxDiffusersWorkflow({
prompt: 'A beautiful sunset over mountains',
width: 1024,
height: 1024,
steps: 4 // FLUX schnell optimized for 4 steps
});
console.log('Generating test image...');
const images = await client.generateImage(workflow, '/app/output');
console.log('✓ Image generated:', images[0]?.filename);
client.disconnect();
}
test().catch(console.error);
"
```
## 🤖 Testing with Claude Desktop
### Step 1: Configure Claude Desktop
Add to your Claude Desktop config file:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **Linux**: `~/.config/claude/claude_desktop_config.json`
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "docker",
"args": [
"compose", "-p", "mcp-comfyui-flux",
"exec", "-i", "mcp-server",
"node", "/app/src/index.js"
]
}
}
}
```
**Note**: If you installed with a custom project name, replace `mcp-comfyui-flux` in the `args` above with your project name.
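Before restarting Claude Desktop, you can run the same command yourself to confirm it starts cleanly. This simply replays the `command`/`args` from the JSON above; the server communicates over stdio, so it will sit waiting for input (press Ctrl+C to exit):
```bash
# Should start without errors and then wait for MCP messages on stdin
docker compose -p mcp-comfyui-flux exec -i mcp-server node /app/src/index.js
```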
### Step 2: Restart Claude Desktop
- Quit Claude Desktop completely
- Start Claude Desktop again
- The MCP server should connect automatically
### Step 3: Test in Claude
Type in Claude:
> Can you connect to ComfyUI and check what models are available?
Claude will use the MCP tools to:
1. Connect to ComfyUI
2. Check available models
3. Report back
Then try:
> Generate an image of a futuristic city at night with neon lights
## 🧪 Complete Test Suite
### 1. Quick Smoke Test (1 minute)
```bash
# Run quick health check
./scripts/health-check.sh --quick
# Should show:
# ✓ Docker running
# ✓ Containers healthy
# ✓ Network OK
# ✓ Port accessible
```
### 2. Full System Test (5 minutes)
```bash
# Full health check
./scripts/health-check.sh
# Build and start with the optimized build script
./build.sh --start
# Test all components
docker compose -p mcp-comfyui-flux exec mcp-server npm test # If tests exist
```
### 3. GPU Test (if available)
```bash
# Check GPU in container
docker compose -p mcp-comfyui-flux exec comfyui nvidia-smi
# Test GPU with PyTorch 2.5.1
docker compose -p mcp-comfyui-flux exec comfyui python -c "
import torch
print(f'PyTorch version: {torch.__version__}')
print(f'CUDA available: {torch.cuda.is_available()}')
if torch.cuda.is_available():
print(f'GPU: {torch.cuda.get_device_name(0)}')
print(f'Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f}GB')
"
```
### 4. Model Test
```bash
# Verify FLUX schnell fp8 models are loaded
docker compose -p mcp-comfyui-flux exec comfyui ls -lh /app/ComfyUI/models/unet/
# Should show: flux1-schnell-fp8-e4m3fn.safetensors (~11GB)
# Check CLIP models
docker compose -p mcp-comfyui-flux exec comfyui ls -lh /app/ComfyUI/models/clip/
# Should show: clip_l.safetensors, t5xxl_fp8_e4m3fn_scaled.safetensors
# Check VAE
docker compose -p mcp-comfyui-flux exec comfyui ls -lh /app/ComfyUI/models/vae/
# Should show: ae.safetensors
```
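As a rough cross-check of total model disk usage, you can sum the same directories (sizes will vary with the model set you downloaded):
```bash
# Total size per model directory (same paths as above)
docker compose -p mcp-comfyui-flux exec comfyui du -sh \
  /app/ComfyUI/models/unet /app/ComfyUI/models/clip /app/ComfyUI/models/vae
```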
### 5. Network Test
```bash
# Test internal networking
docker compose -p mcp-comfyui-flux exec mcp-server ping -c 1 comfyui
docker compose -p mcp-comfyui-flux exec mcp-server curl http://comfyui:8188/system_stats
```
### 6. Custom Nodes Test
```bash
# Verify custom nodes are loaded
docker compose -p mcp-comfyui-flux exec comfyui ls /app/ComfyUI/custom_nodes/
# Should show: ComfyUI-Manager, ComfyUI-KJNodes, ComfyUI-RMBG
# Check import times in logs
docker compose -p mcp-comfyui-flux logs comfyui | grep "Import times"
```
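If a custom node fails to load, ComfyUI normally logs an import error at startup; the exact wording varies between versions, so grep broadly:
```bash
# Look for custom-node import failures (log wording may differ across ComfyUI versions)
docker compose -p mcp-comfyui-flux logs comfyui | grep -iE "cannot import|import failed|traceback"
```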
## 📋 Test Scenarios
### Scenario 1: Basic Image Generation (FLUX Schnell)
```bash
# Generate a simple image with optimized settings
docker compose -p mcp-comfyui-flux exec -it mcp-server node /app/examples/example.js
```
### Scenario 2: Batch Generation Test
```bash
# Generate multiple images efficiently
docker compose -p mcp-comfyui-flux exec mcp-server node -e "
const example = require('./examples/example.js');
// Generates 4 images in batch (much faster than sequential)
"
```
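For a more explicit batch test, the sketch below reuses the client API shown in Test 4. It assumes `getFluxDiffusersWorkflow` accepts a `batch_size` option (the batch_size parameter is mentioned under Next Steps); check `src/flux-workflow.js` for the actual option name before relying on it:
```bash
docker compose -p mcp-comfyui-flux exec mcp-server node -e "
const { ComfyUIClient } = require('./src/comfyui-client.js');
const { getFluxDiffusersWorkflow } = require('./src/flux-workflow.js');
(async () => {
  const client = new ComfyUIClient('comfyui:8188');
  await client.connect();
  // batch_size is an assumption; verify the option name in src/flux-workflow.js
  const workflow = getFluxDiffusersWorkflow({
    prompt: 'A futuristic spaceship, four design variations',
    width: 1024,
    height: 1024,
    steps: 4,
    batch_size: 4
  });
  const images = await client.generateImage(workflow, '/app/output');
  console.log('Generated', images.length, 'image(s)');
  client.disconnect();
})().catch(console.error);
"
```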
### Scenario 3: Upscaling Test
```bash
# Test image upscaling
docker compose -p mcp-comfyui-flux exec mcp-server node /app/examples/example-upscale.js
```
## 🔍 Verify Results
### Check Generated Images
```bash
# List generated images
ls -la output/
# View image (macOS)
open output/*.png
# View image (Linux)
xdg-open output/*.png
# Copy images from container (if needed)
docker compose -p mcp-comfyui-flux cp mcp-server:/app/output ./generated_images
```
### Check Logs
```bash
# View ComfyUI logs
docker compose -p mcp-comfyui-flux logs -f comfyui
# View MCP server logs
docker compose -p mcp-comfyui-flux logs -f mcp-server
# Check for errors
docker compose -p mcp-comfyui-flux logs | grep -i error
```
## ⚡ Quick Troubleshooting
### If containers won't start:
```bash
# Check logs
docker compose -p mcp-comfyui-flux logs
# Rebuild with optimized script
./build.sh --no-cache
docker compose -p mcp-comfyui-flux up -d
```
### If port is already in use:
```bash
# Use a different port
PORT=8189 ./install.sh --port 8189
# Or update .env
echo "PORT=8189" >> .env
docker compose -p mcp-comfyui-flux up -d  # recreate so the new port mapping takes effect
```
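To see what is actually holding the port before switching, either of these works, depending on which tool your system has:
```bash
# Identify the process bound to the default port
lsof -i :8188           # macOS, or Linux with lsof installed
ss -ltnp | grep 8188    # Linux with iproute2
```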
### If GPU not detected:
```bash
# Force CPU mode
./install.sh --cpu-only
# Or manually set in .env
echo "CUDA_VISIBLE_DEVICES=-1" >> .env
docker compose -p mcp-comfyui-flux up -d  # recreate so the container picks up the new environment
```
### If models missing:
```bash
# Download models with options
./scripts/download-models.sh minimal # Minimal set (schnell)
./scripts/download-models.sh all # All models
# Or re-run installer
./install.sh --models=minimal
```
### If connection fails:
```bash
# Restart everything with project name
docker compose -p mcp-comfyui-flux down
docker compose -p mcp-comfyui-flux up -d
# Wait for services
sleep 30
# Retry health check
./scripts/health-check.sh
```
## 🎯 Expected Results
After successful setup and testing, you should have:
1. ✅ **Running Containers**: Both comfyui and mcp-server services active
2. ✅ **Web Interface**: Accessible at http://localhost:8188 (or custom port)
3. ✅ **Generated Images**: PNG files in output/ directory
4. ✅ **MCP Integration**: Working in Claude Desktop
5. ✅ **Health Check**: All green checkmarks
6. ✅ **PyTorch 2.5.1**: Latest version with optimizations
7. ✅ **Custom Nodes**: KJNodes, RMBG, Manager all loaded
## 📊 Performance Expectations (Optimized)
| Hardware | Model | Image Size | Steps | Generation Time | VRAM Usage |
|----------|-------|------------|-------|-----------------|------------|
| RTX 4090 | FLUX schnell fp8 | 1024x1024 | 4 | ~2-4 seconds | ~10GB |
| RTX 3090 | FLUX schnell fp8 | 1024x1024 | 4 | ~4-6 seconds | ~10GB |
| RTX 3060 | FLUX schnell fp8 | 768x768 | 4 | ~8-10 seconds | ~10GB |
| CPU Only | FLUX schnell fp8 | 512x512 | 4 | ~3-5 minutes | System RAM |
**Note**: FLUX schnell is optimized for 4-step generation with cfg_scale=1.0
## 🎉 Success Indicators
You know everything is working when:
- 🟢 Health check shows all green
- 🟢 You can access http://localhost:8188
- 🟢 PyTorch 2.5.1 is reported in logs
- 🟢 Custom nodes (KJNodes, RMBG) are loaded
- 🟢 Example script generates images in ~4 seconds
- 🟢 Claude Desktop can use the MCP tools
- 🟢 Images appear in the output/ folder
- 🟢 Container size is ~11GB (optimized)
## 📈 Next Steps
Once testing is successful:
1. **Try Claude Desktop Integration**: Ask Claude to generate images
2. **Use the build script**: `./build.sh --start` for rebuilds
3. **Customize Workflows**: Edit `src/flux-workflow.js`
4. **Add Custom Models**: Place in `models/` directory
5. **Optimize Performance**: Adjust settings in `.env`
6. **Explore ComfyUI**: Use the web interface at http://localhost:8188
7. **Batch Generation**: Use batch_size parameter for multiple images
### Need help?
```bash
# Check troubleshooting guide
cat TROUBLESHOOTING.md
# Run diagnostics with debug output
./scripts/health-check.sh --debug
# Check installation log
cat install.log
# Get help with installer options
./install.sh --help
# Get help with build script
./build.sh --help
```
## 🔒 Security Notes
- Containers run as a non-root user (`comfyuser`)
- BuildKit cache mounts reduce I/O and improve security
- Environment variables are properly isolated
- Port exposure is configurable and can be restricted
- PyTorch 2.5.1 includes latest security patches
## 🔧 Advanced Usage
### Using the Optimized Build Script
```bash
# Build only
./build.sh
# Build and start
./build.sh --start
# Build without cache
./build.sh --no-cache
# Build with cleanup
./build.sh --cleanup
```
### Running Multiple Instances
```bash
# Deploy instance 1
./install.sh --project-name flux1 --port 8188
# Deploy instance 2
./install.sh --project-name flux2 --port 8189
```
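Each instance is an independent Compose project, so every command in this guide works per instance by switching the `-p` value:
```bash
# Operate on a specific instance by project name
docker compose -p flux1 ps
docker compose -p flux2 logs -f comfyui
docker compose -p flux2 down   # stops only the second instance
```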
### CI/CD Integration
```bash
# Fully automated deployment
./install.sh --yes --cpu-only --models=minimal --port 8080
# Build with script
./build.sh --start --cleanup
```
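A minimal CI job can chain the same commands. The sketch below assumes a runner with Docker available and uses only the scripts shown in this guide:
```bash
#!/usr/bin/env bash
# Minimal CI smoke-test sketch (assumes Docker is available on the runner)
set -euo pipefail
./install.sh --yes --cpu-only --models=minimal --port 8080
./scripts/health-check.sh --quick
docker compose -p mcp-comfyui-flux down   # tear down after the smoke test
```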
### Custom Environment
```bash
# Set environment variables before install
export PORT=9000
export PROJECT_NAME=my-flux
export MODELS_MODE=all
./install.sh --yes
```
## 🎨 Example Prompts for Testing
Once everything is set up, try these prompts in Claude Desktop:
1. **Simple Test**: "Generate an image of a red cube on a white background"
2. **Detailed Scene**: "Create a cyberpunk street scene with neon signs and rain"
3. **Portrait**: "Generate a portrait of a wizard with a long beard and magical staff"
4. **Landscape**: "Create a sunset over mountain peaks with dramatic clouds"
5. **Batch Test**: "Generate 4 variations of a futuristic spaceship"
Each image should generate in a few seconds on a recent GPU (about 2-4 seconds on an RTX 4090, per the table above) with the optimized FLUX schnell fp8 model!