
MCP ComfyUI Flux - Optimized Docker Solution


A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.

🌟 Features

  • 🚀 Optimized Performance: PyTorch 2.5.1 with native RMSNorm support

  • 📦 Efficient Images: 25% smaller Docker images (10.9GB vs 14.6GB)

  • ⚡ Fast Rebuilds: BuildKit cache mounts for rapid iterations

  • 🎨 FLUX Models: Supports schnell (4-step) and dev models with fp8 quantization

  • 🤖 MCP Integration: Works seamlessly with Claude Desktop

  • 💪 GPU Acceleration: Automatic NVIDIA GPU detection and CUDA 12.1

  • 🔄 Background Removal: Built-in RMBG-2.0 for transparent backgrounds

  • 📈 Image Upscaling: 4x upscaling with UltraSharp/AnimeSharp models

  • 🛡️ Production Ready: Health checks, auto-recovery, extensive logging

📋 Table of Contents

  • Quick Start
  • System Requirements
  • Installation
  • MCP Tools
  • Docker Management
  • Advanced Features
  • Troubleshooting
  • Architecture
  • Security
  • Additional Documentation
  • Contributing
  • License
  • Acknowledgments

🚀 Quick Start

```bash
# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux

# Run the automated installer
./install.sh

# Or build manually with the optimized build script
./build.sh --start

# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services
```

💻 System Requirements

Minimum Requirements

  • OS: Linux, macOS, Windows 10+ (WSL2)

  • CPU: 4 cores

  • RAM: 16GB (20GB for WSL2)

  • Storage: 50GB free space

  • Docker: 20.10+

  • Docker Compose: 2.0+ or 1.29+ (legacy)

Recommended Requirements

  • CPU: 8+ cores

  • RAM: 32GB

  • GPU: NVIDIA RTX 3090/4090 (12GB+ VRAM)

  • Storage: 100GB free space

  • CUDA: 12.1+ with NVIDIA Container Toolkit

WSL2 Specific (Windows)

```ini
# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
```

📦 Installation

Prerequisites

  1. Install Docker:

```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com | bash

# macOS
brew install docker docker-compose

# Windows - Install Docker Desktop
```
  2. Install NVIDIA Container Toolkit (for GPU):

```bash
# Ubuntu/Debian
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

Automated Installation

```bash
# Standard installation
./install.sh

# Non-interactive installation
./install.sh --yes

# CPU-only mode
./install.sh --cpu-only

# With specific models
./install.sh --models minimal   # or all/none/auto

# Debug mode
./install.sh --debug
```

Build Script Options

```bash
# Build only
./build.sh

# Build and start
./build.sh --start

# Build with cleanup
./build.sh --start --cleanup

# Rebuild without cache
./build.sh --no-cache
```

🎨 MCP Tools

Available Tools in Claude Desktop

1. generate_image

Generate images using the FLUX schnell fp8 model (optimized defaults).

```javascript
// Parameters
{
  "prompt": "a majestic mountain landscape, golden hour",  // Required
  "negative_prompt": "blurry, low quality",                // Optional
  "width": 1024,      // Default: 1024
  "height": 1024,     // Default: 1024
  "steps": 4,         // Default: 4 (schnell optimized)
  "cfg_scale": 1.0,   // Default: 1.0 (schnell optimized)
  "seed": -1,         // Default: -1 (random)
  "batch_size": 1     // Default: 1 (max: 8)
}

// Example usage
generate_image({
  prompt: "cyberpunk city at night, neon lights, detailed",
  steps: 4,
  seed: 42
})
```

2. upscale_image

Upscale images to 4x resolution using AI models.

```javascript
// Parameters
{
  "image_path": "flux_output_00001_.png",  // Required
  "model": "ultrasharp",       // Options: "ultrasharp", "animesharp"
  "scale_factor": 1.0,         // Additional scaling (0.5-2.0)
  "content_type": "general"    // Auto-select model based on content
}

// Example usage
upscale_image({
  image_path: "output/my_image.png",
  model: "ultrasharp"
})
```

3. remove_background

Remove the background from an image using the RMBG-2.0 AI model.

```javascript
// Parameters
{
  "image_path": "output/image.png",  // Required
  "alpha_matting": true,             // Better edge quality (default: true)
  "output_format": "png"             // Options: "png", "webp"
}

// Example usage
remove_background({
  image_path: "flux_output_00001_.png"
})
```

4. check_models

Verify available models in ComfyUI.

```javascript
// No parameters required
check_models()
```

5. connect_comfyui / disconnect_comfyui

Manage the ComfyUI connection (the server usually auto-connects).
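No parameters are documented for these tools; a minimal sketch of cycling the connection manually, e.g. after restarting the containers (invocation style assumed to match the tools above):

```javascript
// Usually unnecessary — the MCP server auto-connects on startup.
disconnect_comfyui()
connect_comfyui()
```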

MCP Configuration

Add to Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows):

```json
{
  "mcpServers": {
    "comfyui-flux": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
      ]
    }
  }
}
```

For macOS/Linux:

```json
{
  "mcpServers": {
    "comfyui-flux": {
      "command": "docker",
      "args": [
        "exec",
        "-i",
        "mcp-comfyui-flux-mcp-server-1",
        "node",
        "/app/src/index.js"
      ]
    }
  }
}
```

🐳 Docker Management

Service Commands

```bash
# Start services
docker-compose -p mcp-comfyui-flux up -d

# Stop services
docker-compose -p mcp-comfyui-flux down

# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui

# Check status
docker-compose -p mcp-comfyui-flux ps

# Restart services
docker-compose -p mcp-comfyui-flux restart
```

Container Access

```bash
# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash

# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh

# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi

# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"
```

Health Monitoring

```bash
# Full health check
./scripts/health-check.sh

# Check ComfyUI API
curl http://localhost:8188/system_stats

# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
```
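The same API check can be scripted from Node; a minimal sketch using the built-in fetch (the `devices` field follows ComfyUI's current /system_stats response shape, which may change):

```javascript
// Minimal programmatic health check against the ComfyUI API (Node 18+, ESM).
const res = await fetch('http://localhost:8188/system_stats');
if (!res.ok) throw new Error(`ComfyUI unhealthy: HTTP ${res.status}`);
const stats = await res.json();
console.log('Devices:', stats.devices?.map((d) => d.name));
```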

🚀 Advanced Features

Performance Optimizations

The optimized build includes:

  • PyTorch 2.5.1: Latest stable with native RMSNorm support

  • BuildKit Cache Mounts: Reduces I/O operations in WSL2

  • FP8 Quantization: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)

  • Multi-stage Builds: Separates build and runtime dependencies

  • Compiled Python: Pre-compiled bytecode for faster startup

FLUX Model Configurations

Schnell (Default - Fast)

  • Steps: 4 (optimized for schnell)

  • CFG Scale: 1.0 (works best with low guidance)

  • Scheduler: simple

  • Generation Time: ~2-4 seconds per image

  • VRAM Usage: ~10GB base + 1GB per batch

Dev (High Quality)

  • Steps: 20-50

  • CFG Scale: 7.0

  • Scheduler: normal/karras

  • Requires: Hugging Face authentication

  • VRAM Usage: ~12-16GB
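With the dev checkpoint installed and selected, the same generate_image tool can be driven with dev-appropriate settings. A sketch using the values from the list above (how the dev model is selected is not covered here, so treat this as illustrative):

```javascript
// Hypothetical dev-quality call: more steps and higher guidance than schnell.
generate_image({
  prompt: "portrait of an astronaut, studio lighting, ultra detailed",
  steps: 28,       // dev range per this README: 20-50
  cfg_scale: 7.0,  // dev default per this README
  seed: 1234
})
```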

Batch Generation

Generate multiple images efficiently:

```javascript
generate_image({
  prompt: "fantasy landscape",
  batch_size: 4  // Generates 4 variations in parallel
})
```
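The tools also chain together; a hypothetical end-to-end flow (the file names are illustrative placeholders — real paths come from each tool's response):

```javascript
// Illustrative pipeline: generate, upscale, then cut out the background.
generate_image({ prompt: "product photo of a ceramic mug", batch_size: 1 })
upscale_image({ image_path: "flux_output_00001_.png", model: "ultrasharp" })
remove_background({ image_path: "output/upscaled.png" })  // placeholder path
```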

Custom Nodes

Included custom nodes:

  • ComfyUI-Manager: Node management and updates

  • ComfyUI-KJNodes: Advanced processing nodes

  • ComfyUI-RMBG: Background removal (31 nodes)

🔧 Troubleshooting

Common Issues

GPU Not Detected

```bash
# Verify NVIDIA driver
nvidia-smi

# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi

# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

Out of Memory

```bash
# Reduce batch size in generate_image calls (batch_size: 1)

# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1

# Adjust PyTorch memory allocator (in .env)
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
```

WSL2 Specific Issues

```bash
# If Docker/WSL2 crashes with I/O errors:
# - Avoid recursive chown on large directories
# - Use the optimized Dockerfile, which handles this

# Increase WSL2 memory in .wslconfig
memory=20GB

# Reset WSL2 if needed
wsl --shutdown
```

Port Conflicts

```bash
# Check what's using port 8188
lsof -i :8188                  # macOS/Linux
netstat -ano | findstr :8188   # Windows

# Use a different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d
```

Log Locations

  • Installation: install.log

  • Docker builds: docker-compose logs

  • ComfyUI: Inside container at /app/ComfyUI/user/comfyui.log

  • MCP Server: docker logs mcp-comfyui-flux-mcp-server-1

šŸ—ļø Architecture

System Overview

```
┌─────────────────────────────────────┐
│    Claude Desktop (MCP Client)      │
└──────────────┬──────────────────────┘
               │ docker exec stdio
┌──────────────▼──────────────────────┐
│       MCP Server Container          │
│  • Node.js 20 Alpine (581MB)        │
│  • MCP Protocol Implementation      │
│  • Auto-connects to ComfyUI         │
└──────────────┬──────────────────────┘
               │ WebSocket (port 8188)
┌──────────────▼──────────────────────┐
│        ComfyUI Container            │
│  • Ubuntu 22.04 + CUDA 12.1         │
│  • Python 3.11 + PyTorch 2.5.1      │
│  • FLUX schnell fp8 (4.5GB)         │
│  • Custom nodes (KJNodes, RMBG)     │
│  • Optimized image size: 10.9GB     │
└─────────────────────────────────────┘
```
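For reference, the MCP-server-to-ComfyUI hop uses ComfyUI's standard HTTP + WebSocket API. A minimal standalone sketch of that protocol (Node with the ws package; the workflow graph itself is omitted — this illustrates the wire protocol, not the project's actual client code in src/comfyui-client.js):

```javascript
// Queue a workflow over HTTP, then watch progress over WebSocket.
// Assumes ComfyUI on localhost:8188 and `npm install ws`.
import WebSocket from 'ws';
import { randomUUID } from 'node:crypto';

const clientId = randomUUID();
const ws = new WebSocket(`ws://localhost:8188/ws?clientId=${clientId}`);

ws.on('message', (data, isBinary) => {
  if (isBinary) return; // preview image frames are binary; skip them
  const msg = JSON.parse(data.toString());
  // ComfyUI signals completion with an "executing" event whose node is null.
  if (msg.type === 'executing' && msg.data.node === null) {
    console.log('Workflow finished:', msg.data.prompt_id);
    ws.close();
  }
});

ws.on('open', async () => {
  // `workflow` would be an API-format graph (see src/workflows/).
  const workflow = {/* ... */};
  const res = await fetch('http://localhost:8188/prompt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: workflow, client_id: clientId }),
  });
  console.log('Queued:', (await res.json()).prompt_id);
});
```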

Key Improvements

  1. Docker Optimization

    • Multi-stage builds reduce image size by 25%

    • BuildKit cache mounts speed up rebuilds

    • No Python venv (Docker IS the isolation)

  2. Model Configuration

    • FLUX schnell fp8: 4.5GB (vs 11GB fp16)

    • T5-XXL fp8: 4.9GB text encoder

    • CLIP-L: 235MB text encoder

    • VAE: 320MB decoder

  3. Performance

    • 4-step generation in 2-4 seconds

    • Batch processing up to 8 images

    • Native RMSNorm in PyTorch 2.5.1

    • High VRAM mode for 24GB+ GPUs

Directory Structure

```
mcp-comfyui-flux/
├── src/                    # MCP server source
│   ├── index.js            # MCP protocol handler
│   ├── comfyui-client.js   # WebSocket client
│   └── workflows/          # ComfyUI workflows
├── models/                 # Model storage
│   ├── unet/               # FLUX models (fp8)
│   ├── clip/               # Text encoders
│   ├── vae/                # VAE models
│   └── upscale_models/     # Upscaling models
├── output/                 # Generated images
├── scripts/                # Utility scripts
├── docker-compose.yml      # Service orchestration
├── Dockerfile.comfyui      # Optimized ComfyUI
├── Dockerfile.mcp          # MCP server
├── requirements.txt        # Python dependencies
├── build.sh                # Build script
└── install.sh              # Automated installer
```

🔒 Security

  • Local Execution: All processing happens locally

  • No External APIs: Except model downloads from Hugging Face

  • Container Isolation: Services run in isolated containers

  • Non-root Execution: Containers run as non-root user

  • Token Security: Stored in .env (gitignored)

📄 Additional Documentation

šŸ¤ Contributing

Contributions welcome! Please:

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Submit a pull request

šŸ“ License

MIT License - see LICENSE file for details.

šŸ™ Acknowledgments


Made with ❤️ for efficient AI image generation
