# MCP Tools API Documentation
Complete API reference for the MCP ComfyUI Flux server tools.
## Overview
The MCP ComfyUI Flux server implements the Model Context Protocol (MCP) and provides tools for AI-assisted image generation using FLUX models through ComfyUI. The optimized implementation uses PyTorch 2.5.1 with fp8 quantized models for efficient GPU memory usage.
### Base Configuration
```javascript
{
"name": "mcp-comfyui-flux",
"version": "1.0.0",
"capabilities": {
"tools": {}
}
}
```
## Available Tools
### 1. connect_comfyui
Establishes a WebSocket connection to the ComfyUI server. Auto-connects on startup when running in Docker.
#### Request Schema
```typescript
{
name: "connect_comfyui",
description: "Connect to ComfyUI server",
inputSchema: {
type: "object",
properties: {
server_address: {
type: "string",
description: "ComfyUI server address (default: uses Docker network)",
default: "comfyui:8188" // In Docker mode
}
}
}
}
```
#### Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `server_address` | string | No | `comfyui:8188` (Docker) or `127.0.0.1:8188` | ComfyUI server address |
#### Response
```typescript
{
content: [
{
type: "text",
text: "Successfully connected to ComfyUI at <address>"
}
]
}
```
#### Example Usage
```javascript
// Auto-connects on startup in Docker
// Manual connection only needed if disconnected
await connect_comfyui({});
```
---
### 2. generate_image
Generates an image using FLUX schnell fp8 model with optimized defaults for fast generation.
#### Request Schema
```typescript
{
name: "generate_image",
description: "Generate an image using FLUX schnell fp8 model in ComfyUI",
inputSchema: {
type: "object",
properties: {
prompt: {
type: "string",
description: "The text prompt to generate an image from"
},
negative_prompt: {
type: "string",
description: "Negative prompt to avoid certain features",
default: ""
},
width: {
type: "number",
description: "Width of the generated image",
default: 1024
},
height: {
type: "number",
description: "Height of the generated image",
default: 1024
},
steps: {
type: "number",
description: "Number of sampling steps (optimized for schnell)",
default: 4 // FLUX schnell is designed for 4-step generation
},
cfg_scale: {
type: "number",
description: "Classifier-free guidance scale",
default: 1.0 // Low CFG optimal for schnell
},
seed: {
type: "number",
description: "Random seed for reproducibility (-1 for random)",
default: -1
},
sampler_name: {
type: "string",
description: "Sampling method to use",
default: "euler",
enum: [
"euler",
"euler_ancestral",
"heun",
"dpm_2",
"dpm_2_ancestral",
"lms",
"dpm_fast",
"dpm_adaptive",
"dpmpp_2s_ancestral",
"dpmpp_sde",
"dpmpp_2m",
"dpmpp_3m_sde"
]
},
scheduler: {
type: "string",
description: "Scheduler to use",
default: "simple", // Optimal for schnell
enum: [
"normal",
"karras",
"exponential",
"simple",
"ddim_uniform"
]
},
batch_size: {
type: "number",
description: "Number of images to generate in parallel",
default: 1,
minimum: 1,
maximum: 8
}
},
required: ["prompt"]
}
}
```
#### Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `prompt` | string | **Yes** | - | Text description of the desired image |
| `negative_prompt` | string | No | `""` | Features to avoid in the generated image |
| `width` | number | No | `1024` | Image width in pixels (512-2048) |
| `height` | number | No | `1024` | Image height in pixels (512-2048) |
| `steps` | number | No | `4` | Number of denoising steps (FLUX schnell optimized for 4) |
| `cfg_scale` | number | No | `1.0` | Guidance scale (1.0-2.0 for schnell) |
| `seed` | number | No | `-1` | Random seed (-1 for random) |
| `sampler_name` | string | No | `"euler"` | Sampling algorithm |
| `scheduler` | string | No | `"simple"` | Noise scheduler (simple works best with schnell) |
| `batch_size` | number | No | `1` | Number of images to generate (1-8) |
#### Response
```typescript
{
content: [
{
type: "text",
text: "Successfully generated <n> image(s)! Saved to: <filename>"
}
]
}
```
**Note**: The response returns the saved file path rather than base64-encoded image data, to avoid memory issues with large images.
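Because the saved path is only available inside the response text, a client that wants to chain tools can parse it back out. A minimal sketch, assuming the `Saved to:` format shown above (the `extractSavedPath` helper is hypothetical):

```javascript
// Hypothetical helper: pull the saved filename out of the tool's text response.
// Assumes the "Saved to: <filename>" format shown in the Response example above.
function extractSavedPath(result) {
  const text = result.content?.[0]?.text ?? "";
  const match = text.match(/Saved to:\s*(\S+)/);
  return match ? match[1] : null;
}

const result = await generate_image({ prompt: "mountain lake at dawn" });
const savedPath = extractSavedPath(result);
if (savedPath) {
  await upscale_image({ image_path: savedPath });
}
```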
#### Optimized Settings for FLUX Schnell
```javascript
// FLUX schnell fp8 optimal settings (default)
{
steps: 4, // Schnell is distilled for 4-step generation
cfg_scale: 1.0, // Low CFG works best
scheduler: "simple", // Matches training
sampler_name: "euler" // Fast and stable
}
```
#### Example Usage
```javascript
// Simple generation with optimized defaults
await generate_image({
prompt: "a serene japanese garden with cherry blossoms"
});
// Batch generation for variations
await generate_image({
prompt: "cyberpunk city at night",
batch_size: 4, // Generate 4 variations
seed: 100 // Seeds will be 100, 101, 102, 103
});
// High quality with specific settings
await generate_image({
prompt: "professional portrait photo, studio lighting",
negative_prompt: "blurry, distorted",
width: 768,
height: 1024,
seed: 42
});
```
---
### 3. upscale_image
Upscales images to 4x their resolution using AI upscaling models.
#### Request Schema
```typescript
{
name: "upscale_image",
description: "Upscale an image using AI models",
inputSchema: {
type: "object",
properties: {
image_path: {
type: "string",
description: "Path to the image file to upscale"
},
model: {
type: "string",
description: "Upscaling model to use",
default: "ultrasharp",
enum: ["ultrasharp", "animesharp"]
},
scale_factor: {
type: "number",
description: "Additional scaling factor (1.0 = 4x native)",
default: 1.0,
minimum: 0.5,
maximum: 2.0
},
content_type: {
type: "string",
description: "Auto-select model based on content",
enum: ["general", "anime", "artwork", "illustration"]
}
},
required: ["image_path"]
}
}
```
#### Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `image_path` | string | **Yes** | - | Path to image (e.g., "flux_output_00001_.png") |
| `model` | string | No | `"ultrasharp"` | Upscaling model |
| `scale_factor` | number | No | `1.0` | Additional scaling (0.5-2.0) |
| `content_type` | string | No | - | Auto-select model by content type |
#### Available Models
- **4x-UltraSharp**: General purpose, excellent for photos and realistic images
- **4x-AnimeSharp**: Optimized for anime, illustrations, and artwork
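When `content_type` is supplied, the model is chosen automatically. A minimal sketch of the expected mapping, inferred from the model descriptions above (the server's actual selection logic may differ):

```javascript
// Expected content_type -> model mapping, inferred from the descriptions above
function modelForContent(contentType) {
  switch (contentType) {
    case "anime":
    case "artwork":
    case "illustration":
      return "animesharp"; // anime, illustrations, artwork
    default:
      return "ultrasharp"; // "general" and unspecified: photos, realistic images
  }
}
```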
#### Response
```typescript
{
content: [
{
type: "text",
text: "Successfully upscaled image! Saved to: upscaled_<timestamp>.png"
}
]
}
```
#### Example Usage
```javascript
// General upscaling
await upscale_image({
image_path: "flux_output_00001_.png",
model: "ultrasharp"
});
// Anime/artwork upscaling
await upscale_image({
image_path: "output/anime_art.png",
model: "animesharp"
});
// Auto-select model
await upscale_image({
image_path: "output/image.png",
content_type: "anime" // Will use animesharp
});
```
---
### 4. remove_background
Removes the background from images using the RMBG-2.0 AI model.
#### Request Schema
```typescript
{
name: "remove_background",
description: "Remove background from an image using AI",
inputSchema: {
type: "object",
properties: {
image_path: {
type: "string",
description: "Path to the image file"
},
alpha_matting: {
type: "boolean",
description: "Use alpha matting for better edges",
default: true
},
output_format: {
type: "string",
description: "Output format",
default: "png",
enum: ["png", "webp"]
}
},
required: ["image_path"]
}
}
```
#### Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `image_path` | string | **Yes** | - | Path to image file |
| `alpha_matting` | boolean | No | `true` | Use alpha matting for hair/fur |
| `output_format` | string | No | `"png"` | Output format (png/webp) |
#### Response
```typescript
{
content: [
{
type: "text",
text: "Successfully removed background! Saved to: bg_removed_<timestamp>.png"
}
]
}
```
#### Example Usage
```javascript
// Simple background removal
await remove_background({
image_path: "flux_output_00001_.png"
});
// Without alpha matting (sharper edges)
await remove_background({
image_path: "output/portrait.png",
alpha_matting: false
});
```
---
### 5. check_models
Verifies which models are available in ComfyUI.
#### Request Schema
```typescript
{
name: "check_models",
description: "Check available models in ComfyUI",
inputSchema: {
type: "object",
properties: {}
}
}
```
#### Response
```typescript
{
content: [
{
type: "text",
text: "Available models:\n- FLUX schnell fp8 (11GB)\n- T5-XXL fp8 (4.9GB)\n- CLIP-L (235MB)\n- VAE (320MB)\n- RMBG-2.0\n- 4x-UltraSharp\n- 4x-AnimeSharp"
}
]
}
```
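#### Example Usage
For example, to confirm the model files are in place before a first generation (the response shape follows the example above):
```javascript
// Verify that the required models are present before generating
const models = await check_models({});
console.log(models.content[0].text);
```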
---
### 6. disconnect_comfyui
Closes the connection to the ComfyUI server.
#### Request Schema
```typescript
{
name: "disconnect_comfyui",
description: "Disconnect from ComfyUI server",
inputSchema: {
type: "object",
properties: {}
}
}
```
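#### Example Usage
The tool takes no parameters; call it when the session is finished and use `connect_comfyui` to reconnect later:
```javascript
// Close the WebSocket connection to ComfyUI
await disconnect_comfyui({});
```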
---
## Workflow Templates
### Optimized FLUX Schnell fp8 Workflow
The server uses an optimized workflow with fp8 quantized models:
```javascript
{
// Separate loaders for fp8 models
unet_loader: "flux1-schnell-fp8-e4m3fn.safetensors", // 11GB
clip_loaders: [
"t5xxl_fp8_e4m3fn_scaled.safetensors", // 4.9GB
"clip_l.safetensors" // 235MB
],
vae_loader: "ae.safetensors", // 320MB
// Optimized settings for schnell
sampler_settings: {
steps: 4, // Distilled for 4-step generation
cfg_scale: 1.0, // Low guidance optimal
scheduler: "simple", // Matches training
sampler_name: "euler"
}
}
```
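To illustrate how these settings map onto a queued workflow, here is a hypothetical sketch that patches a ComfyUI API-format workflow and submits it to ComfyUI's `/prompt` endpoint. The node ID `"3"` and the `queueSchnellWorkflow` helper are assumptions for illustration, not the server's actual implementation:

```javascript
// Hypothetical sketch: apply the schnell settings to an API-format workflow
// and queue it over HTTP. Node ID "3" (the KSampler) is a placeholder.
async function queueSchnellWorkflow(workflow) {
  workflow["3"].inputs.steps = 4;            // distilled for 4-step generation
  workflow["3"].inputs.cfg = 1.0;            // low guidance works best
  workflow["3"].inputs.scheduler = "simple";
  workflow["3"].inputs.sampler_name = "euler";

  // ComfyUI accepts API-format workflows on POST /prompt
  const response = await fetch("http://comfyui:8188/prompt", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: workflow })
  });
  return response.json(); // { prompt_id, number, node_errors }
}
```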
## System Architecture
### Docker Container Setup
```yaml
ComfyUI Container:
- Ubuntu 22.04 with CUDA 12.1
- PyTorch 2.5.1 (latest stable)
- Python 3.11
- FLUX schnell fp8 model
- Custom nodes: Manager, KJNodes, RMBG
- Runs with --highvram flag
MCP Server Container:
- Node.js 20 Alpine
- Auto-connects to ComfyUI on startup
- WebSocket client for real-time updates
```
### Memory Optimization
| Component | Size | Notes |
|-----------|------|-------|
| FLUX schnell fp8 | 11GB | 50% smaller than fp16 |
| T5-XXL fp8 | 4.9GB | Quantized text encoder |
| CLIP-L | 235MB | Standard CLIP model |
| VAE | 320MB | Autoencoder |
| **Total VRAM** | ~10GB | With --highvram flag |
## Performance Metrics
### Generation Speed (RTX 4090)
| Resolution | Batch Size | Time per Image |
|------------|------------|----------------|
| 1024x1024 | 1 | ~2-4 seconds |
| 1024x1024 | 4 | ~1.5 seconds each |
| 768x768 | 1 | ~1.5-3 seconds |
| 1536x1536 | 1 | ~4-6 seconds |
### Optimizations
- **fp8 Quantization**: 50% memory reduction, 95% quality retention
- **BuildKit Cache**: Faster Docker rebuilds
- **Batch Processing**: Native ComfyUI batch support
- **--highvram Flag**: Keeps models in VRAM
## Best Practices
### 1. Connection Management
```javascript
// Auto-connects on startup in Docker
// Only reconnect if needed
try {
  await generate_image({ prompt: "test" });
} catch (error) {
  // Error details arrive as a message string; check for a lost connection
  if (String(error.message ?? error).includes("Not connected")) {
    await connect_comfyui({});
  }
}
```
### 2. Prompt Engineering
```javascript
// Be descriptive for best results
{
prompt: "serene japanese garden, cherry blossoms, koi pond, morning mist, soft sunlight",
// Schnell works well without negative prompts
negative_prompt: ""
}
// Style keywords work well
{
prompt: "portrait, oil painting style, dramatic lighting"
}
// Quality modifiers
{
prompt: "high quality, detailed, sharp focus, professional photography"
}
```
### 3. Optimal Settings by Use Case
```javascript
// Fast generation (default)
{
steps: 4,
cfg_scale: 1.0,
scheduler: "simple"
}
// Batch variations
{
batch_size: 4,
seed: 100 // Sequential seeds
}
// Reproducible results
{
seed: 42, // Fixed seed
sampler_name: "euler"
}
```
### 4. Workflow Combinations
```javascript
// Generate and upscale
await generate_image({
prompt: "landscape painting"
});
await upscale_image({
image_path: "flux_output_00001_.png",
model: "ultrasharp"
});
// Generate and remove background
await generate_image({
prompt: "product photo on white background"
});
await remove_background({
image_path: "flux_output_00001_.png"
});
```
## Integration Examples
### Claude Desktop Configuration
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "docker",
"args": [
"compose", "-p", "mcp-comfyui-flux",
"exec", "-T", "mcp-server",
"node", "/app/src/index.js"
]
}
}
}
```
### WSL2 Configuration (Windows)
```json
{
"mcpServers": {
"comfyui-flux": {
"command": "wsl.exe",
"args": [
"bash", "-c",
"cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
]
}
}
}
```
## Error Handling
### Common Errors and Solutions
| Error | Cause | Solution |
|-------|-------|----------|
| `Not connected to ComfyUI` | Connection lost | Auto-reconnects on next request |
| `Out of memory` | Image too large | Reduce resolution or batch size |
| `Model not found` | Missing model files | Run `./scripts/download-models.sh` |
| `Host 'localhost:8188' cannot contain ':'` | aiohttp version issue | Update already applied in optimized build |
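These recoveries can also be applied in client code. A minimal sketch, assuming the error messages surface as shown in the table (the `generateWithFallback` wrapper is hypothetical):

```javascript
// Hypothetical retry wrapper: reconnect on connection loss, halve the batch
// size on out-of-memory (reducing resolution would also work), rethrow otherwise.
async function generateWithFallback(params) {
  try {
    return await generate_image(params);
  } catch (error) {
    const message = String(error.message ?? error);
    if (message.includes("Not connected")) {
      await connect_comfyui({});
      return generate_image(params);
    }
    if (message.includes("Out of memory") && (params.batch_size ?? 1) > 1) {
      const halved = Math.max(1, Math.floor((params.batch_size ?? 1) / 2));
      return generate_image({ ...params, batch_size: halved });
    }
    throw error;
  }
}
```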
## Environment Variables
```bash
# GPU Configuration
CUDA_VISIBLE_DEVICES=0 # Use first GPU
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 # Memory optimization
# Model Precision
MODEL_PRECISION=fp16 # Operations precision (models are fp8)
# ComfyUI Settings
COMFYUI_HOST=comfyui # Docker service name
COMFYUI_PORT=8188
```
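For reference, a client running inside the same Docker network could derive the server address from these variables. A minimal sketch (the fallback values mirror the defaults documented above):

```javascript
// Build the ComfyUI address from the environment, falling back to the
// Docker-network defaults documented above.
const host = process.env.COMFYUI_HOST ?? "comfyui";
const port = process.env.COMFYUI_PORT ?? "8188";
const serverAddress = `${host}:${port}`;

await connect_comfyui({ server_address: serverAddress });
```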
## Health Monitoring
### Check Container Health
```bash
docker ps --format "table {{.Names}}\t{{.Status}}"
```
### Check GPU Usage
```bash
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
```
### View Logs
```bash
docker compose -p mcp-comfyui-flux logs -f
```
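### Programmatic Health Check
ComfyUI also exposes a `/system_stats` HTTP endpoint that reports device and VRAM information, which can be polled from client code. A minimal sketch (the default address assumes the Docker service name used in the environment section):

```javascript
// Poll ComfyUI's /system_stats endpoint to confirm the server is up and
// report free VRAM per device.
async function checkComfyUIHealth(address = "comfyui:8188") {
  const response = await fetch(`http://${address}/system_stats`);
  if (!response.ok) {
    throw new Error(`ComfyUI unhealthy: HTTP ${response.status}`);
  }
  const stats = await response.json();
  for (const device of stats.devices ?? []) {
    console.log(`${device.name}: ${(device.vram_free / 1e9).toFixed(1)} GB VRAM free`);
  }
  return stats;
}
```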
## Version History
### v1.0.0 (Current - Optimized)
- PyTorch 2.5.1 with native RMSNorm support
- FLUX schnell fp8 quantized models
- Auto-connect on startup
- Batch generation support
- Background removal with RMBG-2.0
- AI upscaling with 4x models
- BuildKit optimizations
- 47% smaller Docker images
- Fixed aiohttp/yarl compatibility
### Planned Features
- Image-to-image workflows
- ControlNet integration
- LoRA support
- SDXL model support
- Real-time generation progress
- Multi-GPU support
## System Requirements
### Minimum
- 16GB RAM
- 12GB VRAM (for fp8 models)
- 50GB disk space
- Docker 20.10+
### Recommended
- 20GB+ RAM
- 16GB+ VRAM
- 100GB disk space
- NVIDIA RTX 3060 or better
### Optimal
- 32GB RAM
- 24GB VRAM
- 200GB disk space
- NVIDIA RTX 4090
## Support
For issues or questions:
- GitHub Issues: [mcp-comfyui-flux/issues](https://github.com/yourusername/mcp-comfyui-flux/issues)
- Documentation: [CLAUDE.md](./CLAUDE.md)
- Setup Guide: [README.md](./README.md)