ComfyUI MCP Server

by jonpojonpo

generate_image

Create images from text descriptions using ComfyUI's AI generation capabilities. Specify what to include and exclude for customized visual outputs.

Instructions

Generate an image using ComfyUI

Input Schema

Name             Required  Description                                             Default
prompt           Yes       Positive prompt describing what you want in the image
negative_prompt  No        Negative prompt describing what you don't want         bad hands, bad quality
seed             No        Seed for reproducible generation                        8566257
width            No        Image width in pixels                                   512
height           No        Image height in pixels                                  512
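
For illustration, a tool call only needs the required prompt; every other field falls back to the defaults above. The snippet below is a hedged sketch of possible tools/call arguments for this tool, not an excerpt from the server's source.

    # Minimal arguments: only "prompt" is required; the schema defaults cover the rest.
    minimal_arguments = {
        "prompt": "a watercolor painting of a lighthouse at dusk"
    }

    # Fully specified arguments overriding every default.
    full_arguments = {
        "prompt": "a watercolor painting of a lighthouse at dusk",
        "negative_prompt": "bad hands, bad quality, blurry",
        "seed": 42,
        "width": 768,
        "height": 512
    }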

Implementation Reference

  • Registers the generate_image tool with the MCP server by returning it in the list_tools handler, including description and input schema.
    @self.app.list_tools()
    async def list_tools() -> List[Tool]:
        """List available image generation tools."""
        return [
            Tool(
                name="generate_image",
                description="Generate an image using ComfyUI",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "prompt": {
                            "type": "string",
                            "description": "Positive prompt describing what you want in the image"
                        },
                        "negative_prompt": {
                            "type": "string",
                            "description": "Negative prompt describing what you don't want",
                            "default": "bad hands, bad quality"
                        },
                        "seed": {
                            "type": "number",
                            "description": "Seed for reproducible generation",
                            "default": 8566257
                        },
                        "width": {
                            "type": "number",
                            "description": "Image width in pixels",
                            "default": 512
                        },
                        "height": {
                            "type": "number",
                            "description": "Image height in pixels",
                            "default": 512
                        }
                    },
                    "required": ["prompt"]
                }
            )
        ]
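
    The handler above only advertises the tool; invocation happens through a separate call_tool handler, which is not excerpted on this page. The sketch below shows one way such a dispatch could look, assuming the MCP Python SDK's call_tool decorator, a module-level base64 import, and ImageContent from mcp.types; the handler name and the image/png MIME type are assumptions, not taken from the repository.

    # Hedged sketch of a call_tool handler dispatching to generate_image (documented below).
    @self.app.call_tool()
    async def call_tool(name: str, arguments: Dict[str, Any]) -> List[ImageContent]:
        if name != "generate_image":
            raise ValueError(f"Unknown tool: {name}")
        image_bytes = await self.generate_image(
            prompt=arguments["prompt"],
            negative_prompt=arguments.get("negative_prompt", "bad hands, bad quality"),
            seed=arguments.get("seed", 8566257),
            width=arguments.get("width", 512),
            height=arguments.get("height", 512),
        )
        # Assumes the payload is PNG data; see the SaveImageWebsocket node below.
        return [
            ImageContent(
                type="image",
                data=base64.b64encode(image_bytes).decode("ascii"),
                mimeType="image/png",
            )
        ]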
  • Executes the generate_image tool: builds ComfyUI workflow JSON, queues prompt via HTTP, listens on WebSocket for execution completion and binary image data.
    async def generate_image(
        self,
        prompt: str,
        negative_prompt: str,
        seed: int,
        width: int,
        height: int
    ) -> bytes:
        """Generate an image using ComfyUI."""
        # Construct ComfyUI workflow
        workflow = {
            "4": {
                "class_type": "CheckpointLoaderSimple",
                "inputs": {
                    "ckpt_name": "v1-5-pruned-emaonly.safetensors"
                }
            },
            "5": {
                "class_type": "EmptyLatentImage",
                "inputs": {
                    "batch_size": 1,
                    "height": height,
                    "width": width
                }
            },
            "6": {
                "class_type": "CLIPTextEncode",
                "inputs": {
                    "clip": ["4", 1],
                    "text": prompt
                }
            },
            "7": {
                "class_type": "CLIPTextEncode",
                "inputs": {
                    "clip": ["4", 1],
                    "text": negative_prompt
                }
            },
            "3": {
                "class_type": "KSampler",
                "inputs": {
                    "cfg": 8,
                    "denoise": 1,
                    "latent_image": ["5", 0],
                    "model": ["4", 0],
                    "negative": ["7", 0],
                    "positive": ["6", 0],
                    "sampler_name": "euler",
                    "scheduler": "normal",
                    "seed": seed,
                    "steps": 20
                }
            },
            "8": {
                "class_type": "VAEDecode",
                "inputs": {
                    "samples": ["3", 0],
                    "vae": ["4", 2]
                }
            },
            "save_image_websocket": {
                "class_type": "SaveImageWebsocket",
                "inputs": {
                    "images": ["8", 0]
                }
            },
            "save_image": {
                "class_type": "SaveImage",
                "inputs": {
                    "images": ["8", 0],
                    "filename_prefix": "mcp"
                }
            }
        }
    
        try:
            prompt_response = await self.queue_prompt(workflow)
            logger.info(f"Queued prompt, got response: {prompt_response}")
            prompt_id = prompt_response["prompt_id"]
        except Exception as e:
            logger.error(f"Error queuing prompt: {e}")
            raise
    
        uri = f"ws://{self.config.server_address}/ws?clientId={self.config.client_id}"
        logger.info(f"Connecting to websocket at {uri}")
        
        async with websockets.connect(uri) as websocket:
            while True:
                try:
                    message = await websocket.recv()
                    
                    if isinstance(message, str):
                        try:
                            data = json.loads(message)
                            logger.info(f"Received text message: {data}")
                            
                            if data.get("type") == "executing":
                                exec_data = data.get("data", {})
                                if exec_data.get("prompt_id") == prompt_id:
                                    node = exec_data.get("node")
                                    logger.info(f"Processing node: {node}")
                                    if node is None:
                                        logger.info("Generation complete signal received")
                                        break
                        except json.JSONDecodeError:
                            # Ignore non-JSON text frames instead of swallowing every error
                            logger.debug("Ignoring non-JSON text message")
                    else:
                        logger.info(f"Received binary message of length: {len(message)}")
                        if len(message) > 8:  # Check if we have actual image data
                            return message[8:]  # Remove binary header
                        else:
                            logger.warning(f"Received short binary message: {message}")
                
                except websockets.exceptions.ConnectionClosed as e:
                    logger.error(f"WebSocket connection closed: {e}")
                    break
                except Exception as e:
                    logger.error(f"Error processing message: {e}")
                    continue
    
        raise RuntimeError("No valid image data received")
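
    For local testing, the coroutine can be driven directly and its return value written to disk. This is a hedged sketch: the ComfyUIServer constructor name is a placeholder for however the repository wires up its server object, and the returned bytes are assumed to be PNG data (the SaveImageWebsocket output with the 8-byte header already stripped).

    # Hedged usage sketch; class name and wiring are illustrative placeholders.
    import asyncio

    async def main() -> None:
        server = ComfyUIServer()  # hypothetical constructor
        image_bytes = await server.generate_image(
            prompt="a red fox in the snow, digital painting",
            negative_prompt="bad hands, bad quality",
            seed=8566257,
            width=512,
            height=512,
        )
        with open("output.png", "wb") as f:
            f.write(image_bytes)

    asyncio.run(main())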
  • Input schema defining parameters for the generate_image tool: prompt (required), negative_prompt, seed, width, height with defaults.
    inputSchema={
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "Positive prompt describing what you want in the image"
            },
            "negative_prompt": {
                "type": "string",
                "description": "Negative prompt describing what you don't want",
                "default": "bad hands, bad quality"
            },
            "seed": {
                "type": "number",
                "description": "Seed for reproducible generation",
                "default": 8566257
            },
            "width": {
                "type": "number",
                "description": "Image width in pixels",
                "default": 512
            },
            "height": {
                "type": "number",
                "description": "Image height in pixels",
                "default": 512
            }
        },
        "required": ["prompt"]
    }
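
    Because the tool publishes no output schema, clients can only validate inputs. As a hedged illustration, arguments could be checked against this inputSchema with the third-party jsonschema package (not part of this server's code) before dispatching a call.

    # Illustrative validation of call arguments against the inputSchema above.
    from jsonschema import ValidationError, validate

    input_schema = {
        "type": "object",
        "properties": {
            "prompt": {"type": "string"},
            "negative_prompt": {"type": "string", "default": "bad hands, bad quality"},
            "seed": {"type": "number", "default": 8566257},
            "width": {"type": "number", "default": 512},
            "height": {"type": "number", "default": 512},
        },
        "required": ["prompt"],
    }

    try:
        validate(instance={"prompt": "a castle on a hill", "width": 768}, schema=input_schema)
    except ValidationError as e:
        print(f"Invalid arguments: {e.message}")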
  • Helper function to queue the ComfyUI workflow prompt via HTTP POST to /prompt endpoint and return the prompt_id.
    async def queue_prompt(self, prompt: Dict[str, Any]) -> Dict[str, Any]:
        """Queue a prompt with ComfyUI."""
        async with aiohttp.ClientSession() as session:
            try:
                async with session.post(
                    f"http://{self.config.server_address}/prompt",
                    json={
                        "prompt": prompt,
                        "client_id": self.config.client_id
                    }
                ) as response:
                    if response.status != 200:
                        text = await response.text()
                        raise RuntimeError(f"Failed to queue prompt: {response.status} - {text}")
                    return await response.json()
            except aiohttp.ClientError as e:
                raise RuntimeError(f"HTTP request failed: {e}")
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'using ComfyUI' but doesn't explain what that entails: whether the service is local or remote, expected latency, rate limits, authentication needs, or the output format (an image file, a URL, etc.). This leaves significant gaps in understanding how the tool behaves beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words: 'Generate an image using ComfyUI'. It's front-loaded and appropriately sized for the tool's complexity, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema, no annotations), the description is incomplete. It doesn't cover behavioral aspects like how images are returned (e.g., as files, base64), error handling, or usage constraints. With no output schema, the description should ideally hint at return values, but it doesn't, leaving the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: all parameters (prompt, negative_prompt, seed, width, height) are documented in the input schema. The description adds no additional semantic context about parameters, such as typical values or constraints, so it relies entirely on the schema. This meets the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Generate an image using ComfyUI' states the basic action (generate) and resource (image) with the specific tool (ComfyUI), but it lacks detail about what kind of image generation (e.g., AI-based, from text prompts) or any distinguishing features. With no sibling tools, differentiation isn't needed, but the purpose remains somewhat vague beyond the high-level action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool, such as scenarios for image generation, prerequisites, or alternatives. It simply states the action without context, leaving the agent to infer usage based on the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
