# text_to_image
Generate images from text prompts using ComfyUI's Stable Diffusion pipeline, with configurable parameters for seed, steps, CFG scale, and denoise strength.
## Instructions
Generate an image from a prompt.

**Args:**

- `prompt`: The prompt to generate the image from.
- `seed`: The seed to use for the image generation.
- `steps`: The number of steps to use for the image generation.
- `cfg`: The CFG scale to use for the image generation.
- `denoise`: The denoise strength to use for the image generation.
## Input Schema
| Name | Type | Required | Description | Default |
|---|---|---|---|---|
| prompt | string | Yes | The prompt to generate the image from. | |
| seed | integer | Yes | The seed to use for the image generation. | |
| steps | integer | Yes | The number of steps to use for the image generation. | |
| cfg | number | Yes | The CFG scale to use for the image generation. | |
| denoise | number | Yes | The denoise strength to use for the image generation. | |
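All five parameters are required, so a caller must supply each one explicitly. Below is a minimal sketch of invoking the tool with the official `mcp` Python client over stdio; the server launch command (`python src/server.py`) and all argument values are illustrative assumptions, not part of this project's documented interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Assumption: the MCP server is launched as a local stdio subprocess.
    server = StdioServerParameters(command="python", args=["src/server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "text_to_image",
                arguments={
                    "prompt": "a watercolor fox in a snowy forest",
                    "seed": 42,      # fixed seed for reproducibility
                    "steps": 20,     # number of sampling steps
                    "cfg": 7.0,      # classifier-free guidance scale
                    "denoise": 1.0,  # full denoise for pure text-to-image
                },
            )
            print(result.content)

asyncio.run(main())
```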
## Implementation Reference
- `src/server.py:14-31` (handler): the handler function for the `text_to_image` MCP tool. Registration happens via `@mcp.tool()`, the type hints serve as the input schema, and the core logic delegates to `ComfyUI.process_workflow`.

```python
@mcp.tool()
async def text_to_image(prompt: str, seed: int, steps: int, cfg: float, denoise: float) -> Any:
    """Generate an image from a prompt.

    Args:
        prompt: The prompt to generate the image from.
        seed: The seed to use for the image generation.
        steps: The number of steps to use for the image generation.
        cfg: The CFG scale to use for the image generation.
        denoise: The denoise strength to use for the image generation.
    """
    auth = os.environ.get("COMFYUI_AUTHENTICATION")
    comfy = ComfyUI(
        url=f'http://{os.environ.get("COMFYUI_HOST", "localhost")}:{os.environ.get("COMFYUI_PORT", 8188)}',
        authentication=auth,
    )
    images = await comfy.process_workflow(
        "text_to_image",
        {"prompt": prompt, "seed": seed, "steps": steps, "cfg": cfg, "denoise": denoise},
        return_url=os.environ.get("RETURN_URL", "true").lower() == "true",
    )
    return images
```
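The handler resolves its ComfyUI connection entirely from environment variables (`COMFYUI_HOST`, `COMFYUI_PORT`, `COMFYUI_AUTHENTICATION`, `RETURN_URL`), so it can be retargeted without code changes. The sketch below exercises the tool function directly; it assumes `src/server.py` is importable as `server`, that FastMCP's `@mcp.tool()` leaves the function callable as plain Python (as in the official SDK), and that a ComfyUI instance is actually listening on the configured address.

```python
import asyncio
import os

# Grounded in the handler above: host, port, auth, and URL mode all come from env vars.
os.environ["COMFYUI_HOST"] = "localhost"
os.environ["COMFYUI_PORT"] = "8188"
os.environ["RETURN_URL"] = "true"  # "true" (the default) returns URLs rather than raw image data
# os.environ["COMFYUI_AUTHENTICATION"] = "..."  # only if the ComfyUI server requires it

from server import text_to_image  # assumption: src/ is on PYTHONPATH

images = asyncio.run(
    text_to_image(prompt="a lighthouse at dusk", seed=7, steps=25, cfg=6.5, denoise=1.0)
)
print(images)
```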
- `src/client/comfyui.py:44-68` (helper): method on the `ComfyUI` client that, when given the string `"text_to_image"`, loads `text_to_image.json` from the workflow directory (default `workflows/`), applies the parameters, queues the prompt on the ComfyUI server, and waits for the resulting images over a websocket.

```python
async def process_workflow(self, workflow: Any, params: Dict[str, Any], return_url: bool = False):
    if isinstance(workflow, str):
        workflow_path = os.path.join(os.environ.get("WORKFLOW_DIR", "workflows"), f"{workflow}.json")
        if not os.path.exists(workflow_path):
            raise Exception(f"Workflow {workflow} not found")
        with open(workflow_path, "r", encoding="utf-8") as f:
            prompt = json.load(f)
    else:
        prompt = workflow

    self.update_workflow_params(prompt, params)

    ws = websocket.WebSocket()
    ws_url = (
        f"ws://{os.environ.get('COMFYUI_HOST', 'localhost')}:"
        f"{os.environ.get('COMFYUI_PORT', 8188)}/ws?clientId={self.client_id}"
    )
    if self.authentication:
        ws.connect(ws_url, header=[f"Authorization: {self.authentication}"])
    else:
        ws.connect(ws_url)
    try:
        images = self.get_images(ws, prompt, return_url)
        return images
    finally:
        ws.close()
```
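Because the `workflow` argument may be either a name or an already-parsed dict, callers can bypass the file lookup entirely, which is convenient for tests or generated workflows. A short sketch under the assumption that `comfy` is a `ComfyUI` client constructed as in the handler above:

```python
import json

# Option 1: by name. Resolves to {WORKFLOW_DIR}/text_to_image.json
# (WORKFLOW_DIR defaults to "workflows") and raises if the file is missing.
async def by_name(comfy, params):
    return await comfy.process_workflow("text_to_image", params, return_url=True)

# Option 2: by dict. Skips file resolution entirely.
async def by_dict(comfy, params, path="workflows/text_to_image.json"):
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    return await comfy.process_workflow(workflow, params, return_url=True)
```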
- `src/client/comfyui.py:104-124` (helper): method that writes the supplied parameters into the matching workflow nodes: the prompt text (under the `text` key) into `CLIPTextEncode` nodes, `seed`/`steps`/`cfg`/`denoise` into `KSampler` nodes, and an `image` filename into `LoadImage` nodes. Used by the `text_to_image` tool.

```python
def update_workflow_params(self, prompt, params):
    if not params:
        return
    for node in prompt.values():
        if node["class_type"] == "CLIPTextEncode" and "text" in params:
            if isinstance(node["inputs"]["text"], str):
                node["inputs"]["text"] = params["text"]
        elif node["class_type"] == "KSampler":
            if "seed" in params:
                node["inputs"]["seed"] = params["seed"]
            if "steps" in params:
                node["inputs"]["steps"] = params["steps"]
            if "cfg" in params:
                node["inputs"]["cfg"] = params["cfg"]
            if "denoise" in params:
                node["inputs"]["denoise"] = params["denoise"]
        elif node["class_type"] == "LoadImage" and "image" in params:
            node["inputs"]["image"] = params["image"]
```
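To make the node matching concrete, here is a small demonstration on a toy two-node workflow in ComfyUI's API format. The node ids, input values, and client construction are illustrative only.

```python
def demo():
    # Assumed client, constructed as in the tool handler above.
    client = ComfyUI(url="http://localhost:8188", authentication=None)

    # Toy workflow: nodes keyed by id, each with a class_type and an inputs mapping.
    wf = {
        "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
        "3": {"class_type": "KSampler",
              "inputs": {"seed": 0, "steps": 20, "cfg": 8.0, "denoise": 1.0}},
    }

    # Note the "text" key: this is what the CLIPTextEncode branch looks for.
    client.update_workflow_params(wf, {"text": "a watercolor fox", "seed": 42, "cfg": 7.0})

    assert wf["6"]["inputs"]["text"] == "a watercolor fox"
    assert wf["3"]["inputs"]["seed"] == 42
    assert wf["3"]["inputs"]["cfg"] == 7.0
    # steps and denoise keep their existing values because they were not in params.
```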