generate_image
Create images from text descriptions using AI models like Flux and Stable Diffusion. Specify prompts, model type, image size, and other parameters to generate custom visuals.
Instructions
Generate images from text prompts using various models (fast, uses async API)
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| image_size | No | Output image size/aspect ratio (square, landscape_4_3, landscape_16_9, portrait_3_4, portrait_9_16) | landscape_16_9 |
| model | No | Model to use for generation | flux_schnell |
| negative_prompt | No | What to avoid in the image | |
| num_images | No | Number of images to generate (1-4) | 1 |
| prompt | Yes | Text description of the image to generate | |
| seed | No | Seed for reproducible generation | |
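For illustration only, here is one way a client might invoke this tool through the MCP Python SDK. The launch command and module path are placeholders for however you actually run the server, and the argument values are arbitrary examples.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Placeholder launch command; adjust to however you start the server.
    params = StdioServerParameters(command="python", args=["-m", "fal_mcp_server.server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "generate_image",
                arguments={
                    "prompt": "a lighthouse at dusk, oil painting",
                    "model": "flux_schnell",
                    "image_size": "landscape_16_9",
                    "num_images": 2,
                    "seed": 42,
                },
            )
            print(result.content)  # text content listing the generated image URLs


asyncio.run(main())
```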
Implementation Reference
- src/fal_mcp_server/server.py:189-217 (handler): Handler logic for the 'generate_image' tool. It constructs arguments from the input, calls fal_client.run_async with the selected model, extracts image URLs from the result, and returns formatted text content with the URLs.

```python
if name == "generate_image":
    model_key = arguments.get("model", "flux_schnell")
    model_id = MODELS["image"][model_key]

    fal_args = {
        "prompt": arguments["prompt"],
        "image_size": arguments.get("image_size", "landscape_16_9"),
        "num_images": arguments.get("num_images", 1),
    }

    # Add optional parameters
    if "negative_prompt" in arguments:
        fal_args["negative_prompt"] = arguments["negative_prompt"]
    if "seed" in arguments:
        fal_args["seed"] = arguments["seed"]

    # Use native async API for fast image generation
    result = await fal_client.run_async(model_id, arguments=fal_args)

    images = result.get("images", [])
    if images:
        urls = [img["url"] for img in images]
        response = (
            f"🎨 Generated {len(urls)} image(s) with {model_key} (async):\n\n"
        )
        for i, url in enumerate(urls, 1):
            response += f"Image {i}: {url}\n"
        return [TextContent(type="text", text=response)]
```
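The same generation path can be sketched outside the server by calling fal_client directly with the endpoint ID the handler resolves. This is an illustrative sketch, assuming valid fal.ai credentials are already configured for fal_client; the prompt is an arbitrary example.

```python
import asyncio

import fal_client  # the client library the server uses


async def main() -> None:
    # Mirrors the handler's call: same endpoint ID and argument names.
    result = await fal_client.run_async(
        "fal-ai/flux/schnell",
        arguments={
            "prompt": "a lighthouse at dusk, oil painting",
            "image_size": "landscape_16_9",
            "num_images": 1,
        },
    )
    for i, img in enumerate(result.get("images", []), 1):
        print(f"Image {i}: {img['url']}")


asyncio.run(main())
```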
- src/fal_mcp_server/server.py:81-125 (registration): Registration of the 'generate_image' tool in list_tools(), including its name, description, and input schema.

```python
Tool(
    name="generate_image",
    description="Generate images from text prompts using various models (fast, uses async API)",
    inputSchema={
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "Text description of the image to generate",
            },
            "model": {
                "type": "string",
                "enum": list(MODELS["image"].keys()),
                "default": "flux_schnell",
                "description": "Model to use for generation",
            },
            "negative_prompt": {
                "type": "string",
                "description": "What to avoid in the image",
            },
            "image_size": {
                "type": "string",
                "enum": [
                    "square",
                    "landscape_4_3",
                    "landscape_16_9",
                    "portrait_3_4",
                    "portrait_9_16",
                ],
                "default": "landscape_16_9",
            },
            "num_images": {
                "type": "integer",
                "default": 1,
                "minimum": 1,
                "maximum": 4,
            },
            "seed": {
                "type": "integer",
                "description": "Seed for reproducible generation",
            },
        },
        "required": ["prompt"],
    },
),
```
- src/fal_mcp_server/server.py:85-124 (schema): Input schema definition for the 'generate_image' tool (the inputSchema object shown in the registration snippet above), specifying prompt, model, negative_prompt, image_size, num_images, and seed.
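As an aside, the constraints in this schema can be checked locally with the third-party jsonschema package (not something the server itself does). A minimal sketch, using an abridged copy of the schema:

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

# Abridged copy of the tool's input schema, for local experimentation only.
SCHEMA = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
        "num_images": {"type": "integer", "minimum": 1, "maximum": 4},
        "image_size": {
            "type": "string",
            "enum": ["square", "landscape_4_3", "landscape_16_9", "portrait_3_4", "portrait_9_16"],
        },
    },
    "required": ["prompt"],
}

validate({"prompt": "a red fox in snow", "num_images": 2}, SCHEMA)  # passes

try:
    validate({"prompt": "a red fox in snow", "num_images": 5}, SCHEMA)
except ValidationError as err:
    print(err.message)  # "5 is greater than the maximum of 4"
```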
- src/fal_mcp_server/server.py:27-33 (helper): MODELS dictionary entries mapping image model keys to the fal.ai model IDs used by the 'generate_image' handler.

```python
"image": {
    "flux_schnell": "fal-ai/flux/schnell",
    "flux_dev": "fal-ai/flux/dev",
    "flux_pro": "fal-ai/flux-pro",
    "sdxl": "fal-ai/fast-sdxl",
    "stable_diffusion": "fal-ai/stable-diffusion-v3-medium",
},
```
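For context, the handler's MODELS["image"][model_key] lookup and the registration's enum of list(MODELS["image"].keys()) both hang off this mapping. A tiny sketch of that relationship, using an abridged copy of the dictionary:

```python
# Abridged copy of the mapping, for illustration only.
MODELS = {"image": {"flux_schnell": "fal-ai/flux/schnell", "sdxl": "fal-ai/fast-sdxl"}}

valid_models = list(MODELS["image"].keys())   # feeds the schema's "model" enum
model_key = "sdxl"                            # a value a caller might pass
endpoint = MODELS["image"][model_key]         # -> "fal-ai/fast-sdxl"
print(valid_models, endpoint)
```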