edit_image

Modify images by describing changes in natural language. Upload an image URL and provide instructions like 'make the sky more dramatic' or 'change the car color to red' to apply AI-powered edits.

Instructions

Edit an image using natural language instructions. Describe what changes you want and the AI will apply them.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| image_url | Yes | URL of the image to edit (use upload_file for local images) | |
| instruction | Yes | Natural language description of the edit (e.g., 'make the sky more dramatic', 'change the car color to red') | |
| model | No | Editing model. Options: fal-ai/flux-2/edit, fal-ai/flux-2-pro/edit (higher quality) | fal-ai/flux-2/edit |
| strength | No | How much to change the image (0=minimal, 1=maximum) | 0.75 |
| seed | No | Seed for reproducible edits | |
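
Put together, the arguments for a single call might look like the Python dict below. This is an illustrative example, not taken from the project; the URL and seed are placeholders:

```python
# Example arguments for an edit_image call; values are illustrative only.
arguments = {
    "image_url": "https://example.com/photo.png",  # placeholder URL
    "instruction": "make the sky more dramatic",
    "model": "fal-ai/flux-2/edit",  # default; fal-ai/flux-2-pro/edit for higher quality
    "strength": 0.75,               # 0 = minimal change, 1 = maximum
    "seed": 42,                     # optional, for reproducible edits
}
```

Only `image_url` and `instruction` are required; the rest fall back to the schema defaults.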

Implementation Reference

  • The handler function that implements the core logic for the 'edit_image' tool. It resolves the model, prepares fal_args with image_url and instruction, executes via queue_strategy, handles errors/timeouts, extracts the output URL, and formats the response.
    async def handle_edit_image(
        arguments: Dict[str, Any],
        registry: ModelRegistry,
        queue_strategy: QueueStrategy,
    ) -> List[TextContent]:
        """Handle the edit_image tool for natural language image editing."""
        model_input = arguments.get("model", "fal-ai/flux-2/edit")
        try:
            model_id = await registry.resolve_model_id(model_input)
        except ValueError as e:
            return [
                TextContent(
                    type="text",
                    text=f"❌ {e}. Use list_models to see available options.",
                )
            ]
    
        fal_args: Dict[str, Any] = {
            "image_urls": [arguments["image_url"]],  # Flux 2 Edit expects array
            "prompt": arguments["instruction"],
        }
    
        # Add optional parameters
        if "strength" in arguments:
            fal_args["strength"] = arguments["strength"]
        if "seed" in arguments:
            fal_args["seed"] = arguments["seed"]
    
        logger.info(
            "Starting image edit with %s: '%s'", model_id, arguments["instruction"][:50]
        )
    
        try:
            result = await asyncio.wait_for(
                queue_strategy.execute_fast(model_id, fal_args),
                timeout=90,
            )
        except asyncio.TimeoutError:
            logger.error("Image edit timed out for %s", model_id)
            return [
                TextContent(
                    type="text",
                    text="❌ Image editing timed out after 90 seconds. Please try again.",
                )
            ]
        except Exception as e:
            logger.exception("Image editing failed: %s", e)
            return [
                TextContent(
                    type="text",
                    text=f"❌ Image editing failed: {e}",
                )
            ]
    
        # Check for error in response
        if "error" in result:
            error_msg = result.get("error", "Unknown error")
            logger.error("Image editing failed for %s: %s", model_id, error_msg)
            return [
                TextContent(
                    type="text",
                    text=f"❌ Image editing failed: {error_msg}",
                )
            ]
    
        # Extract the result image URL - Flux 2 edit returns {"images": [{"url": "..."}]}
        images = result.get("images", [])
        if images:
            output_url = images[0].get("url") if isinstance(images[0], dict) else images[0]
        else:
            # Fallback to other common response formats
            image_data = result.get("image", {})
            if isinstance(image_data, dict):
                output_url = image_data.get("url")
            else:
                output_url = result.get("image_url")
    
        if not output_url:
            logger.warning("Image edit returned no image. Result: %s", result)
            return [
                TextContent(
                    type="text",
                    text="❌ Image editing completed but no image was returned.",
                )
            ]
    
        response = "✏️ Image edited successfully!\n\n"
        response += f"**Instruction**: {arguments['instruction']}\n\n"
        response += f"**Result**: {output_url}"
        return [TextContent(type="text", text=response)]
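
The URL-extraction fallback chain in the handler can be isolated into a small testable helper. The sketch below is not code from the project; it covers the three response shapes the handler anticipates. Note one behavioral difference: because the handler uses `result.get("image", {})`, its final `image_url` fallback is only reached when `"image"` is present and not a dict, whereas this sketch also reaches it when `"image"` is absent:

```python
from typing import Optional

def extract_output_url(result: dict) -> Optional[str]:
    """Find the edited-image URL across the response shapes the handler
    anticipates: {"images": [...]}, {"image": {...}}, or {"image_url": ...}."""
    images = result.get("images", [])
    if images:
        first = images[0]
        # Entries may be dicts with a "url" key or bare URL strings.
        return first.get("url") if isinstance(first, dict) else first
    image_data = result.get("image")
    if isinstance(image_data, dict):
        return image_data.get("url")
    return result.get("image_url")
```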
  • The Tool schema definition for 'edit_image', including inputSchema with properties like image_url, instruction, model, strength, seed, and required fields.
    Tool(
        name="edit_image",
        description="Edit an image using natural language instructions. Describe what changes you want and the AI will apply them.",
        inputSchema={
            "type": "object",
            "properties": {
                "image_url": {
                    "type": "string",
                    "description": "URL of the image to edit (use upload_file for local images)",
                },
                "instruction": {
                    "type": "string",
                    "description": "Natural language description of the edit (e.g., 'make the sky more dramatic', 'change the car color to red')",
                },
                "model": {
                    "type": "string",
                    "default": "fal-ai/flux-2/edit",
                    "description": "Editing model. Options: fal-ai/flux-2/edit, fal-ai/flux-2-pro/edit (higher quality)",
                },
                "strength": {
                    "type": "number",
                    "default": 0.75,
                    "minimum": 0.0,
                    "maximum": 1.0,
                    "description": "How much to change the image (0=minimal, 1=maximum)",
                },
                "seed": {
                    "type": "integer",
                    "description": "Seed for reproducible edits",
                },
            },
            "required": ["image_url", "instruction"],
        },
    ),
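
A client can pre-check the schema's two required fields before calling the tool. The helper below is illustrative and not part of the server; a real MCP client would typically run a full JSON Schema validator instead:

```python
# Required fields taken from the edit_image inputSchema above.
EDIT_IMAGE_REQUIRED = ("image_url", "instruction")

def missing_required(arguments: dict) -> list:
    """Return the names of required edit_image arguments that are absent."""
    return [name for name in EDIT_IMAGE_REQUIRED if name not in arguments]
```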
  • The TOOL_HANDLERS dictionary that registers 'edit_image' mapped to handle_edit_image function, used by the call_tool handler to route requests.
    TOOL_HANDLERS = {
        # Utility tools (no queue needed)
        "list_models": handle_list_models,
        "recommend_model": handle_recommend_model,
        "get_pricing": handle_get_pricing,
        "get_usage": handle_get_usage,
        "upload_file": handle_upload_file,
        # Image generation tools
        "generate_image": handle_generate_image,
        "generate_image_structured": handle_generate_image_structured,
        "generate_image_from_image": handle_generate_image_from_image,
        # Image editing tools
        "remove_background": handle_remove_background,
        "upscale_image": handle_upscale_image,
        "edit_image": handle_edit_image,
        "inpaint_image": handle_inpaint_image,
        "resize_image": handle_resize_image,
        "compose_images": handle_compose_images,
        # Video tools
        "generate_video": handle_generate_video,
        "generate_video_from_image": handle_generate_video_from_image,
        "generate_video_from_video": handle_generate_video_from_video,
        # Audio tools
        "generate_music": handle_generate_music,
    }
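
The dictionary above drives a simple name-to-handler dispatch. A minimal, self-contained sketch of that routing follows; it uses a stand-in handler, since the real handlers also require a registry and queue strategy:

```python
import asyncio

# Stand-in handler: the project's handlers also take a registry and
# queue strategy; this one is simplified so the sketch runs on its own.
async def handle_demo(arguments: dict) -> str:
    return f"edited: {arguments['instruction']}"

HANDLERS = {"edit_image": handle_demo}

async def call_tool(name: str, arguments: dict) -> str:
    """Route a tool call by name, mirroring the TOOL_HANDLERS lookup."""
    handler = HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return await handler(arguments)

result = asyncio.run(call_tool("edit_image", {"instruction": "make the sky dramatic"}))
```

Unknown tool names fail fast with a `ValueError` rather than reaching a handler.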
  • Import statement in server.py that brings in the handle_edit_image handler for registration in TOOL_HANDLERS.
    from fal_mcp_server.handlers import (
        handle_compose_images,
        handle_edit_image,
        handle_generate_image,
        handle_generate_image_from_image,
        handle_generate_image_structured,
        handle_generate_music,
        handle_generate_video,
        handle_generate_video_from_image,
        handle_generate_video_from_video,
        handle_get_pricing,
        handle_get_usage,
        handle_inpaint_image,
        handle_list_models,
        handle_recommend_model,
        handle_remove_background,
        handle_resize_image,
        handle_upload_file,
        handle_upscale_image,
    )
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that 'the AI will apply' changes, implying a mutation operation, but it does not say whether the edit is destructive, whether authentication is required, whether rate limits apply, or what the output format is. For a tool with five parameters and no annotations, this is inadequate: key behavioral traits such as response handling and error conditions go unmentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: two sentences that directly state the tool's function and how to use it, with no wasted words. Every sentence earns its place by clarifying the core action and input method, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an image editing tool with 5 parameters, no annotations, and no output schema, the description is incomplete. It doesn't address behavioral aspects like what the tool returns (e.g., a modified image URL), error handling, or usage constraints. For a mutation tool without structured output information, this leaves significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema, only implying natural language input for 'instruction'. It doesn't explain parameter interactions or provide additional context, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Edit an image using natural language instructions.' It specifies the verb ('edit') and resource ('image'), distinguishing it from sibling tools like 'generate_image' or 'resize_image'. However, it doesn't explicitly differentiate from 'inpaint_image' or 'compose_images', which might also involve image editing, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: it mentions using 'upload_file for local images' in the schema, but the description itself lacks explicit when-to-use instructions. It doesn't clarify when to choose this tool over alternatives like 'inpaint_image' or 'resize_image', nor does it mention prerequisites or exclusions. This leaves the agent with little contextual direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/raveenb/fal-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server