generate_video_from_image

Animate static images into videos using text prompts to guide motion and effects. Transform photos into dynamic content by describing desired animation sequences.

Instructions

Animate an image into a video. The image serves as the starting frame and the prompt guides the animation. Use upload_file first if you have a local image.

Input Schema

Name | Required | Description | Default
--- | --- | --- | ---
image_url | Yes | URL of the image to animate (use upload_file for local images) |
prompt | Yes | Text description guiding how to animate the image (e.g., 'camera slowly pans right, gentle breeze moves the leaves') |
model | No | Image-to-video model. Options: fal-ai/wan-i2v, fal-ai/kling-video/v2.1/standard/image-to-video | fal-ai/wan-i2v
duration | No | Video duration in seconds | 5
aspect_ratio | No | Video aspect ratio (e.g., '16:9', '9:16', '1:1') | 16:9
negative_prompt | No | What to avoid in the video (e.g., 'blur, distort, low quality') |
cfg_scale | No | Classifier-free guidance scale (0.0-1.0). Lower values give more creative results. | 0.5
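
A minimal, illustrative argument object matching the schema above (the URL and prompt values are placeholders; only image_url and prompt are required):

```python
# Example arguments for generate_video_from_image, matching the input schema.
# Values are illustrative; only image_url and prompt are required.
arguments = {
    "image_url": "https://example.com/photo.jpg",
    "prompt": "camera slowly pans right, gentle breeze moves the leaves",
    "model": "fal-ai/wan-i2v",   # default model
    "duration": 5,               # seconds, allowed range 2-10
    "aspect_ratio": "16:9",
    "negative_prompt": "blur, distort, low quality",
    "cfg_scale": 0.5,            # 0.0-1.0; lower values give more creative results
}

# The two required fields per the schema:
required = ["image_url", "prompt"]
missing = [k for k in required if k not in arguments]
print(missing)  # → []
```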

Implementation Reference

  • The handler function that implements the core logic for the 'generate_video_from_image' tool. It processes input arguments, resolves the model ID, prepares Fal.ai API arguments (requiring image_url and prompt), executes the request via queue_strategy with timeout handling, extracts the video URL from the response, and returns success or error messages.
    # Imports needed by this handler (ModelRegistry and QueueStrategy are
    # project-local types; TextContent comes from the MCP Python SDK).
    import asyncio
    import logging
    from typing import Any, Dict, List

    from mcp.types import TextContent

    logger = logging.getLogger(__name__)

    async def handle_generate_video_from_image(
        arguments: Dict[str, Any],
        registry: ModelRegistry,
        queue_strategy: QueueStrategy,
    ) -> List[TextContent]:
        """Handle the generate_video_from_image tool."""
        model_input = arguments.get("model", "fal-ai/wan-i2v")
        try:
            model_id = await registry.resolve_model_id(model_input)
        except ValueError as e:
            return [
                TextContent(
                    type="text",
                    text=f"❌ {e}. Use list_models to see available options.",
                )
            ]
    
        # Both image_url and prompt are required for this tool
        fal_args: Dict[str, Any] = {
            "image_url": arguments["image_url"],
            "prompt": arguments["prompt"],
        }
        if "duration" in arguments:
            fal_args["duration"] = arguments["duration"]
        if "aspect_ratio" in arguments:
            fal_args["aspect_ratio"] = arguments["aspect_ratio"]
        if "negative_prompt" in arguments:
            fal_args["negative_prompt"] = arguments["negative_prompt"]
        if "cfg_scale" in arguments:
            fal_args["cfg_scale"] = arguments["cfg_scale"]
    
        # Use queue strategy with timeout protection
        logger.info(
            "Starting image-to-video generation with %s from %s",
            model_id,
            (
                arguments["image_url"][:50] + "..."
                if len(arguments["image_url"]) > 50
                else arguments["image_url"]
            ),
        )
        try:
            video_result = await asyncio.wait_for(
                queue_strategy.execute(model_id, fal_args, timeout=180),
                timeout=185,  # Slightly longer than internal timeout
            )
        except asyncio.TimeoutError:
            logger.error(
                "Image-to-video generation timed out after 180s. Model: %s, Image: %s",
                model_id,
                (
                    arguments["image_url"][:50] + "..."
                    if len(arguments["image_url"]) > 50
                    else arguments["image_url"]
                ),
            )
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation timed out after 180 seconds with {model_id}",
                )
            ]
    
        if video_result is None:
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation failed or timed out with {model_id}",
                )
            ]
    
        # Check for error in response
        if "error" in video_result:
            error_msg = video_result.get("error", "Unknown error")
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation failed: {error_msg}",
                )
            ]
    
        # Extract video URL from result
        video_dict = video_result.get("video", {})
        if isinstance(video_dict, dict):
            video_url = video_dict.get("url")
        else:
            video_url = video_result.get("url")
    
        if video_url:
            return [
                TextContent(
                    type="text",
                    text=f"🎬 Video generated with {model_id}: {video_url}",
                )
            ]
    
        return [
            TextContent(
                type="text",
                text="❌ Video generation completed but no video URL was returned. Please try again.",
            )
        ]
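
    The URL extraction at the end of the handler tolerates two response shapes: a nested `{"video": {"url": ...}}` object and a top-level `{"url": ...}` fallback. A standalone sketch of that logic:

    ```python
    from typing import Any, Dict, Optional

    def extract_video_url(video_result: Dict[str, Any]) -> Optional[str]:
        """Mirror of the handler's extraction: prefer result['video']['url'],
        fall back to a top-level result['url']."""
        video_dict = video_result.get("video", {})
        if isinstance(video_dict, dict):
            return video_dict.get("url")
        return video_result.get("url")

    print(extract_video_url({"video": {"url": "https://fal.media/a.mp4"}}))
    # → https://fal.media/a.mp4
    print(extract_video_url({"url": "https://fal.media/b.mp4", "video": "raw"}))
    # → https://fal.media/b.mp4
    ```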
  • Defines the input schema and metadata for the 'generate_video_from_image' tool, including required parameters 'image_url' and 'prompt', optional model, duration, aspect_ratio, etc., and description.
    Tool(
        name="generate_video_from_image",
        description="Animate an image into a video. The image serves as the starting frame and the prompt guides the animation. Use upload_file first if you have a local image.",
        inputSchema={
            "type": "object",
            "properties": {
                "image_url": {
                    "type": "string",
                    "description": "URL of the image to animate (use upload_file for local images)",
                },
                "prompt": {
                    "type": "string",
                    "description": "Text description guiding how to animate the image (e.g., 'camera slowly pans right, gentle breeze moves the leaves')",
                },
                "model": {
                    "type": "string",
                    "default": "fal-ai/wan-i2v",
                    "description": "Image-to-video model. Options: fal-ai/wan-i2v, fal-ai/kling-video/v2.1/standard/image-to-video",
                },
                "duration": {
                    "type": "integer",
                    "default": 5,
                    "minimum": 2,
                    "maximum": 10,
                    "description": "Video duration in seconds",
                },
                "aspect_ratio": {
                    "type": "string",
                    "default": "16:9",
                    "description": "Video aspect ratio (e.g., '16:9', '9:16', '1:1')",
                },
                "negative_prompt": {
                    "type": "string",
                    "description": "What to avoid in the video (e.g., 'blur, distort, low quality')",
                },
                "cfg_scale": {
                    "type": "number",
                    "default": 0.5,
                    "description": "Classifier-free guidance scale (0.0-1.0). Lower values give more creative results.",
                },
            },
            "required": ["image_url", "prompt"],
        },
    ),
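
    A stdlib-only sketch of how a server might check incoming arguments against this schema's `required` list and property types before dispatching. Real servers typically use a full JSON Schema validator; the reduced SCHEMA table below is illustrative, not the actual implementation:

    ```python
    from typing import Any, Dict, List

    # Reduced view of the inputSchema above: required fields plus expected Python types.
    SCHEMA = {
        "required": ["image_url", "prompt"],
        "properties": {
            "image_url": str,
            "prompt": str,
            "model": str,
            "duration": int,
            "aspect_ratio": str,
            "negative_prompt": str,
            "cfg_scale": (int, float),
        },
    }

    def validate_args(args: Dict[str, Any]) -> List[str]:
        """Return a list of validation errors; an empty list means the args pass."""
        errors = [f"missing required field: {k}" for k in SCHEMA["required"] if k not in args]
        for key, value in args.items():
            expected = SCHEMA["properties"].get(key)
            if expected is not None and not isinstance(value, expected):
                errors.append(f"{key}: expected {expected}, got {type(value).__name__}")
        return errors

    print(validate_args({"prompt": "zoom in slowly"}))
    # → ['missing required field: image_url']
    ```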
  • Registers the 'generate_video_from_image' tool by mapping its name to the handle_generate_video_from_image handler function in the TOOL_HANDLERS dictionary, used by the MCP server's call_tool method to route requests.
    TOOL_HANDLERS = {
        # Utility tools (no queue needed)
        "list_models": handle_list_models,
        "recommend_model": handle_recommend_model,
        "get_pricing": handle_get_pricing,
        "get_usage": handle_get_usage,
        "upload_file": handle_upload_file,
        # Image generation tools
        "generate_image": handle_generate_image,
        "generate_image_structured": handle_generate_image_structured,
        "generate_image_from_image": handle_generate_image_from_image,
        # Image editing tools
        "remove_background": handle_remove_background,
        "upscale_image": handle_upscale_image,
        "edit_image": handle_edit_image,
        "inpaint_image": handle_inpaint_image,
        "resize_image": handle_resize_image,
        "compose_images": handle_compose_images,
        # Video tools
        "generate_video": handle_generate_video,
        "generate_video_from_image": handle_generate_video_from_image,
        "generate_video_from_video": handle_generate_video_from_video,
        # Audio tools
        "generate_music": handle_generate_music,
    }
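
    The TOOL_HANDLERS mapping above lets a single call_tool entry point route requests by name. A hedged sketch of that dispatch pattern (the real server wiring is not shown on this page, and the handler signature is simplified here; the actual handlers also receive a ModelRegistry and QueueStrategy):

    ```python
    import asyncio
    from typing import Any, Awaitable, Callable, Dict, List

    # Simplified handler type for illustration only.
    Handler = Callable[[Dict[str, Any]], Awaitable[List[str]]]

    async def handle_generate_video_from_image(arguments: Dict[str, Any]) -> List[str]:
        # Stand-in for the real handler shown earlier.
        return [f"video for {arguments['image_url']}"]

    TOOL_HANDLERS: Dict[str, Handler] = {
        "generate_video_from_image": handle_generate_video_from_image,
    }

    async def call_tool(name: str, arguments: Dict[str, Any]) -> List[str]:
        # Route by tool name; unknown names produce an error message.
        handler = TOOL_HANDLERS.get(name)
        if handler is None:
            return [f"❌ Unknown tool: {name}"]
        return await handler(arguments)

    result = asyncio.run(call_tool("generate_video_from_image",
                                   {"image_url": "https://x/img.png"}))
    print(result)  # → ['video for https://x/img.png']
    ```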
  • Exports the handle_generate_video_from_image function via __all__ for easy import in server files.
    "handle_generate_video_from_image",
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the basic behavioral trait that the image serves as the starting frame and prompt guides animation, but doesn't mention important aspects like rate limits, authentication needs, output format (video file type), processing time, or error conditions. For a complex video generation tool with 7 parameters, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence states the core purpose, second provides crucial usage guidance. Both sentences earn their place by adding value beyond what's in the schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex video generation tool with 7 parameters and no annotations or output schema, the description is incomplete. It doesn't explain what the tool returns (video file format, URL, metadata), error handling, rate limits, or processing characteristics. While purpose and basic usage are clear, behavioral context is insufficient for a tool of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds minimal value beyond the schema: it mentions that the image serves as the starting frame and that the prompt guides animation, which the schema already covers for the image_url and prompt parameters. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('animate an image into a video') and distinguishes it from siblings by specifying it uses an image as the starting frame with a prompt to guide animation. It differentiates from 'generate_video' (which likely doesn't start from an image) and 'generate_video_from_video' (which starts from video).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when-to-use guidance: 'Use upload_file first if you have a local image.' This gives clear prerequisites and distinguishes from alternatives like 'upload_file' for local files. It also implies this tool is for image-to-video conversion specifically.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/raveenb/fal-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.