
generate_video_from_image

Animate static images into videos using text prompts to guide motion and effects. Transform photos into dynamic content by describing desired animation sequences.

Instructions

Animate an image into a video. The image serves as the starting frame and the prompt guides the animation. Use upload_file first if you have a local image.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| image_url | Yes | URL of the image to animate (use upload_file for local images) | |
| prompt | Yes | Text description guiding how to animate the image (e.g., 'camera slowly pans right, gentle breeze moves the leaves') | |
| model | No | Image-to-video model. Options: fal-ai/wan-i2v, fal-ai/kling-video/v2.1/standard/image-to-video | fal-ai/wan-i2v |
| duration | No | Video duration in seconds (2-10) | 5 |
| aspect_ratio | No | Video aspect ratio (e.g., '16:9', '9:16', '1:1') | 16:9 |
| negative_prompt | No | What to avoid in the video (e.g., 'blur, distort, low quality') | |
| cfg_scale | No | Classifier-free guidance scale (0.0-1.0). Lower values give more creative results. | 0.5 |
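
For illustration, a typical argument payload an MCP client might send for this tool could look like the sketch below. Only image_url and prompt are required; the URL is a placeholder, and the optional fields simply repeat the schema defaults.

    arguments = {
        "image_url": "https://example.com/photo.png",  # placeholder; use upload_file for local files
        "prompt": "camera slowly pans right, gentle breeze moves the leaves",
        "model": "fal-ai/wan-i2v",
        "duration": 5,
        "aspect_ratio": "16:9",
        "negative_prompt": "blur, distort, low quality",
        "cfg_scale": 0.5,
    }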

Implementation Reference

  • The handler function that implements the core logic for the 'generate_video_from_image' tool. It resolves the requested model ID, prepares the Fal.ai API arguments (image_url and prompt are required), executes the request via queue_strategy with timeout handling, extracts the video URL from the response, and returns a success or error message. A hypothetical invocation sketch follows this reference list.
    async def handle_generate_video_from_image(
        arguments: Dict[str, Any],
        registry: ModelRegistry,
        queue_strategy: QueueStrategy,
    ) -> List[TextContent]:
        """Handle the generate_video_from_image tool."""
        model_input = arguments.get("model", "fal-ai/wan-i2v")
        try:
            model_id = await registry.resolve_model_id(model_input)
        except ValueError as e:
            return [
                TextContent(
                    type="text",
                    text=f"❌ {e}. Use list_models to see available options.",
                )
            ]

        # Both image_url and prompt are required for this tool
        fal_args: Dict[str, Any] = {
            "image_url": arguments["image_url"],
            "prompt": arguments["prompt"],
        }
        if "duration" in arguments:
            fal_args["duration"] = arguments["duration"]
        if "aspect_ratio" in arguments:
            fal_args["aspect_ratio"] = arguments["aspect_ratio"]
        if "negative_prompt" in arguments:
            fal_args["negative_prompt"] = arguments["negative_prompt"]
        if "cfg_scale" in arguments:
            fal_args["cfg_scale"] = arguments["cfg_scale"]

        # Use queue strategy with timeout protection
        logger.info(
            "Starting image-to-video generation with %s from %s",
            model_id,
            (
                arguments["image_url"][:50] + "..."
                if len(arguments["image_url"]) > 50
                else arguments["image_url"]
            ),
        )
        try:
            video_result = await asyncio.wait_for(
                queue_strategy.execute(model_id, fal_args, timeout=180),
                timeout=185,  # Slightly longer than internal timeout
            )
        except asyncio.TimeoutError:
            logger.error(
                "Image-to-video generation timed out after 180s. Model: %s, Image: %s",
                model_id,
                (
                    arguments["image_url"][:50] + "..."
                    if len(arguments["image_url"]) > 50
                    else arguments["image_url"]
                ),
            )
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation timed out after 180 seconds with {model_id}",
                )
            ]

        if video_result is None:
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation failed or timed out with {model_id}",
                )
            ]

        # Check for error in response
        if "error" in video_result:
            error_msg = video_result.get("error", "Unknown error")
            return [
                TextContent(
                    type="text",
                    text=f"❌ Video generation failed: {error_msg}",
                )
            ]

        # Extract video URL from result
        video_dict = video_result.get("video", {})
        if isinstance(video_dict, dict):
            video_url = video_dict.get("url")
        else:
            video_url = video_result.get("url")

        if video_url:
            return [
                TextContent(
                    type="text",
                    text=f"🎬 Video generated with {model_id}: {video_url}",
                )
            ]
        return [
            TextContent(
                type="text",
                text="❌ Video generation completed but no video URL was returned. Please try again.",
            )
        ]
  • Defines the input schema and metadata for the 'generate_video_from_image' tool, including the required parameters 'image_url' and 'prompt', the optional parameters (model, duration, aspect_ratio, negative_prompt, cfg_scale), and the tool description.
    Tool(
        name="generate_video_from_image",
        description="Animate an image into a video. The image serves as the starting frame and the prompt guides the animation. Use upload_file first if you have a local image.",
        inputSchema={
            "type": "object",
            "properties": {
                "image_url": {
                    "type": "string",
                    "description": "URL of the image to animate (use upload_file for local images)",
                },
                "prompt": {
                    "type": "string",
                    "description": "Text description guiding how to animate the image (e.g., 'camera slowly pans right, gentle breeze moves the leaves')",
                },
                "model": {
                    "type": "string",
                    "default": "fal-ai/wan-i2v",
                    "description": "Image-to-video model. Options: fal-ai/wan-i2v, fal-ai/kling-video/v2.1/standard/image-to-video",
                },
                "duration": {
                    "type": "integer",
                    "default": 5,
                    "minimum": 2,
                    "maximum": 10,
                    "description": "Video duration in seconds",
                },
                "aspect_ratio": {
                    "type": "string",
                    "default": "16:9",
                    "description": "Video aspect ratio (e.g., '16:9', '9:16', '1:1')",
                },
                "negative_prompt": {
                    "type": "string",
                    "description": "What to avoid in the video (e.g., 'blur, distort, low quality')",
                },
                "cfg_scale": {
                    "type": "number",
                    "default": 0.5,
                    "description": "Classifier-free guidance scale (0.0-1.0). Lower values give more creative results.",
                },
            },
            "required": ["image_url", "prompt"],
        },
    ),
  • Registers the 'generate_video_from_image' tool by mapping its name to the handle_generate_video_from_image handler in the TOOL_HANDLERS dictionary, which the MCP server's call_tool method uses to route requests. A dispatch sketch follows this reference list.
    TOOL_HANDLERS = {
        # Utility tools (no queue needed)
        "list_models": handle_list_models,
        "recommend_model": handle_recommend_model,
        "get_pricing": handle_get_pricing,
        "get_usage": handle_get_usage,
        "upload_file": handle_upload_file,
        # Image generation tools
        "generate_image": handle_generate_image,
        "generate_image_structured": handle_generate_image_structured,
        "generate_image_from_image": handle_generate_image_from_image,
        # Image editing tools
        "remove_background": handle_remove_background,
        "upscale_image": handle_upscale_image,
        "edit_image": handle_edit_image,
        "inpaint_image": handle_inpaint_image,
        "resize_image": handle_resize_image,
        "compose_images": handle_compose_images,
        # Video tools
        "generate_video": handle_generate_video,
        "generate_video_from_image": handle_generate_video_from_image,
        "generate_video_from_video": handle_generate_video_from_video,
        # Audio tools
        "generate_music": handle_generate_music,
    }
  • Exports the handle_generate_video_from_image function via __all__ for easy import in server files.
    "handle_generate_video_from_image",
