# luma_ray2_image
Convert a still image into a video by describing the desired motion. Provide an image URL, specify the movement, and generate a video in one of the supported durations and aspect ratios.
## Instructions
Luma Ray 2 I2V - Latest Luma image-to-video
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| image_url | Yes | URL of the input image | |
| prompt | Yes | Motion description prompt | |
| duration | No | Video duration in seconds (5 or 10) | 5 |
| aspect_ratio | No | Aspect ratio of the output video (16:9, 9:16, or 1:1) | 16:9 |
| negative_prompt | No | What to avoid in the video | |
| cfg_scale | No | How closely to follow the prompt (0 to 1) | 0.5 |
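
The snippet below is an illustrative argument set for a luma_ray2_image call; the image URL and prompt are placeholders, and the optional fields are shown with their documented defaults or allowed values.

```typescript
// Illustrative arguments for a luma_ray2_image call (placeholder values).
const exampleArgs = {
  image_url: 'https://example.com/portrait.jpg', // must be a reachable image URL
  prompt: 'Slow dolly-in while hair drifts in a light breeze',
  duration: '5',                // '5' or '10' seconds
  aspect_ratio: '16:9',         // '16:9', '9:16', or '1:1'
  negative_prompt: 'blur, flicker, distortion',
  cfg_scale: 0.5                // 0 to 1, how closely to follow the prompt
};
```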
## Implementation Reference
- src/index.ts:566-625 (handler): Core execution handler for the luma_ray2_image tool. It configures the FAL client, builds the input parameters, calls the model endpoint, processes the returned video (local download and optional data URL), and returns the formatted result.

```typescript
private async handleImageToVideo(args: any, model: any) {
  const { image_url, prompt, duration = '5', aspect_ratio = '16:9', negative_prompt, cfg_scale } = args;

  try {
    // Configure FAL client lazily with query config override
    configureFalClient(this.currentQueryConfig);

    const inputParams: any = { image_url, prompt };

    // Add optional parameters
    if (duration) inputParams.duration = duration;
    if (aspect_ratio) inputParams.aspect_ratio = aspect_ratio;
    if (negative_prompt) inputParams.negative_prompt = negative_prompt;
    if (cfg_scale !== undefined) inputParams.cfg_scale = cfg_scale;

    const result = await fal.subscribe(model.endpoint, { input: inputParams });
    const videoData = result.data as FalVideoResult;
    const videoProcessed = await downloadAndProcessVideo(videoData.video.url, model.id);

    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify({
            model: model.name,
            id: model.id,
            endpoint: model.endpoint,
            input_image: image_url,
            prompt,
            video: {
              url: videoData.video.url,
              localPath: videoProcessed.localPath,
              ...(videoProcessed.dataUrl && { dataUrl: videoProcessed.dataUrl }),
              width: videoData.video.width,
              height: videoData.video.height,
            },
            metadata: inputParams,
            download_path: DOWNLOAD_PATH,
            data_url_settings: {
              enabled: ENABLE_DATA_URLS,
              max_size_mb: Math.round(MAX_DATA_URL_SIZE / 1024 / 1024),
            },
            autoopen_settings: {
              enabled: AUTOOPEN,
              note: AUTOOPEN ? "Files automatically opened with default application" : "Auto-open disabled"
            },
          }, null, 2),
        },
      ],
    };
  } catch (error) {
    throw new Error(`${model.name} generation failed: ${error}`);
  }
}
```
- src/index.ts:380-390 (schema): Input schema definition for image-to-video tools, including luma_ray2_image, specifying the image_url, prompt, duration, aspect_ratio, negative_prompt, and cfg_scale parameters.

```typescript
} else if (category === 'imageToVideo') {
  baseSchema.inputSchema.properties = {
    image_url: { type: 'string', description: 'URL of the input image' },
    prompt: { type: 'string', description: 'Motion description prompt' },
    duration: { type: 'string', enum: ['5', '10'], default: '5', description: 'Video duration in seconds' },
    aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1'], default: '16:9' },
    negative_prompt: { type: 'string', description: 'What to avoid in the video' },
    cfg_scale: { type: 'number', default: 0.5, minimum: 0, maximum: 1, description: 'How closely to follow the prompt' }
  };
  baseSchema.inputSchema.required = ['image_url', 'prompt'];
}
```
- src/index.ts:119-127 (registration): MODEL_REGISTRY entry registering luma_ray2_image with its endpoint and metadata, used for tool discovery and dispatch.

```typescript
imageToVideo: [
  { id: 'ltx_video', endpoint: 'fal-ai/ltx-video-13b-distilled/image-to-video', name: 'LTX Video', description: 'Fast and high-quality image-to-video conversion' },
  { id: 'kling_master_image', endpoint: 'fal-ai/kling-video/v2.1/master/image-to-video', name: 'Kling 2.1 Master I2V', description: 'Premium image-to-video conversion' },
  { id: 'pixverse_image', endpoint: 'fal-ai/pixverse/v4.5/image-to-video', name: 'Pixverse V4.5 I2V', description: 'Advanced image-to-video' },
  { id: 'wan_pro_image', endpoint: 'fal-ai/wan-pro/image-to-video', name: 'Wan Pro I2V', description: 'Professional image animation' },
  { id: 'hunyuan_image', endpoint: 'fal-ai/hunyuan-video-image-to-video', name: 'Hunyuan I2V', description: 'Open-source image-to-video' },
  { id: 'vidu_image', endpoint: 'fal-ai/vidu/image-to-video', name: 'Vidu I2V', description: 'High-quality image animation' },
  { id: 'luma_ray2_image', endpoint: 'fal-ai/luma-dream-machine/ray-2/image-to-video', name: 'Luma Ray 2 I2V', description: 'Latest Luma image-to-video' }
]
```
- src/index.ts:404-408 (registration): list_tools handler logic that dynamically builds the tool schema for each imageToVideo model, including luma_ray2_image, via generateToolSchema.

```typescript
  tools.push(this.generateToolSchema(model, 'textToVideo'));
}
for (const model of MODEL_REGISTRY.imageToVideo) {
  tools.push(this.generateToolSchema(model, 'imageToVideo'));
}
```
- src/index.ts:476-482 (handler): Dispatch logic in the CallToolRequestSchema handler that resolves luma_ray2_image to the imageToVideo category and routes the call to handleImageToVideo.

```typescript
if (MODEL_REGISTRY.imageGeneration.find(m => m.id === name)) {
  return await this.handleImageGeneration(args, model);
} else if (MODEL_REGISTRY.textToVideo.find(m => m.id === name)) {
  return await this.handleTextToVideo(args, model);
} else if (MODEL_REGISTRY.imageToVideo.find(m => m.id === name)) {
  return await this.handleImageToVideo(args, model);
}
```
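
For context, a client reaches this dispatch through a standard MCP tools/call request whose name matches the registry id; the handler then returns a text content item containing the JSON summary built in handleImageToVideo (video URL, local path, and optional data URL). The request below is only an illustrative sketch; the argument values are placeholders.

```typescript
// Illustrative MCP tools/call request that the dispatch above routes to
// handleImageToVideo (placeholder argument values).
const exampleRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'luma_ray2_image',
    arguments: {
      image_url: 'https://example.com/still.png',
      prompt: 'Gentle pan to the right as clouds drift across the sky'
    }
  }
};
```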