ltx_video
Convert images into high-quality videos with the FAL Image/Video MCP Server by specifying a motion prompt, duration, and aspect ratio. Ideal for creating dynamic visual content from static images.
Instructions
LTX Video - Fast and high-quality image-to-video conversion
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| aspect_ratio | No | Video aspect ratio: 16:9, 9:16, or 1:1 | 16:9 |
| cfg_scale | No | How closely to follow the prompt (0 to 1) | 0.5 |
| duration | No | Video duration in seconds (5 or 10) | 5 |
| image_url | Yes | URL of the input image | |
| negative_prompt | No | What to avoid in the video | |
| prompt | Yes | Motion description prompt | |
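
As an illustration, the arguments for a ltx_video call could look like the sketch below; the URL and prompt values are placeholders, and only image_url and prompt are required.

```typescript
// Illustrative ltx_video arguments (placeholder values); only image_url and prompt are required.
const args = {
  image_url: "https://example.com/still-frame.png",
  prompt: "Slow dolly-in while autumn leaves drift across the frame",
  duration: "5",                // "5" or "10" seconds
  aspect_ratio: "16:9",         // "16:9", "9:16", or "1:1"
  negative_prompt: "blurry, warped faces, flickering",
  cfg_scale: 0.5,               // 0 to 1; defaults to 0.5
};
```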
Implementation Reference
- src/index.ts:566-625 (handler): Main handler that executes the tool logic for 'ltx_video' (image-to-video category). It configures the FAL client, calls fal.subscribe on the model endpoint, then downloads and processes the video output. A standalone sketch of the same FAL call appears after this list.

  ```typescript
  private async handleImageToVideo(args: any, model: any) {
    const { image_url, prompt, duration = '5', aspect_ratio = '16:9', negative_prompt, cfg_scale } = args;

    try {
      // Configure FAL client lazily with query config override
      configureFalClient(this.currentQueryConfig);

      const inputParams: any = { image_url, prompt };

      // Add optional parameters
      if (duration) inputParams.duration = duration;
      if (aspect_ratio) inputParams.aspect_ratio = aspect_ratio;
      if (negative_prompt) inputParams.negative_prompt = negative_prompt;
      if (cfg_scale !== undefined) inputParams.cfg_scale = cfg_scale;

      const result = await fal.subscribe(model.endpoint, { input: inputParams });
      const videoData = result.data as FalVideoResult;
      const videoProcessed = await downloadAndProcessVideo(videoData.video.url, model.id);

      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              model: model.name,
              id: model.id,
              endpoint: model.endpoint,
              input_image: image_url,
              prompt,
              video: {
                url: videoData.video.url,
                localPath: videoProcessed.localPath,
                ...(videoProcessed.dataUrl && { dataUrl: videoProcessed.dataUrl }),
                width: videoData.video.width,
                height: videoData.video.height,
              },
              metadata: inputParams,
              download_path: DOWNLOAD_PATH,
              data_url_settings: {
                enabled: ENABLE_DATA_URLS,
                max_size_mb: Math.round(MAX_DATA_URL_SIZE / 1024 / 1024),
              },
              autoopen_settings: {
                enabled: AUTOOPEN,
                note: AUTOOPEN ? "Files automatically opened with default application" : "Auto-open disabled"
              },
            }, null, 2),
          },
        ],
      };
    } catch (error) {
      throw new Error(`${model.name} generation failed: ${error}`);
    }
  }
  ```
- src/index.ts:380-389 (schema): Input schema definition used for the 'ltx_video' tool (shared across the image-to-video category).

  ```typescript
  } else if (category === 'imageToVideo') {
    baseSchema.inputSchema.properties = {
      image_url: { type: 'string', description: 'URL of the input image' },
      prompt: { type: 'string', description: 'Motion description prompt' },
      duration: { type: 'string', enum: ['5', '10'], default: '5', description: 'Video duration in seconds' },
      aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1'], default: '16:9' },
      negative_prompt: { type: 'string', description: 'What to avoid in the video' },
      cfg_scale: { type: 'number', default: 0.5, minimum: 0, maximum: 1, description: 'How closely to follow the prompt' }
    };
    baseSchema.inputSchema.required = ['image_url', 'prompt'];
  ```
- src/index.ts:120-120 (registration): Registry entry that defines the 'ltx_video' model ID, endpoint, name, and description in MODEL_REGISTRY.imageToVideo.

  ```typescript
  { id: 'ltx_video', endpoint: 'fal-ai/ltx-video-13b-distilled/image-to-video', name: 'LTX Video', description: 'Fast and high-quality image-to-video conversion' },
  ```
- src/index.ts:406-408 (registration): Dynamic registration of the 'ltx_video' tool in the list_tools MCP handler.

  ```typescript
  for (const model of MODEL_REGISTRY.imageToVideo) {
    tools.push(this.generateToolSchema(model, 'imageToVideo'));
  }
  ```
- src/index.ts:481-482 (handler): Dispatch logic in the CallToolRequestSchema handler that routes 'ltx_video' calls to handleImageToVideo (a possible surrounding structure is sketched below).

  ```typescript
  return await this.handleImageToVideo(args, model);
  }
  ```
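
The source excerpt above shows only the return statement of that dispatch. The following is a hypothetical sketch of how the surrounding CallToolRequestSchema handler could match the 'ltx_video' tool name back to its MODEL_REGISTRY entry and route it; the lookup logic and error handling here are assumptions for illustration, not the actual implementation.

```typescript
// Hypothetical reconstruction of the dispatch around src/index.ts:481-482 (assumed
// structure; only the inner return statement is shown in the source). Runs inside
// the server class, so `this` is the MCP server instance that owns handleImageToVideo.
// import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  // Match the tool name ('ltx_video') back to its registry entry.
  const model = MODEL_REGISTRY.imageToVideo.find((m) => m.id === name);
  if (model) {
    // Every image-to-video tool, including ltx_video, is routed here.
    return await this.handleImageToVideo(args, model);
  }

  throw new Error(`Unknown tool: ${name}`);
});
```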
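
For comparison with the handler above, here is a minimal standalone sketch (not part of src/index.ts) that calls the same fal-ai/ltx-video-13b-distilled/image-to-video endpoint directly with the @fal-ai/client library, using the same input shape the handler builds. The image URL, prompt, and API-key handling are placeholders.

```typescript
import { fal } from "@fal-ai/client";

// Placeholder credential handling; the MCP server configures this via configureFalClient.
fal.config({ credentials: process.env.FAL_KEY ?? "" });

async function main() {
  // Same endpoint and input shape that handleImageToVideo passes to fal.subscribe.
  const result = await fal.subscribe("fal-ai/ltx-video-13b-distilled/image-to-video", {
    input: {
      image_url: "https://example.com/still-frame.png",  // placeholder image
      prompt: "Slow dolly-in while autumn leaves drift across the frame",
      duration: "5",
      aspect_ratio: "16:9",
      cfg_scale: 0.5,
    },
  });

  // The MCP handler reads the generated video URL from result.data.video.url.
  console.log((result.data as any).video?.url);
}

main().catch(console.error);
```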