# veo3
Generate AI-driven videos from text prompts with customizable duration and aspect ratio. Integrated into the FAL Image/Video MCP Server for high-performance rendering and automatic local downloads.
## Instructions
Veo 3 - Google DeepMind's latest with speech and audio
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| aspect_ratio | No | Output aspect ratio: 16:9, 9:16, 1:1, 4:3, or 3:4 | 16:9 |
| duration | No | Video length in seconds (1-30) | 5 |
| prompt | Yes | Text prompt for video generation | |
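
For illustration, a call provides `prompt` and may override the defaults. The sketch below shows hypothetical arguments and the corresponding MCP `tools/call` request written as a TypeScript object; the prompt text and request id are placeholders, not values from this project.

```typescript
// Hypothetical example arguments for the 'veo3' tool.
const veo3Args = {
  prompt: 'A hummingbird hovering over a flower, with ambient birdsong', // required
  duration: 8,          // optional, seconds (1-30); defaults to 5
  aspect_ratio: '16:9', // optional: 16:9, 9:16, 1:1, 4:3, or 3:4; defaults to 16:9
};

// The equivalent MCP tools/call request a client would send to the server.
const toolCallRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: { name: 'veo3', arguments: veo3Args },
};
```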
## Implementation Reference
- `src/index.ts:627-675` (handler): The handler function that implements the core execution logic for the 'veo3' tool. It extracts parameters, calls the FAL API via `fal.subscribe('fal-ai/veo3')`, processes the resulting video (download, data URL, auto-open), and formats the response.

```typescript
private async handleTextToVideo(args: any, model: any) {
  const { prompt, duration = 5, aspect_ratio = '16:9' } = args;

  try {
    // Configure FAL client lazily with query config override
    configureFalClient(this.currentQueryConfig);

    const inputParams: any = { prompt };
    if (duration) inputParams.duration = duration;
    if (aspect_ratio) inputParams.aspect_ratio = aspect_ratio;

    const result = await fal.subscribe(model.endpoint, { input: inputParams });
    const videoData = result.data as FalVideoResult;
    const videoProcessed = await downloadAndProcessVideo(videoData.video.url, model.id);

    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify({
            model: model.name,
            id: model.id,
            endpoint: model.endpoint,
            prompt,
            video: {
              url: videoData.video.url,
              localPath: videoProcessed.localPath,
              ...(videoProcessed.dataUrl && { dataUrl: videoProcessed.dataUrl }),
              width: videoData.video.width,
              height: videoData.video.height,
            },
            metadata: inputParams,
            download_path: DOWNLOAD_PATH,
            data_url_settings: {
              enabled: ENABLE_DATA_URLS,
              max_size_mb: Math.round(MAX_DATA_URL_SIZE / 1024 / 1024),
            },
            autoopen_settings: {
              enabled: AUTOOPEN,
              note: AUTOOPEN ? "Files automatically opened with default application" : "Auto-open disabled"
            },
          }, null, 2),
        },
      ],
    };
  } catch (error) {
    throw new Error(`${model.name} generation failed: ${error}`);
  }
}
```
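
The handler returns a single text content item whose body is the JSON document built by the `JSON.stringify` call above. The following sketch shows that payload's shape; all URLs, paths, dimensions, and settings are placeholders and will vary with the run and server configuration.

```typescript
// Placeholder values; actual URLs, paths, and settings depend on the generation and server config.
const exampleVeo3Response = {
  model: 'Veo 3',
  id: 'veo3',
  endpoint: 'fal-ai/veo3',
  prompt: 'A hummingbird hovering over a flower',
  video: {
    url: 'https://fal.media/files/example/output.mp4',  // remote FAL URL
    localPath: '/downloads/video_veo3_example.mp4',     // present only if the local download succeeded
    // dataUrl: 'data:video/mp4;base64,...',            // only when data URLs are enabled and under the size limit
    width: 1280,
    height: 720,
  },
  metadata: { prompt: 'A hummingbird hovering over a flower', duration: 8, aspect_ratio: '16:9' },
  download_path: '/downloads',
  data_url_settings: { enabled: false, max_size_mb: 10 },
  autoopen_settings: { enabled: false, note: 'Auto-open disabled' },
};
```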
- `src/index.ts:373-379` (schema): Generates the input schema (parameters and validation) specifically for text-to-video tools like 'veo3'.

```typescript
} else if (category === 'textToVideo') {
  baseSchema.inputSchema.properties = {
    prompt: { type: 'string', description: 'Text prompt for video generation' },
    duration: { type: 'number', default: 5, minimum: 1, maximum: 30 },
    aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4'], default: '16:9' },
  };
  baseSchema.inputSchema.required = ['prompt'];
```
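
For 'veo3' this branch yields an input schema equivalent to the sketch below, assuming `baseSchema.inputSchema` starts as a plain `type: 'object'` JSON Schema (the base shape is not shown in this excerpt).

```typescript
const veo3InputSchema = {
  type: 'object', // assumed base shape; only properties and required are set in the snippet above
  properties: {
    prompt: { type: 'string', description: 'Text prompt for video generation' },
    duration: { type: 'number', default: 5, minimum: 1, maximum: 30 },
    aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4'], default: '16:9' },
  },
  required: ['prompt'],
};
```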
- `src/index.ts:111` (registration): Registers the 'veo3' tool in the MODEL_REGISTRY, defining its ID, FAL endpoint, name, and description. This is used for tool listing and lookup.

```typescript
{ id: 'veo3', endpoint: 'fal-ai/veo3', name: 'Veo 3', description: 'Google DeepMind\'s latest with speech and audio' },
```
- `src/index.ts:140-143` (helper): Helper function to look up the model configuration (endpoint, etc.) for the tool name 'veo3'.

```typescript
function getModelById(id: string) {
  const allModels = getAllModels();
  return allModels.find(model => model.id === id);
}
```
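
The lookup simply returns the registry entry shown above, which the handler consumes via `model.endpoint` and `model.id`. A short usage sketch:

```typescript
// Resolve the registry entry for the requested tool name.
const veo3Model = getModelById('veo3');
// => { id: 'veo3', endpoint: 'fal-ai/veo3', name: 'Veo 3',
//      description: "Google DeepMind's latest with speech and audio" }
```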
- `src/index.ts:290-313` (helper): Supporting utility called by the handler to process the generated video: downloads it to a local path, converts it to a data URL if enabled, and auto-opens the file.

```typescript
async function downloadAndProcessVideo(videoUrl: string, modelName: string): Promise<any> {
  const filename = generateFilename('video', modelName);
  const localPath = await downloadFile(videoUrl, filename);
  const dataUrl = await urlToDataUrl(videoUrl);

  // Auto-open the downloaded video if available
  if (localPath) {
    await autoOpenFile(localPath);
  }

  const result: any = {};

  // Only include localPath if download was successful
  if (localPath) {
    result.localPath = localPath;
  }

  // Only include dataUrl if it was successfully generated
  if (dataUrl) {
    result.dataUrl = dataUrl;
  }

  return result;
}
```
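
The helpers it calls (`generateFilename`, `downloadFile`, `urlToDataUrl`, `autoOpenFile`) are defined elsewhere in src/index.ts and are not shown in this excerpt. For orientation only, here is a minimal sketch of what a download step like `downloadFile` could look like, assuming Node 18+ with global `fetch`; the project's actual implementation may differ.

```typescript
import { mkdir, writeFile } from 'fs/promises';
import { join } from 'path';

// Stand-in for the server's DOWNLOAD_PATH constant; the real value is configured in src/index.ts.
const DOWNLOAD_DIR = './downloads';

// Illustrative only: fetches the remote video, writes it to the download directory,
// and returns null on failure so the caller can omit localPath from its result.
async function downloadFileSketch(url: string, filename: string): Promise<string | null> {
  try {
    const response = await fetch(url);
    if (!response.ok) return null;
    await mkdir(DOWNLOAD_DIR, { recursive: true });
    const localPath = join(DOWNLOAD_DIR, filename);
    await writeFile(localPath, Buffer.from(await response.arrayBuffer()));
    return localPath;
  } catch {
    return null;
  }
}
```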