pixverse_text
Generate videos from text prompts with customizable duration and aspect ratios using advanced AI models. Automatically download outputs to your local machine for easy use.
Instructions
Pixverse V4.5 - Advanced text-to-video generation
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| aspect_ratio | No | Output aspect ratio: 16:9, 9:16, 1:1, 4:3, or 3:4 | 16:9 |
| duration | No | Video length in seconds (1-30) | 5 |
| prompt | Yes | Text prompt for video generation | |
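
As a quick illustration of these parameters, a hypothetical argument object for a pixverse_text call might look like the following; the prompt and values are placeholders, and only prompt is required:

```typescript
// Illustrative arguments for a pixverse_text call; only `prompt` is required.
// Omitted optional fields fall back to the defaults shown in the table above.
const exampleArgs = {
  prompt: 'A paper boat drifting down a rain-soaked street at dusk',
  duration: 5,           // seconds, between 1 and 30
  aspect_ratio: '16:9',  // one of '16:9', '9:16', '1:1', '4:3', '3:4'
};
```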
Implementation Reference
- src/index.ts:113 (registration): Registers the pixverse_text tool in the MODEL_REGISTRY.textToVideo array with its unique ID, FAL endpoint, name, and description.

  ```typescript
  { id: 'pixverse_text', endpoint: 'fal-ai/pixverse/v4.5/text-to-video', name: 'Pixverse V4.5', description: 'Advanced text-to-video generation' },
  ```
- src/index.ts:373-379 (schema): Dynamically builds the input schema shared by all text-to-video tools, including pixverse_text: a required prompt plus optional duration and aspect_ratio parameters (a sketch of the resulting tool schema follows this list).

  ```typescript
  } else if (category === 'textToVideo') {
    baseSchema.inputSchema.properties = {
      prompt: { type: 'string', description: 'Text prompt for video generation' },
      duration: { type: 'number', default: 5, minimum: 1, maximum: 30 },
      aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4'], default: '16:9' },
    };
    baseSchema.inputSchema.required = ['prompt'];
  ```
- src/index.ts:403-404 (registration): During MCP tools/list request handling, registers the pixverse_text tool by generating its schema and adding it to the available tools list.

  ```typescript
  for (const model of MODEL_REGISTRY.textToVideo) {
    tools.push(this.generateToolSchema(model, 'textToVideo'));
  ```
- src/index.ts:467-482 (handler): In the MCP tools/call request handler, looks up the pixverse_text model by ID and dispatches to the handleTextToVideo execution function.

  ```typescript
  const model = getModelById(name);
  if (!model) {
    throw new McpError(
      ErrorCode.MethodNotFound,
      `Unknown model: ${name}`
    );
  }

  // Determine category and handle accordingly
  if (MODEL_REGISTRY.imageGeneration.find(m => m.id === name)) {
    return await this.handleImageGeneration(args, model);
  } else if (MODEL_REGISTRY.textToVideo.find(m => m.id === name)) {
    return await this.handleTextToVideo(args, model);
  } else if (MODEL_REGISTRY.imageToVideo.find(m => m.id === name)) {
    return await this.handleImageToVideo(args, model);
  }
  ```
- src/index.ts:627-675 (handler): Primary execution handler for pixverse_text: extracts the arguments, calls fal.subscribe on the fal-ai/pixverse/v4.5/text-to-video endpoint, processes the video output (download, optional data URL, auto-open), and returns a structured JSON response (an illustrative payload follows this list).

  ```typescript
  private async handleTextToVideo(args: any, model: any) {
    const { prompt, duration = 5, aspect_ratio = '16:9' } = args;

    try {
      // Configure FAL client lazily with query config override
      configureFalClient(this.currentQueryConfig);

      const inputParams: any = { prompt };
      if (duration) inputParams.duration = duration;
      if (aspect_ratio) inputParams.aspect_ratio = aspect_ratio;

      const result = await fal.subscribe(model.endpoint, { input: inputParams });
      const videoData = result.data as FalVideoResult;
      const videoProcessed = await downloadAndProcessVideo(videoData.video.url, model.id);

      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              model: model.name,
              id: model.id,
              endpoint: model.endpoint,
              prompt,
              video: {
                url: videoData.video.url,
                localPath: videoProcessed.localPath,
                ...(videoProcessed.dataUrl && { dataUrl: videoProcessed.dataUrl }),
                width: videoData.video.width,
                height: videoData.video.height,
              },
              metadata: inputParams,
              download_path: DOWNLOAD_PATH,
              data_url_settings: {
                enabled: ENABLE_DATA_URLS,
                max_size_mb: Math.round(MAX_DATA_URL_SIZE / 1024 / 1024),
              },
              autoopen_settings: {
                enabled: AUTOOPEN,
                note: AUTOOPEN
                  ? "Files automatically opened with default application"
                  : "Auto-open disabled"
              },
            }, null, 2),
          },
        ],
      };
    } catch (error) {
      throw new Error(`${model.name} generation failed: ${error}`);
    }
  }
  ```
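
For reference, running the pixverse_text registry entry through the schema branch above should produce a tool schema roughly like the sketch below. The top-level name and description wrapper fields and the object type are assumptions inferred from the registry entry, not copied from the source:

```typescript
// Approximate generated tool schema for pixverse_text (a sketch, not actual output).
// The `name`, `description`, and `type` wrapper fields are assumed, not from the source.
const pixverseTextToolSchema = {
  name: 'pixverse_text',
  description: 'Pixverse V4.5 - Advanced text-to-video generation',
  inputSchema: {
    type: 'object',
    properties: {
      prompt: { type: 'string', description: 'Text prompt for video generation' },
      duration: { type: 'number', default: 5, minimum: 1, maximum: 30 },
      aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4'], default: '16:9' },
    },
    required: ['prompt'],
  },
};
```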
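
Likewise, the JSON string embedded in the handler's text content item has roughly the shape sketched here. Every concrete value is a placeholder, and the settings blocks depend on the DOWNLOAD_PATH, ENABLE_DATA_URLS, MAX_DATA_URL_SIZE, and AUTOOPEN configuration:

```typescript
// Sketch of the payload serialized by handleTextToVideo; all values are placeholders.
const exampleResponsePayload = {
  model: 'Pixverse V4.5',
  id: 'pixverse_text',
  endpoint: 'fal-ai/pixverse/v4.5/text-to-video',
  prompt: 'A paper boat drifting down a rain-soaked street at dusk',
  video: {
    url: 'https://example.com/output.mp4',           // FAL-hosted result (placeholder)
    localPath: '/downloads/pixverse_text_0001.mp4',  // hypothetical local copy
    width: 1280,                                      // placeholder dimensions
    height: 720,
  },
  metadata: { prompt: '...', duration: 5, aspect_ratio: '16:9' },
  download_path: '/downloads',                                        // DOWNLOAD_PATH
  data_url_settings: { enabled: false, max_size_mb: 10 },             // ENABLE_DATA_URLS / MAX_DATA_URL_SIZE
  autoopen_settings: { enabled: false, note: 'Auto-open disabled' },  // AUTOOPEN
};
```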