generate_video
Generate AI videos from text or images using models like Veo 3.1 and Kling. Choose style presets to create cinematic, realistic, or artistic videos.
Instructions
Generate a video using AI.
Supports text-to-video and image-to-video. Models: Veo 3.1 and Kling. Cost: ~350 credits per video.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Description of the video to generate | |
| image_url | No | Optional source image URL (for image-to-video mode) | |
| model | No | Model to use — "auto" (default, picks best), "veo" (Veo 3.1), or "kling" (Kling). Veo 3.1 is best for cinematic quality. | auto |
| style | No | Style preset (cinematic, realistic, artistic) | |
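For illustration, a hypothetical image-to-video request might be assembled like this. The argument names come from the input schema above; the prompt and URL values are made up:

```python
# Hypothetical arguments for an image-to-video generate_video call.
# Parameter names match the input schema; the values are examples only.
request_args = {
    "prompt": "A slow cinematic pan across a foggy mountain lake at dawn",
    "image_url": "https://example.com/source-frame.jpg",  # presence selects image-to-video
    "model": "auto",       # one of "auto", "veo", "kling"
    "style": "cinematic",  # optional: cinematic, realistic, artistic
}
```

Omitting `image_url` (or setting it to `None`) would make the same call a text-to-video request.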
Output Schema
| Name | Description |
|---|---|
| status | "success" when the job completes |
| video_url | URL of the generated video |
| job_id | Identifier of the generation job |
| credits_used | Credits consumed by the job |
| balance_remaining | Credit balance after the job |
Implementation Reference
- `src/yaparai/tools/generate.py:61-107` (handler)

  The main handler function for the generate_video tool. It accepts a prompt (text-to-video), an optional image_url (image-to-video), a model selector (auto/veo/kling), and an optional style. Depending on the model choice and the presence of image_url, it sets the mode to "gemini_video", "img2video", or "text2video", then calls the YaparAIClient to submit a generation job and polls for the result.

  ```python
  async def generate_video(
      prompt: str,
      image_url: str | None = None,
      model: Literal["auto", "veo", "kling"] = "auto",
      style: Literal["cinematic", "realistic", "artistic"] | None = None,
  ) -> dict:
      """
      Generate a video using AI.

      Text-to-video or image-to-video. Models: Veo 3.1, Kling.
      Cost: ~350 credits per video.

      Args:
          prompt: Description of the video to generate
          image_url: Optional source image URL (for image-to-video mode)
          model: Model to use — "auto" (default, picks best), "veo" (Veo 3.1),
              or "kling" (Kling). Veo 3.1 is best for cinematic quality.
          style: Style preset (cinematic, realistic, artistic)

      Returns:
          Dict with video_url, job_id, credits_used, and balance_remaining.
      """
      client = YaparAIClient()
      if model == "veo":
          mode = "gemini_video"
      elif image_url:
          mode = "img2video"
      else:
          mode = "text2video"
      job = await client.generate({
          "type": "video",
          "prompt": prompt,
          "mode": mode,
          "image_url": image_url,
          "style": style,
      })
      result = await client.wait_for_result(job["job_id"], timeout=180)
      return {
          "status": "success",
          "video_url": result.get("result_url"),
          "job_id": result.get("job_id"),
          "credits_used": job.get("credits_used"),
          "balance_remaining": job.get("balance_remaining"),
      }
  ```

- `src/yaparai/tools/generate.py:61-66` (schema)

  The function signature and docstring for generate_video, defining the input parameters (prompt, image_url, and the Literal-typed model and style) and the dict return type carrying video_url, job_id, credits_used, and balance_remaining.

  ```python
  async def generate_video(
      prompt: str,
      image_url: str | None = None,
      model: Literal["auto", "veo", "kling"] = "auto",
      style: Literal["cinematic", "realistic", "artistic"] | None = None,
  ) -> dict:
  ```

- `src/yaparai/server.py:26` (registration)

  Import of generate_video from yaparai.tools.generate into the server module.

  ```python
  generate_video,
  ```

- `src/yaparai/server.py:124` (registration)

  Registration of generate_video as an MCP tool on the FastMCP server.

  ```python
  mcp.tool(generate_video)
  ```

- `src/yaparai/client.py:126-128` (helper)

  The YaparAIClient.generate method used by generate_video to POST the job payload to /v1/public/generate.

  ```python
  async def generate(self, request: dict) -> dict:
      """Start a generation job."""
      return await self._request("POST", "/v1/public/generate", json=request)
  ```