hailuo_generate_video
Generate AI video from text prompts. Describe the scene, motion, and style to create high-quality videos without needing reference images.
Instructions
Generate AI video from a text prompt using Hailuo (MiniMax).
This is the simplest way to create video - just describe what you want and Hailuo
will generate a high-quality AI video.
Use this when:
- You want to create a video from a text description
- You don't have reference images
- You want quick text-to-video generation
For using a reference image, use hailuo_generate_video_from_image instead.
Returns:
Task ID and generated video information including URLs and status.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Description of the video to generate. Be descriptive about the scene, motion, style, and mood. Examples: 'A cat walking through a garden with butterflies', 'Ocean waves crashing on a beach at sunset', 'A futuristic city with flying cars' | |
| model | No | Video generation model. Options: 'minimax-t2v' (text-to-video, default), 'minimax-i2v' (image-to-video, requires first_image_url), 'minimax-i2v-director' (director-mode image-to-video, requires first_image_url). | minimax-t2v |
| callback_url | No | Webhook callback URL for asynchronous notifications. When provided, the API will call this URL when the video is generated. | |
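For illustration, a hypothetical arguments object an MCP client might send for this tool (the prompt and callback URL are example values; callback_url may be omitted):

```python
import json

# Example tool-call arguments matching the input schema above;
# the prompt and webhook URL are illustrative values only.
args = {
    "prompt": "Ocean waves crashing on a beach at sunset",
    "model": "minimax-t2v",  # default; may be omitted
    "callback_url": "https://example.com/hailuo/webhook",  # optional
}
payload_json = json.dumps(args, ensure_ascii=False, indent=2)
```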
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Formatted video generation result as a JSON string, including the task ID, video URLs, status, and polling guidance. | |
Implementation Reference
- tools/video_tools.py:13-59 (handler): The main handler function for the 'hailuo_generate_video' tool. Decorated with @mcp.tool(), it accepts a prompt (text description), an optional model (default 'minimax-t2v'), and an optional callback_url. It builds a payload dict with action='generate', prompt, and model, then calls client.generate_video() and formats the result.

```python
@mcp.tool()
async def hailuo_generate_video(
    prompt: Annotated[
        str,
        Field(
            description="Description of the video to generate. Be descriptive about the scene, motion, style, and mood. Examples: 'A cat walking through a garden with butterflies', 'Ocean waves crashing on a beach at sunset', 'A futuristic city with flying cars'"
        ),
    ],
    model: Annotated[
        HailuoModel,
        Field(
            description="Video generation model. Options: 'minimax-t2v' (text-to-video, default), 'minimax-i2v' (image-to-video, requires first_image_url), 'minimax-i2v-director' (director-mode image-to-video, requires first_image_url)."
        ),
    ] = DEFAULT_MODEL,
    callback_url: Annotated[
        str | None,
        Field(
            description="Webhook callback URL for asynchronous notifications. When provided, the API will call this URL when the video is generated."
        ),
    ] = None,
) -> str:
    """Generate AI video from a text prompt using Hailuo (MiniMax).

    This is the simplest way to create video - just describe what you want and Hailuo
    will generate a high-quality AI video.

    Use this when:
    - You want to create a video from a text description
    - You don't have reference images
    - You want quick text-to-video generation

    For using a reference image, use hailuo_generate_video_from_image instead.

    Returns:
        Task ID and generated video information including URLs and status.
    """
    payload: dict = {
        "action": "generate",
        "prompt": prompt,
        "model": model,
    }
    if callback_url:
        payload["callback_url"] = callback_url

    result = await client.generate_video(**payload)
    return format_video_result(result)
```

- core/types.py:1-13 (schema): Type definitions used by the tool: the HailuoModel literal type (minimax-t2v, minimax-i2v, minimax-i2v-director) and the DEFAULT_MODEL constant ('minimax-t2v'). These define the valid model parameter values.

```python
"""Type definitions for Hailuo MCP server."""

from typing import Literal

# Hailuo video models
HailuoModel = Literal[
    "minimax-t2v",
    "minimax-i2v",
    "minimax-i2v-director",
]

# Default model
DEFAULT_MODEL: HailuoModel = "minimax-t2v"
```

- core/client.py:166-179 (registration): The client.generate_video() method called by the handler. It sends a POST request to the '/hailuo/videos' endpoint with the given payload, automatically adding an async callback URL if none is provided, and returns the API response.

```python
async def generate_video(self, **kwargs: Any) -> dict[str, Any]:
    """Generate video using the videos endpoint."""
    logger.info(f"Generating video with model: {kwargs.get('model', 'minimax-t2v')}")
    return await self.request("/hailuo/videos", self._with_async_callback(kwargs))

async def query_task(self, **kwargs: Any) -> dict[str, Any]:
    """Query task status using the tasks endpoint."""
    task_id = kwargs.get("id") or kwargs.get("ids", [])
    logger.info(f"Querying task(s): {task_id}")
    return await self.request("/hailuo/tasks", kwargs)


# Global client instance
client = HailuoClient()
```

- core/server.py:47-59 (registration): The FastMCP server instance created in core/server.py. The @mcp.tool() decorator in video_tools.py registers 'hailuo_generate_video' as an MCP tool on this server.

```python
# Initialize FastMCP server
mcp = FastMCP(
    settings.server_name,
    icons=[Icon(src="", mimeType="image/png")],
    **mcp_kwargs,
)
logger.info(f"Initialized MCP server: {settings.server_name}")
```

- core/utils.py:58-71 (helper): The format_video_result() helper function called by the handler. It formats the API response as JSON and adds submission guidance (poll_tool='hailuo_get_task', batch_poll_tool='hailuo_get_tasks_batch') for async polling.

```python
def format_video_result(data: dict[str, Any]) -> str:
    """Format video generation result as JSON.

    Args:
        data: API response dictionary

    Returns:
        JSON string representation of the result
    """
    return json.dumps(
        _with_submission_guidance(data, "hailuo_get_task", "hailuo_get_tasks_batch"),
        ensure_ascii=False,
        indent=2,
    )
```
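The _with_submission_guidance helper itself is not shown above. A minimal runnable sketch, assuming it simply returns a copy of the response dict annotated with the two polling tool names (the key names 'poll_tool' and 'batch_poll_tool' are an assumption, not confirmed by the source):

```python
import json
from typing import Any


def _with_submission_guidance(
    data: dict[str, Any], poll_tool: str, batch_poll_tool: str
) -> dict[str, Any]:
    # Hypothetical reconstruction: attach polling hints without mutating
    # the input; the key names here are assumed.
    return {**data, "poll_tool": poll_tool, "batch_poll_tool": batch_poll_tool}


def format_video_result(data: dict[str, Any]) -> str:
    # Same shape as the real helper in core/utils.py.
    return json.dumps(
        _with_submission_guidance(data, "hailuo_get_task", "hailuo_get_tasks_batch"),
        ensure_ascii=False,
        indent=2,
    )


result = format_video_result({"task_id": "12345", "status": "submitted"})
```

Under these assumptions, the caller can parse the returned JSON and follow the poll_tool hint to check generation status asynchronously.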