
create_generation

Generate videos from text prompts, images, or existing videos using AI models, with customizable resolution, duration, and aspect ratio settings.

Instructions

Creates a new video generation from text, image, or existing video

Input Schema

Name          Required  Description  Default
prompt        Yes
model         No                     ray-2
resolution    No
duration      No
aspect_ratio  No
loop          No
keyframes     No
callback_url  No
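
For orientation, a call to this tool passes an arguments object matching the schema above. The example below is illustrative only: the prompt and keyframe URL are made up, and the accepted values for model, resolution, duration, and aspect_ratio are whatever the server's VideoModel, Resolution, Duration, and AspectRatio enums define (only "ray-2" is confirmed here, as the default model).

    # Illustrative arguments for a create_generation call; values are examples,
    # not an authoritative list of accepted options.
    arguments = {
        "prompt": "a lighthouse on a cliff at dawn, waves crashing below",
        "model": "ray-2",        # default model per the schema above
        "aspect_ratio": "16:9",  # assumed to be a valid AspectRatio value
        "loop": False,
        # keyframes must contain frame0 and/or frame1; the exact frame payload
        # follows Luma's API and is assumed here to be an image reference.
        "keyframes": {
            "frame0": {"type": "image", "url": "https://example.com/start.jpg"},
        },
    }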

Implementation Reference

  • The handler function that implements the core logic of the 'create_generation' tool: it validates inputs, makes the API request to Luma through the _make_luma_request helper (sketched after this list), and returns the generation details.
    async def create_generation(params: dict) -> str:
        """Create a new generation."""
        # prompt is the only required parameter
        if "prompt" not in params:
            raise ValueError("prompt parameter is required")

        # Normalize/validate the model: accept enum members or their string values
        if "model" in params:
            model = params["model"]
            if isinstance(model, str):
                if model not in [m.value for m in VideoModel]:
                    raise ValueError(f"Invalid model: {model}")
            elif isinstance(model, VideoModel):
                params["model"] = model.value

        # Normalize/validate the aspect ratio the same way
        if "aspect_ratio" in params:
            aspect_ratio = params["aspect_ratio"]
            if isinstance(aspect_ratio, str):
                if aspect_ratio not in [a.value for a in AspectRatio]:
                    raise ValueError(f"Invalid aspect ratio: {aspect_ratio}")
            elif isinstance(aspect_ratio, AspectRatio):
                params["aspect_ratio"] = aspect_ratio.value

        # keyframes must be an object with frame0 (start image) and/or frame1 (end image)
        if "keyframes" in params:
            keyframes = params["keyframes"]
            if not isinstance(keyframes, dict):
                raise ValueError("keyframes must be an object")
            if not any(key in keyframes for key in ["frame0", "frame1"]):
                raise ValueError("keyframes must contain frame0 or frame1")

        # Validate with the Pydantic model, drop unset fields, and call the Luma API
        input_data = CreateGenerationInput(**params)
        request_data = input_data.model_dump(exclude_none=True)
        response = await _make_luma_request("POST", "/generations", request_data)

        # Summarize the created generation for the tool response
        if input_data.keyframes:
            output = [
                f"Created advanced generation with ID: {response['id']}",
                f"State: {response['state']}",
            ]
            if "frame0" in input_data.keyframes:
                output.append("starting from an image")
            if "frame1" in input_data.keyframes:
                output.append("ending with an image")
        else:
            output = [
                f"Created text-to-video generation with ID: {response['id']}",
                f"State: {response['state']}",
            ]
        return "\n".join(output)
  • Pydantic BaseModel defining the input schema and validation for the create_generation tool parameters (a short serialization example follows this list).
    class CreateGenerationInput(BaseModel):
        """Input parameters for video generation."""

        prompt: str
        model: VideoModel = VideoModel.RAY_2
        resolution: Optional[Resolution] = None
        duration: Optional[Duration] = None
        aspect_ratio: Optional[AspectRatio] = None
        loop: Optional[bool] = None
        keyframes: Optional[dict] = None
        callback_url: Optional[str] = None
  • Tool registration in the list_tools() function, specifying the name, description, and input schema (see the server-wiring sketch after this list).
    Tool(
        name=LumaTools.CREATE_GENERATION,
        description="Creates a new video generation from text, image, or existing video",
        inputSchema=CreateGenerationInput.model_json_schema(),
    ),
  • Dispatch case in call_tool() that invokes the create_generation handler; its surrounding handler function also appears in the server-wiring sketch after this list.
    case LumaTools.CREATE_GENERATION:
        result = await create_generation(arguments)
        return [TextContent(type="text", text=result)]
  • Enum defining the tool names, including the CREATE_GENERATION constant.
    class LumaTools(str, Enum):
        PING = "ping"
        CREATE_GENERATION = "create_generation"
        GET_GENERATION = "get_generation"
        LIST_GENERATIONS = "list_generations"
        DELETE_GENERATION = "delete_generation"
        UPSCALE_GENERATION = "upscale_generation"
        ADD_AUDIO = "add_audio"
        GENERATE_IMAGE = "generate_image"
        GET_CREDITS = "get_credits"
        GET_CAMERA_MOTIONS = "get_camera_motions"
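
The handler above delegates the HTTP call to a _make_luma_request helper that is referenced but not shown on this page. A minimal sketch of what such a helper could look like, assuming an httpx-based async client, a LUMA_API_KEY environment variable, and Luma's Dream Machine API base URL; the server's real implementation may differ:

    import os
    from typing import Optional

    import httpx

    # Assumed base URL and auth scheme; the actual helper in the server may differ.
    LUMA_API_BASE = "https://api.lumalabs.ai/dream-machine/v1"

    async def _make_luma_request(method: str, path: str, data: Optional[dict] = None) -> dict:
        """Send an authenticated request to the Luma API and return the parsed JSON body."""
        headers = {"Authorization": f"Bearer {os.environ['LUMA_API_KEY']}"}
        async with httpx.AsyncClient(base_url=LUMA_API_BASE, headers=headers, timeout=60.0) as client:
            response = await client.request(method, path, json=data)
            response.raise_for_status()
            return response.json()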
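
As a quick illustration of how the handler uses CreateGenerationInput: constructing the model validates the parameters, and model_dump(exclude_none=True) keeps only the fields that were actually set (plus the defaulted model), which becomes the request body. The prompt below is just an example:

    # Only "prompt" and the defaulted "model" survive exclude_none=True here;
    # resolution, duration, aspect_ratio, loop, keyframes, and callback_url are dropped.
    request_data = CreateGenerationInput(
        prompt="a paper boat drifting down a rain-filled gutter",
    ).model_dump(exclude_none=True)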
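
For context, the Tool registration and the call_tool dispatch shown above typically live inside handlers registered on an MCP Server instance. The sketch below assumes the Python MCP SDK's low-level Server API and an illustrative server name; it is a condensed outline, not the server's actual code:

    from mcp.server import Server
    from mcp.types import TextContent, Tool

    server = Server("luma-ai-mcp-server")  # server name assumed for illustration

    @server.list_tools()
    async def list_tools() -> list[Tool]:
        return [
            Tool(
                name=LumaTools.CREATE_GENERATION,
                description="Creates a new video generation from text, image, or existing video",
                inputSchema=CreateGenerationInput.model_json_schema(),
            ),
            # ...the other LumaTools entries are registered here as well
        ]

    @server.call_tool()
    async def call_tool(name: str, arguments: dict) -> list[TextContent]:
        match name:
            case LumaTools.CREATE_GENERATION:
                result = await create_generation(arguments)
                return [TextContent(type="text", text=result)]
            case _:
                raise ValueError(f"Unknown tool: {name}")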

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bobtista/luma-ai-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.