create_generation

Generate videos from text prompts, images, or existing videos using AI models, with customizable resolution, duration, and aspect ratio settings.

Instructions

Creates a new video generation from text, image, or existing video

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| prompt | Yes | | |
| model | No | | ray-2 |
| resolution | No | | |
| duration | No | | |
| aspect_ratio | No | | |
| loop | No | | |
| keyframes | No | | |
| callback_url | No | | |

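The schema above can be illustrated with a hypothetical set of arguments an agent might pass. Only `prompt` is required; the value strings for `resolution` and `duration` shown here are assumptions, since the schema does not document the accepted enum values:

```python
# Hypothetical arguments for the create_generation tool. Only "prompt"
# is required; the resolution/duration values below are guesses and
# should be checked against the Resolution and Duration enums.
example_args = {
    "prompt": "A golden retriever surfing at sunset",
    "model": "ray-2",        # the documented default
    "aspect_ratio": "16:9",
    "loop": False,
}

assert "prompt" in example_args
```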
Implementation Reference

  • The handler function that implements the core logic for the 'create_generation' tool: it validates inputs, makes the API request to Luma, and returns generation details.
    async def create_generation(params: dict) -> str:
        """Create a new generation."""
        if "prompt" not in params:
            raise ValueError("prompt parameter is required")
    
        if "model" in params:
            model = params["model"]
            if isinstance(model, str):
                if model not in [m.value for m in VideoModel]:
                    raise ValueError(f"Invalid model: {model}")
            elif isinstance(model, VideoModel):
                params["model"] = model.value
    
        if "aspect_ratio" in params:
            aspect_ratio = params["aspect_ratio"]
            if isinstance(aspect_ratio, str):
                if aspect_ratio not in [a.value for a in AspectRatio]:
                    raise ValueError(f"Invalid aspect ratio: {aspect_ratio}")
            elif isinstance(aspect_ratio, AspectRatio):
                params["aspect_ratio"] = aspect_ratio.value
    
        if "keyframes" in params:
            keyframes = params["keyframes"]
            if not isinstance(keyframes, dict):
                raise ValueError("keyframes must be an object")
            if not any(key in keyframes for key in ["frame0", "frame1"]):
                raise ValueError("keyframes must contain frame0 or frame1")
    
        input_data = CreateGenerationInput(**params)
        request_data = input_data.model_dump(exclude_none=True)
        response = await _make_luma_request("POST", "/generations", request_data)
    
        if input_data.keyframes:
            output = [
                f"Created advanced generation with ID: {response['id']}",
                f"State: {response['state']}",
            ]
            if "frame0" in input_data.keyframes:
                output.append("starting from an image")
            if "frame1" in input_data.keyframes:
                output.append("ending with an image")
        else:
            output = [
                f"Created text-to-video generation with ID: {response['id']}",
                f"State: {response['state']}",
            ]
    
        return "\n".join(output)
  • Pydantic BaseModel defining the input schema and validation for the create_generation tool parameters.
    class CreateGenerationInput(BaseModel):
        """
        Input parameters for video generation.
        """
    
        prompt: str
        model: VideoModel = VideoModel.RAY_2
        resolution: Optional[Resolution] = None
        duration: Optional[Duration] = None
        aspect_ratio: Optional[AspectRatio] = None
        loop: Optional[bool] = None
        keyframes: Optional[dict] = None
        callback_url: Optional[str] = None
  • Tool registration in the list_tools() function, specifying name, description, and input schema.
    Tool(
        name=LumaTools.CREATE_GENERATION,
        description="Creates a new video generation from text, image, or existing video",
        inputSchema=CreateGenerationInput.model_json_schema(),
    ),
  • Dispatch/case statement in call_tool() that invokes the create_generation handler.
    case LumaTools.CREATE_GENERATION:
        result = await create_generation(arguments)
        return [TextContent(type="text", text=result)]
  • Enum defining tool names, including CREATE_GENERATION constant.
    class LumaTools(str, Enum):
        PING = "ping"
        CREATE_GENERATION = "create_generation"
        GET_GENERATION = "get_generation"
        LIST_GENERATIONS = "list_generations"
        DELETE_GENERATION = "delete_generation"
        UPSCALE_GENERATION = "upscale_generation"
        ADD_AUDIO = "add_audio"
        GENERATE_IMAGE = "generate_image"
        GET_CREDITS = "get_credits"
        GET_CAMERA_MOTIONS = "get_camera_motions"
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only states the basic action without disclosing behavioral traits. It lacks information on permissions, rate limits, whether the operation is asynchronous (suggested by the 'callback_url' parameter), or what happens upon creation, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, with every word contributing to clarity, making it highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no annotations, no output schema), the description is insufficient. It doesn't explain the creation process, output format, error handling, or how parameters interact, leaving the agent with inadequate context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter details. It mentions input types (text, image, video) but doesn't explain how they map to parameters like 'prompt' or 'keyframes', failing to provide meaningful semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
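For instance, the mapping from "image" input to the `keyframes` parameter is non-obvious. The shape below is a hedged illustration based on the handler's frame0/frame1 check; the inner `type`/`url` fields are an assumption about the Luma API's payload format and should be verified against its current reference:

```python
# Hypothetical image-to-video arguments. The handler only checks that
# keyframes is a dict containing "frame0" or "frame1"; the inner
# type/url structure is an assumed Luma API shape, not validated here.
image_to_video_args = {
    "prompt": "The statue slowly comes to life",
    "keyframes": {
        "frame0": {"type": "image", "url": "https://example.com/statue.jpg"},
    },
}

# The handler requires at least one of frame0/frame1 inside keyframes:
assert any(k in image_to_video_args["keyframes"] for k in ("frame0", "frame1"))
```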

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('creates') and resource ('new video generation'), specifying it can be created from text, image, or existing video. However, it doesn't differentiate this tool from sibling tools like 'generate_image' or 'upscale_generation', which also create visual content, missing an explicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'generate_image' for images or 'upscale_generation' for enhancements. The description implies creation but offers no context on prerequisites, timing, or exclusions, leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
