add_audio
Enhances a video generation by adding audio generated from a text prompt, identified by its generation ID. Useful for producing multimedia content with synchronized soundtracks through the MCP server.
Instructions
Adds audio to a video generation
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| callback_url | No | | |
| generation_id | Yes | | |
| negative_prompt | No | | |
| prompt | Yes | | |
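
For illustration, an arguments payload matching this schema might look like the following sketch; the generation ID is a placeholder, not a real value:

```python
# Hypothetical arguments for the add_audio tool.
arguments = {
    "generation_id": "your-generation-id",          # required: ID of an existing video generation
    "prompt": "gentle rain with distant thunder",   # required: describes the audio to add
    "negative_prompt": "music, voices",             # optional
}
```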
Implementation Reference
- src/luma_ai_mcp_server/server.py:401-427 (handler): The main asynchronous handler function for the `add_audio` tool. It validates inputs, constructs the request to the Luma API's `/generations/{generation_id}/audio` endpoint, and returns the result status.

  ```python
  async def add_audio(parameters: dict) -> str:
      """Add audio to a video generation."""
      try:
          generation_id = parameters.get("generation_id")
          if not generation_id:
              raise ValueError("generation_id parameter is required")

          prompt = parameters.get("prompt")
          if not prompt:
              raise ValueError("prompt parameter is required")

          request_data = {"generation_type": "add_audio", "prompt": prompt}
          if "negative_prompt" in parameters:
              request_data["negative_prompt"] = parameters["negative_prompt"]

          result = await _make_luma_request(
              "POST", f"/generations/{generation_id}/audio", request_data
          )

          return (
              f"Audio generation initiated for generation {generation_id}\n"
              f"Status: {result['state']}\n"
              f"Prompt: {prompt}"
          )
      except Exception as e:
          logger.error(f"Error in add_audio: {str(e)}", exc_info=True)
          return f"Error adding audio to generation {generation_id}: {str(e)}"
  ```
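  The handler delegates the HTTP call to a shared `_make_luma_request` helper that is not reproduced here. As a rough sketch of what such a helper typically does, assuming httpx, a `LUMA_API_KEY` environment variable, and the Dream Machine base URL (all assumptions, not confirmed from this excerpt):

  ```python
  import os

  import httpx

  LUMA_API_BASE = "https://api.lumalabs.ai/dream-machine/v1"  # assumed base URL

  async def _make_luma_request(method: str, path: str, data: dict | None = None) -> dict:
      """Hypothetical sketch: send an authenticated request to the Luma API, return parsed JSON."""
      headers = {"Authorization": f"Bearer {os.environ['LUMA_API_KEY']}"}  # assumed env var
      async with httpx.AsyncClient(base_url=LUMA_API_BASE, headers=headers) as client:
          response = await client.request(method, path, json=data)
          response.raise_for_status()
          return response.json()
  ```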
- Pydantic BaseModel defining the input schema for the add_audio tool, including required generation_id and prompt, and optional negative_prompt and callback_url.

  ```python
  class AddAudioInput(BaseModel):
      generation_id: str
      prompt: str
      negative_prompt: Optional[str] = None
      callback_url: Optional[str] = None
  ```
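  As a quick, illustrative check of the validation this model encodes (assuming `AddAudioInput` is imported from the server module):

  ```python
  from pydantic import ValidationError

  # Valid: only generation_id and prompt are required.
  AddAudioInput(generation_id="abc123", prompt="soft piano over rain")

  # Missing prompt fails validation before any API call is attempted.
  try:
      AddAudioInput(generation_id="abc123")
  except ValidationError as e:
      print(e)
  ```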
- src/luma_ai_mcp_server/server.py:528-532 (registration): Registration of the 'add_audio' tool in the MCP server's list_tools() method, specifying name, description, and input schema.

  ```python
  Tool(
      name=LumaTools.ADD_AUDIO,
      description="Adds audio to a video generation",
      inputSchema=AddAudioInput.model_json_schema(),
  ),
  ```
- src/luma_ai_mcp_server/server.py:579-581 (registration): Dispatch handler in the call_tool() method that routes calls to the add_audio function.

  ```python
  case LumaTools.ADD_AUDIO:
      result = await add_audio(arguments)
      return [TextContent(type="text", text=result)]
  ```
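  For context, a client-side invocation of this tool over MCP might look roughly like the sketch below, using the MCP Python SDK's stdio client. The server launch command and the generation ID are placeholders for illustration only:

  ```python
  import asyncio

  from mcp import ClientSession, StdioServerParameters
  from mcp.client.stdio import stdio_client

  async def main() -> None:
      # Hypothetical: launch the Luma MCP server over stdio and call add_audio.
      params = StdioServerParameters(command="python", args=["-m", "luma_ai_mcp_server"])
      async with stdio_client(params) as (read, write):
          async with ClientSession(read, write) as session:
              await session.initialize()
              result = await session.call_tool(
                  "add_audio",
                  {"generation_id": "your-generation-id", "prompt": "ambient city sounds"},
              )
              print(result.content[0].text)

  asyncio.run(main())
  ```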
- Enum defining the tool names, including ADD_AUDIO = "add_audio", used in registration and dispatch.

  ```python
  class LumaTools(str, Enum):
      PING = "ping"
      CREATE_GENERATION = "create_generation"
      GET_GENERATION = "get_generation"
      LIST_GENERATIONS = "list_generations"
      DELETE_GENERATION = "delete_generation"
      UPSCALE_GENERATION = "upscale_generation"
      ADD_AUDIO = "add_audio"
      GENERATE_IMAGE = "generate_image"
      GET_CREDITS = "get_credits"
      GET_CAMERA_MOTIONS = "get_camera_motions"
  ```