# generate_music

Create music from text descriptions by specifying genre, mood, and instruments. Generate audio tracks with customizable duration and lyrics support.
## Instructions

Generate music from text descriptions. Use `list_models` with `category='audio'` to discover available models.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Description of the music (genre, mood, instruments) | |
| model | No | Model ID (e.g., 'fal-ai/lyria2', 'fal-ai/stable-audio-25/text-to-audio'). Use list_models to see options. | fal-ai/lyria2 |
| duration_seconds | No | Duration in seconds (5–300) | 30 |
| negative_prompt | No | What to avoid in the audio (e.g., 'vocals, distortion, noise') | |
| lyrics_prompt | No | Lyrics for vocal music generation. Only used with models that support lyrics (e.g., MiniMax). Format: `[verse]\nLyric line 1\n[chorus]\nChorus line` | |
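A call to `generate_music` supplies arguments matching this schema; only `prompt` is required. A sketch of a typical arguments object (the prompt text and values below are illustrative, not from the source):

```python
# Illustrative generate_music arguments; only "prompt" is required.
arguments = {
    "prompt": "Uplifting lo-fi hip hop with warm piano and soft vinyl crackle",
    "model": "fal-ai/lyria2",        # schema default when omitted
    "duration_seconds": 45,          # allowed range: 5-300, default 30
    "negative_prompt": "vocals, distortion, noise",
}
```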
## Implementation Reference
- The main handler function that executes the generate_music tool logic: resolves the model, prepares arguments, executes via the queue strategy, and extracts and returns the audio URL or an error message.

```python
async def handle_generate_music(
    arguments: Dict[str, Any],
    registry: ModelRegistry,
    queue_strategy: QueueStrategy,
) -> List[TextContent]:
    """Handle the generate_music tool."""
    model_input = arguments.get("model", "fal-ai/lyria2")
    try:
        model_id = await registry.resolve_model_id(model_input)
    except ValueError as e:
        return [
            TextContent(
                type="text",
                text=f"❌ {e}. Use list_models to see available options.",
            )
        ]

    duration = arguments.get("duration_seconds", 30)

    # Build arguments for the music model
    music_args: Dict[str, Any] = {
        "prompt": arguments["prompt"],
        "duration_seconds": duration,
    }

    # Add optional parameters if provided
    if "negative_prompt" in arguments:
        music_args["negative_prompt"] = arguments["negative_prompt"]
    if "lyrics_prompt" in arguments:
        music_args["lyrics_prompt"] = arguments["lyrics_prompt"]

    # Use queue strategy with timeout protection
    logger.info("Starting music generation with %s (%ds)", model_id, duration)
    try:
        music_result = await asyncio.wait_for(
            queue_strategy.execute(model_id, music_args, timeout=120),
            timeout=125,  # Slightly longer than internal timeout
        )
    except asyncio.TimeoutError:
        return [
            TextContent(
                type="text",
                text=f"❌ Music generation timed out after 120 seconds.\nModel: {model_id}",
            )
        ]

    if music_result is None:
        return [
            TextContent(
                type="text",
                text=f"❌ Music generation failed or timed out with {model_id}",
            )
        ]

    # Check for error in response
    if "error" in music_result:
        error_msg = music_result.get("error", "Unknown error")
        return [
            TextContent(
                type="text",
                text=f"❌ Music generation failed: {error_msg}",
            )
        ]

    # Extract audio URL from result
    audio_dict = music_result.get("audio", {})
    if isinstance(audio_dict, dict):
        audio_url = audio_dict.get("url")
    else:
        audio_url = music_result.get("audio_url")

    if audio_url:
        return [
            TextContent(
                type="text",
                text=f"🎵 Music generated with {model_id}: {audio_url}",
            )
        ]

    return [
        TextContent(
            type="text",
            text="❌ Music generation completed but no audio URL was returned. Please try again.",
        )
    ]
```
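The handler's URL extraction accepts two response shapes: a nested `audio` object with a `url` key, or a flat `audio_url` field. A standalone sketch of that logic (the helper name `extract_audio_url` is ours, not part of the source):

```python
from typing import Any, Dict, Optional


def extract_audio_url(music_result: Dict[str, Any]) -> Optional[str]:
    """Mirror the handler's extraction: prefer audio.url, else fall back to audio_url."""
    audio_dict = music_result.get("audio", {})
    if isinstance(audio_dict, dict):
        return audio_dict.get("url")
    return music_result.get("audio_url")


# Nested shape
print(extract_audio_url({"audio": {"url": "https://example.com/a.mp3"}}))
# Flat shape (audio is not a dict, so the fallback key is used)
print(extract_audio_url({"audio": "ref", "audio_url": "https://example.com/b.mp3"}))
```

Note that, as in the handler, a missing `audio` key defaults to an empty dict, so the `audio_url` fallback is only consulted when `audio` is present but not a dict.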
- Defines the input schema, description, and `Tool` object for the generate_music tool.

```python
AUDIO_TOOLS: List[Tool] = [
    Tool(
        name="generate_music",
        description="Generate music from text descriptions. Use list_models with category='audio' to discover available models.",
        inputSchema={
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": "Description of the music (genre, mood, instruments)",
                },
                "model": {
                    "type": "string",
                    "default": "fal-ai/lyria2",
                    "description": "Model ID (e.g., 'fal-ai/lyria2', 'fal-ai/stable-audio-25/text-to-audio'). Use list_models to see options.",
                },
                "duration_seconds": {
                    "type": "integer",
                    "default": 30,
                    "minimum": 5,
                    "maximum": 300,
                    "description": "Duration in seconds",
                },
                "negative_prompt": {
                    "type": "string",
                    "description": "What to avoid in the audio (e.g., 'vocals, distortion, noise')",
                },
                "lyrics_prompt": {
                    "type": "string",
                    "description": "Lyrics for vocal music generation. Only used with models that support lyrics (e.g., MiniMax). Format: [verse]\\nLyric line 1\\n[chorus]\\nChorus line",
                },
            },
            "required": ["prompt"],
        },
    ),
]
```
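Since only `prompt` is required, callers can lean on the schema's declared defaults. A minimal stdlib-only sketch of filling defaults from a schema of this shape (the helper `apply_schema_defaults` is illustrative, not how the server itself processes arguments):

```python
from typing import Any, Dict


def apply_schema_defaults(schema: Dict[str, Any], args: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of args with schema-declared defaults filled in."""
    merged = dict(args)
    for name, spec in schema.get("properties", {}).items():
        if name not in merged and "default" in spec:
            merged[name] = spec["default"]
    missing = [k for k in schema.get("required", []) if k not in merged]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return merged


schema = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
        "model": {"type": "string", "default": "fal-ai/lyria2"},
        "duration_seconds": {"type": "integer", "default": 30},
    },
    "required": ["prompt"],
}
print(apply_schema_defaults(schema, {"prompt": "ambient drone"}))
# → {'prompt': 'ambient drone', 'model': 'fal-ai/lyria2', 'duration_seconds': 30}
```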
- src/fal_mcp_server/tools/__init__.py:14-16 (registration) — registers the generate_music schema by including AUDIO_TOOLS in the complete list of available tools (ALL_TOOLS).

```python
ALL_TOOLS = (
    UTILITY_TOOLS
    + IMAGE_TOOLS
    + IMAGE_EDITING_TOOLS
    + VIDEO_TOOLS
    + AUDIO_TOOLS
)
```
- src/fal_mcp_server/server.py:61-85 (registration) — registers the handler mapping for generate_music in the TOOL_HANDLERS dictionary used by the MCP server to route tool calls.

```python
TOOL_HANDLERS = {
    # Utility tools (no queue needed)
    "list_models": handle_list_models,
    "recommend_model": handle_recommend_model,
    "get_pricing": handle_get_pricing,
    "get_usage": handle_get_usage,
    "upload_file": handle_upload_file,
    # Image generation tools
    "generate_image": handle_generate_image,
    "generate_image_structured": handle_generate_image_structured,
    "generate_image_from_image": handle_generate_image_from_image,
    # Image editing tools
    "remove_background": handle_remove_background,
    "upscale_image": handle_upscale_image,
    "edit_image": handle_edit_image,
    "inpaint_image": handle_inpaint_image,
    "resize_image": handle_resize_image,
    "compose_images": handle_compose_images,
    # Video tools
    "generate_video": handle_generate_video,
    "generate_video_from_image": handle_generate_video_from_image,
    "generate_video_from_video": handle_generate_video_from_video,
    # Audio tools
    "generate_music": handle_generate_music,
}
```
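With that mapping in place, routing a tool call reduces to a dictionary lookup by tool name followed by an await. A minimal async dispatch sketch with a stub handler (the stub's name, simplified signature, and return strings are ours, not the server's):

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict, List


# Stub standing in for handle_generate_music (real handlers also take
# a registry and a queue strategy, and return TextContent objects).
async def handle_generate_music_stub(arguments: Dict[str, Any]) -> List[str]:
    return [f"🎵 would generate: {arguments['prompt']}"]


TOOL_HANDLERS: Dict[str, Callable[[Dict[str, Any]], Awaitable[List[str]]]] = {
    "generate_music": handle_generate_music_stub,
}


async def call_tool(name: str, arguments: Dict[str, Any]) -> List[str]:
    """Route a tool call to its registered handler."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return [f"❌ Unknown tool: {name}"]
    return await handler(arguments)


print(asyncio.run(call_tool("generate_music", {"prompt": "synthwave"})))
# → ['🎵 would generate: synthwave']
```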