# generate_music
Create full songs or instrumental tracks from text descriptions using AI. Supports Turkish and English lyrics across multiple genres.
## Instructions
Generate music using AI (powered by Suno v4).
Create full songs with vocals or instrumental tracks from text descriptions. Supports Turkish and English lyrics. Cost: ~14 credits per track.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Description of the music to generate (genre, mood, lyrics) | |
| style | No | Music genre (pop, rock, electronic, classical, lo-fi, ambient) | pop |
| instrumental | No | If True, generate without vocals | False |
## Output Schema

| Name | Type | Description |
|---|---|---|
| status | str | "success" when the job completes |
| audio_url | str | URL of the generated audio track |
| job_id | str | Identifier of the generation job |
| credits_used | int | Credits consumed by the request (~14 per track) |
| balance_remaining | int | Credits left on the account |
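The dict returned on success has a fixed shape. A hypothetical example follows; the field names come from the handler's return statement, but the values shown are purely illustrative, not from a real API call:

```python
# Hypothetical example of the dict returned by generate_music on success.
# Values (URLs, credit counts) are illustrative only.
example_result = {
    "status": "success",
    "audio_url": "https://cdn.example.com/tracks/abc123.mp3",
    "job_id": "job_abc123",
    "credits_used": 14,
    "balance_remaining": 486,
}

# The five keys are fixed; consumers can rely on this shape.
assert set(example_result) == {
    "status", "audio_url", "job_id", "credits_used", "balance_remaining"
}
```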
## Implementation Reference
- `src/yaparai/tools/generate.py:110-151` (handler): the main async function implementing the `generate_music` tool. It accepts a prompt, an optional style (pop/rock/electronic/classical/lo-fi/ambient), and an instrumental flag. It calls `YaparAIClient.generate()` with `type='music'` and `mode='suno_music'`, then waits for the result and returns the audio URL.

```python
async def generate_music(
    prompt: str,
    style: Literal["pop", "rock", "electronic", "classical", "lo-fi", "ambient"] = "pop",
    instrumental: bool = False,
) -> dict:
    """
    Generate music using AI (powered by Suno v4).

    Create full songs with vocals or instrumental tracks from text
    descriptions. Supports Turkish and English lyrics.
    Cost: ~14 credits per track.

    Args:
        prompt: Description of the music to generate (genre, mood, lyrics)
        style: Music genre (pop, rock, electronic, classical, lo-fi, ambient)
        instrumental: If True, generate without vocals

    Returns:
        Dict with status, audio_url (the track URL when done), job_id,
        credits_used, and balance_remaining.
    """
    client = YaparAIClient()
    full_prompt = prompt
    if instrumental:
        full_prompt = f"[Instrumental] {prompt}"
    if style:
        full_prompt = f"[{style}] {full_prompt}"
    job = await client.generate({
        "type": "music",
        "prompt": full_prompt,
        "mode": "suno_music",
    })
    result = await client.wait_for_result(job["job_id"], timeout=120)
    return {
        "status": "success",
        "audio_url": result.get("result_url"),
        "job_id": result.get("job_id"),
        "credits_used": job.get("credits_used"),
        "balance_remaining": job.get("balance_remaining"),
    }
```

- Input type definitions for `generate_music`: `prompt` (str), `style` (Literal with 6 genres, default `'pop'`), `instrumental` (bool, default `False`). Returns a dict with `status`, `audio_url`, `job_id`, `credits_used`, `balance_remaining`.
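The handler builds the final prompt by prefixing bracketed tags before sending it to the API. The prefixing logic can be isolated into a small pure function to make the behavior easy to see (the helper name `build_music_prompt` is ours, not part of the codebase):

```python
def build_music_prompt(prompt: str, style: str = "pop", instrumental: bool = False) -> str:
    """Mirror of the prompt-prefixing logic inside generate_music.

    [Instrumental] is prepended first (when requested), then the
    [style] tag wraps the whole thing, so style always ends up outermost.
    """
    full_prompt = prompt
    if instrumental:
        full_prompt = f"[Instrumental] {prompt}"
    if style:
        full_prompt = f"[{style}] {full_prompt}"
    return full_prompt

print(build_music_prompt("rainy day chords", style="lo-fi", instrumental=True))
# → [lo-fi] [Instrumental] rainy day chords
```

Note that because `style` defaults to `"pop"` and is truthy, every prompt gets a style tag unless the caller passes an empty string.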
```python
async def generate_music(
    prompt: str,
    style: Literal["pop", "rock", "electronic", "classical", "lo-fi", "ambient"] = "pop",
    instrumental: bool = False,
) -> dict:
```

- `src/yaparai/server.py:125-126` (registration): registers `generate_music` as an MCP tool on the FastMCP server instance via `mcp.tool(generate_music)`.
```python
mcp.tool(generate_music)
mcp.tool(generate_music_video)
```

- `src/yaparai/server.py:24-29` (registration): import of `generate_music` from the `tools.generate` module into the server for registration.
```python
from yaparai.tools.generate import (
    generate_image,
    generate_video,
    generate_music,
    generate_music_video,
)
```

- `src/yaparai/client.py:126-128` (helper): the `generate()` method on `YaparAIClient` that sends the music generation request (`type='music'`, `mode='suno_music'`) to the `/v1/public/generate` endpoint.
```python
async def generate(self, request: dict) -> dict:
    """Start a generation job."""
    return await self._request("POST", "/v1/public/generate", json=request)
```