YaparAI MCP Server

by ilhankilic

generate_music

Create full songs or instrumental tracks from text descriptions using AI. Supports Turkish and English lyrics across multiple genres.

Instructions

Generate music using AI (powered by Suno v4).

Create full songs with vocals or instrumental tracks from text descriptions. Supports Turkish and English lyrics. Cost: ~14 credits per track.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Description of the music to generate (genre, mood, lyrics) | |
| style | No | Music genre (pop, rock, electronic, classical, lo-fi, ambient) | pop |
| instrumental | No | If True, generate without vocals | False |
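Given the schema above, the arguments of a tool call might look like this (the prompt text and values are illustrative, not taken from the source):

```python
# Hypothetical arguments for a generate_music tool call; the keys follow
# the input schema above, the values are made up for illustration.
arguments = {
    "prompt": "upbeat summer pop song with Turkish lyrics about the sea",
    "style": "pop",          # one of: pop, rock, electronic, classical, lo-fi, ambient
    "instrumental": False,   # True generates a track without vocals
}
```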

Output Schema

No fields defined.
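Although no output schema is declared, the implementation shown under Implementation Reference returns a dict of this shape (every value here is a placeholder, not real API output):

```python
# Illustrative return shape mirroring generate_music's return statement;
# all values are placeholders, not real API output.
result = {
    "status": "success",
    "audio_url": "https://example.com/track.mp3",  # result_url of the finished job
    "job_id": "job_123",
    "credits_used": 14,      # the description cites ~14 credits per track
    "balance_remaining": 986,
}
```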

Implementation Reference

  • The main async function that implements the 'generate_music' tool logic. It accepts a prompt, optional style (pop/rock/electronic/classical/lo-fi/ambient), and an instrumental flag. It calls YaparAIClient.generate() with type='music' and mode='suno_music', then waits for the result and returns audio_url.
    from typing import Literal  # needed for the style parameter's type

    async def generate_music(
        prompt: str,
        style: Literal["pop", "rock", "electronic", "classical", "lo-fi", "ambient"] = "pop",
        instrumental: bool = False,
    ) -> dict:
        """
        Generate music using AI (powered by Suno v4).
    
        Create full songs with vocals or instrumental tracks from text descriptions.
        Supports Turkish and English lyrics.
        Cost: ~14 credits per track.
    
        Args:
            prompt: Description of the music to generate (genre, mood, lyrics)
            style: Music genre (pop, rock, electronic, classical, lo-fi, ambient)
            instrumental: If True, generate without vocals
    
        Returns:
            Dict with job_id, status, result_url (audio URL when done),
            credits_used, and balance_remaining.
        """
        client = YaparAIClient()
        full_prompt = prompt
        if instrumental:
            full_prompt = f"[Instrumental] {prompt}"
        if style:
            full_prompt = f"[{style}] {full_prompt}"
    
        job = await client.generate({
            "type": "music",
            "prompt": full_prompt,
            "mode": "suno_music",
        })
    
        result = await client.wait_for_result(job["job_id"], timeout=120)
        return {
            "status": "success",
            "audio_url": result.get("result_url"),
            "job_id": result.get("job_id"),
            "credits_used": job.get("credits_used"),
            "balance_remaining": job.get("balance_remaining"),
        }
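For illustration, the prompt-tagging logic above can be isolated as a pure helper (the helper name is ours, not from the source):

```python
def build_full_prompt(prompt: str, style: str = "pop", instrumental: bool = False) -> str:
    """Replicates the tag composition used inside generate_music above."""
    full_prompt = prompt
    if instrumental:
        full_prompt = f"[Instrumental] {prompt}"
    if style:
        full_prompt = f"[{style}] {full_prompt}"
    return full_prompt

# build_full_prompt("calm piano piece", style="classical", instrumental=True)
# -> "[classical] [Instrumental] calm piano piece"
```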
  • Input type definitions for generate_music: prompt (str), style (Literal with 6 genres, default 'pop'), instrumental (bool, default False). Returns dict with status, audio_url, job_id, credits_used, balance_remaining.
    async def generate_music(
        prompt: str,
        style: Literal["pop", "rock", "electronic", "classical", "lo-fi", "ambient"] = "pop",
        instrumental: bool = False,
    ) -> dict:
  • Register 'generate_music' as an MCP tool on the FastMCP server instance via mcp.tool(generate_music).
    mcp.tool(generate_music)
    mcp.tool(generate_music_video)
  • Import of generate_music from the tools.generate module into the server for registration.
    from yaparai.tools.generate import (
        generate_image,
        generate_video,
        generate_music,
        generate_music_video,
    )
  • The generate() method on YaparAIClient that sends the music generation request (type='music', mode='suno_music') to the /v1/public/generate endpoint.
    async def generate(self, request: dict) -> dict:
        """Start a generation job."""
        return await self._request("POST", "/v1/public/generate", json=request)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must disclose behavior. It mentions cost and AI model but lacks details on output handling, rate limits, or whether tracks are saved or returned. Incomplete for a generation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, with four sentences covering purpose, features, and cost. No fluff, but could benefit from structured bullet points for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Because an output schema exists, the description may reasonably omit return details, but it lacks context on generation duration, success/failure indicators, or how to retrieve the generated music. Adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. Description adds context about Turkish and English lyrics (relevant to prompt) and cost, but no additional semantic details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates music using AI, with options for vocals or instrumental, which distinguishes it from siblings like generate_music_video. However, it could be more specific about what 'full songs' entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings like generate_music_video or generate_image. It provides cost and language support but no direct comparison to related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
