
MMAudio MCP

Official
by mmaudio

text_to_audio

Convert text descriptions into AI-generated audio content including sound effects, ambient sounds, music, and atmospheric soundscapes.

Instructions

Generate AI-powered audio content from text descriptions using MMAudio technology. Create sound effects, ambient audio, music, and atmospheric soundscapes from natural language descriptions.

Input Schema

Name | Required | Description | Default
---- | -------- | ----------- | -------
prompt | Yes | Describe the audio you want to generate (e.g., "rain falling on leaves", "coffee shop ambiance", "futuristic sci-fi sounds") | (none)
duration | No | Duration of generated audio in seconds | 8
num_steps | No | Number of inference steps (higher = better quality, slower) | 25
cfg_strength | No | Classifier-free guidance strength (higher = more adherence to prompt) | 4.5
negative_prompt | No | Describe what you want to avoid in the generated audio (optional) | "" (empty)
seed | No | Random seed for reproducible results | 0
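
Only prompt is required; a minimal call can rely on the defaults above. An illustrative argument object (the values here are made up for the example):

    {
      "prompt": "rain falling on leaves",
      "duration": 10,
      "negative_prompt": "thunder, voices",
      "seed": 42
    }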

Implementation Reference

  • Executes the text_to_audio tool: validates input, calls external MMAudio API, handles errors, validates response, and returns audio generation result.
    async handleTextToAudio(args) {
      this.ensureConfigured();
    
      try {
        const input = TextToAudioInputSchema.parse(args);
        
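        // Log to stderr: in a stdio MCP server, stdout is reserved for JSON-RPC traffic.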
        console.error(`[MMAudio] Starting text-to-audio generation for prompt: "${input.prompt}"`);
    
        const response = await fetch(`${this.config.baseUrl}/api/text-to-audio`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${this.config.apiKey}`,
            'User-Agent': 'MMAudio-MCP/1.0.0',
          },
          body: JSON.stringify(input),
          // The standard fetch API has no 'timeout' option; AbortSignal.timeout enforces the limit instead.
          signal: AbortSignal.timeout(this.config.timeout),
        });
    
        if (!response.ok) {
          const errorText = await response.text();
          let errorMessage = `HTTP ${response.status}`;
          
          try {
            const errorData = JSON.parse(errorText);
            errorMessage = errorData.error || errorMessage;
          } catch {
            errorMessage = errorText || errorMessage;
          }
    
          if (response.status === 401) {
            throw new McpError(ErrorCode.InvalidRequest, 'Invalid API key. Please check your MMAudio API key.');
          } else if (response.status === 403) {
            throw new McpError(ErrorCode.InvalidRequest, 'Insufficient credits for text-to-audio generation.');
          } else if (response.status === 429) {
            throw new McpError(ErrorCode.InvalidRequest, 'Rate limit exceeded. Please try again later.');
          }
    
          throw new Error(errorMessage);
        }
    
        const result = await response.json();
        const validatedResult = TextToAudioResponseSchema.parse(result);
    
        console.error(`[MMAudio] Text-to-audio generation completed successfully`);
    
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({
                success: true,
                message: 'Audio generated successfully from text',
                result: {
                  audio_url: validatedResult.audio.url,
                  content_type: validatedResult.audio.content_type,
                  file_name: validatedResult.audio.file_name,
                  file_size: validatedResult.audio.file_size,
                  duration: input.duration,
                  prompt: input.prompt,
                }
              }, null, 2),
            },
          ],
        };
      } catch (error) {
        if (error instanceof z.ZodError) {
          throw new McpError(
            ErrorCode.InvalidParams,
            `Invalid input parameters: ${error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`
          );
        }
        throw error;
      }
    }
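    The guard this.ensureConfigured() is invoked before any network work but is not shown on this page. A minimal sketch of what it plausibly does, assuming the server keeps apiKey on this.config as the handler above suggests:
    ensureConfigured() {
      // Hypothetical guard: fail fast if no API key was supplied.
      if (!this.config?.apiKey) {
        throw new McpError(
          ErrorCode.InvalidRequest,
          'MMAudio API key is not configured.'
        );
      }
    }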
  • Zod input schema for validating parameters of the text_to_audio tool.
    const TextToAudioInputSchema = z.object({
      prompt: z.string().min(1, 'Prompt is required'),
      duration: z.number().min(1).max(30).default(8),
      num_steps: z.number().int().min(1).max(50).default(25),
      cfg_strength: z.number().min(1).max(10).default(4.5),
      negative_prompt: z.string().optional().default(''),
      seed: z.number().int().optional().default(0),
    });
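    Because every optional field carries a .default(), parsing a minimal argument object fills in the rest. A quick illustration of that behavior:
    // Only 'prompt' is supplied; Zod injects the defaults declared above.
    const parsed = TextToAudioInputSchema.parse({ prompt: 'rain falling on leaves' });
    // parsed => { prompt: 'rain falling on leaves', duration: 8, num_steps: 25,
    //             cfg_strength: 4.5, negative_prompt: '', seed: 0 }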
  • MCP tool registration in the ListTools response, including name, description, and input schema definition.
    {
      name: 'text_to_audio',
      description: 'Generate AI-powered audio content from text descriptions using MMAudio technology. Create sound effects, ambient audio, music, and atmospheric soundscapes from natural language descriptions.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'Describe the audio you want to generate (e.g., "rain falling on leaves", "coffee shop ambiance", "futuristic sci-fi sounds")',
          },
          duration: {
            type: 'number',
            minimum: 1,
            maximum: 30,
            default: 8,
            description: 'Duration of generated audio in seconds',
          },
          num_steps: {
            type: 'integer',
            minimum: 1,
            maximum: 50,
            default: 25,
            description: 'Number of inference steps (higher = better quality, slower)',
          },
          cfg_strength: {
            type: 'number',
            minimum: 1,
            maximum: 10,
            default: 4.5,
            description: 'Classifier-free guidance strength (higher = more adherence to prompt)',
          },
          negative_prompt: {
            type: 'string',
            description: 'Describe what you want to avoid in the generated audio (optional)',
            default: '',
          },
          seed: {
            type: 'integer',
            default: 0,
            description: 'Random seed for reproducible results',
          },
        },
        required: ['prompt'],
      },
    },
  • Zod response schema for validating the API response of the text_to_audio tool (references the shared AudioResponseSchema).
    const TextToAudioResponseSchema = z.object({
      audio: AudioResponseSchema,
    });
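    AudioResponseSchema itself does not appear on this page. Judging from the fields the handler reads (audio.url, content_type, file_name, file_size), it presumably looks something like this sketch; the field types are assumptions:
    const AudioResponseSchema = z.object({
      url: z.string().url(),      // location of the generated audio file
      content_type: z.string(),   // e.g. 'audio/wav' (assumed)
      file_name: z.string(),
      file_size: z.number(),      // assumed to be a byte count
    });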
  • Dispatch case in CallToolRequest handler that routes text_to_audio calls to the handler function.
    case 'text_to_audio':
      return await this.handleTextToAudio(args);
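    For context, this case typically sits inside the server's CallToolRequest handler. A hedged sketch of how such a dispatch is commonly wired with the MCP SDK (the default branch is an assumption):
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      switch (name) {
        case 'text_to_audio':
          return await this.handleTextToAudio(args);
        default:
          throw new McpError(ErrorCode.MethodNotFound, `Unknown tool: ${name}`);
      }
    });
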
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'AI-powered' and 'MMAudio technology' but doesn't cover critical aspects like rate limits, authentication needs, output format, or potential costs/latency. The description is insufficient for a tool with 6 parameters and no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core functionality, and the second provides concrete examples. Every word earns its place with no redundancy or wasted text, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex audio generation tool with 6 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (audio format, file type, size), error conditions, or practical constraints. The examples help but don't compensate for missing behavioral and output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds no specific parameter information beyond what's in the schema, meeting the baseline of 3 where the schema does the heavy lifting. No additional semantic context is provided for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate AI-powered audio content from text descriptions using MMAudio technology.' It specifies the action (generate), resource (audio content), and technology (MMAudio), and distinguishes itself from siblings by focusing on text-to-audio generation rather than validation or video conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through examples ('sound effects, ambient audio, music, and atmospheric soundscapes'), but lacks explicit guidance on when to use this tool versus alternatives like 'video_to_audio'. It provides context for generating audio from text but doesn't state exclusions or compare to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
