Glama

text_to_voice

Generate three voice previews from text using ElevenLabs technology. Saves audio files to a specified directory for voice design and testing purposes.

Instructions

Create voice previews from a text prompt. Creates three previews with slight variations. Saves the previews to a given directory. If no text is provided, the tool will auto-generate text.

Voice preview files are saved as: voice_design_(generated_voice_id)_(timestamp).mp3

Example file name: voice_design_Ya2J5uIa5Pq14DNPsbC1_20250403_164949.mp3

⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user.

Input Schema

Name                Required    Description    Default
output_directory    No
text                No
voice_description   Yes
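Since the schema itself carries no parameter descriptions, a hedged example of the call arguments may help (the values are illustrative only; `voice_description` is the only required field):

```python
# Illustrative text_to_voice arguments; only voice_description is required.
args = {
    "voice_description": "Calm, middle-aged narrator with a slight rasp",
    "text": None,              # optional; None lets the tool auto-generate text
    "output_directory": None,  # optional; the server falls back to its default path
}

# Mirror the handler's guard: an empty description is rejected.
valid = bool(args.get("voice_description"))
```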

Implementation Reference

  • The handler function that implements the 'text_to_voice' tool. It generates three voice previews from a description using the ElevenLabs API, saves the MP3 audio files, and returns the file paths and generated voice IDs.
    def text_to_voice(
        voice_description: str,
        text: str | None = None,
        output_directory: str | None = None,
    ) -> TextContent:
        if voice_description == "":
            make_error("Voice description is required.")
    
        previews = client.text_to_voice.create_previews(
            voice_description=voice_description,
            text=text,
            auto_generate_text=text is None,
        )
    
        output_path = make_output_path(output_directory, base_path)
    
        generated_voice_ids = []
        output_file_paths = []
    
        for preview in previews.previews:
            output_file_name = make_output_file(
                "voice_design", preview.generated_voice_id, output_path, "mp3", full_id=True
            )
            output_file_paths.append(str(output_file_name))
            generated_voice_ids.append(preview.generated_voice_id)
            audio_bytes = base64.b64decode(preview.audio_base_64)
    
            with open(output_path / output_file_name, "wb") as f:
                f.write(audio_bytes)
    
        return TextContent(
            type="text",
            text=f"Success. Files saved at: {', '.join(output_file_paths)}. Generated voice IDs are: {', '.join(generated_voice_ids)}",
        )
  • Registers the 'text_to_voice' tool with the MCP server using the @mcp.tool decorator; the detailed description doubles as the tool's documentation in its schema.
    @mcp.tool(
        description="""Create voice previews from a text prompt. Creates three previews with slight variations. Saves the previews to a given directory. If no text is provided, the tool will auto-generate text.
    
        Voice preview files are saved as: voice_design_(generated_voice_id)_(timestamp).mp3
    
        Example file name: voice_design_Ya2J5uIa5Pq14DNPsbC1_20250403_164949.mp3
    
        ⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user.
        """
    )
  • Supporting tool 'create_voice_from_preview' that uses the generated voice ID from 'text_to_voice' to add the voice to the library.
    @mcp.tool(
        description="""Add a generated voice to the voice library. Uses the voice ID from the `text_to_voice` tool.
    
        ⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user.
        """
    )
    def create_voice_from_preview(
        generated_voice_id: str,
        voice_name: str,
        voice_description: str,
    ) -> TextContent:
        voice = client.text_to_voice.create_voice_from_preview(
            voice_name=voice_name,
            voice_description=voice_description,
            generated_voice_id=generated_voice_id,
        )
    
        return TextContent(
            type="text",
            text=f"Success. Voice created: {voice.name} with ID: {voice.voice_id}",
        )
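To chain the two tools, an agent has to recover the generated voice IDs from text_to_voice's plain-text response before calling `create_voice_from_preview`. A small parser, assuming the exact success-message format shown in the text_to_voice handler above (the function name is hypothetical):

```python
def parse_generated_voice_ids(response_text: str) -> list[str]:
    # The text_to_voice handler ends its success message with:
    # "Generated voice IDs are: id1, id2, id3"
    marker = "Generated voice IDs are: "
    idx = response_text.find(marker)
    if idx == -1:
        return []
    return [v.strip() for v in response_text[idx + len(marker):].split(",")]
```

A chosen ID can then be passed as `generated_voice_id` to `create_voice_from_preview`.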
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels. It discloses the key behavioral traits: creates three previews with variations, saves files with a specific naming pattern (voice_design_(generated_voice_id)_(timestamp).mp3), auto-generates text if none is provided, and includes a critical cost warning about ElevenLabs API calls. This covers mutation effects, output format, and operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise. It front-loads core functionality, follows with file naming details and example, and ends with critical warnings. Every sentence adds value: no repetition or fluff, making it efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and no output schema, the description does an excellent job covering purpose, behavior, and usage. It explains what the tool does, how it behaves, and critical costs. However, it doesn't detail the return value or error handling, which could be useful for a mutation tool with external API calls.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful context for parameters: 'text' is clarified with auto-generation behavior, 'output_directory' is explained for saving files, and 'voice_description' is implied as required for voice generation. However, it doesn't detail format constraints or examples for parameters like voice_description, leaving some gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create voice previews from a text prompt. Creates three previews with slight variations. Saves the previews to a given directory.' It specifies the action (create), resource (voice previews), and key details (three variations, saving behavior), distinguishing it from sibling tools like text_to_speech or voice_clone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'If no text is provided, the tool will auto-generate text' clarifies optional behavior, and '⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user' gives clear when-to-use and cost considerations, distinguishing it from free alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
