create_voice_from_preview

Save a generated voice preview to your ElevenLabs voice library for future use, allowing you to store custom voices created through the text-to-voice design flow.

Instructions

Add a generated voice to the voice library. Uses the voice ID from the text_to_voice tool.

⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| generated_voice_id | Yes | | |
| voice_description | Yes | | |
| voice_name | Yes | | |
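A call to this tool might pass arguments like the following. All values are illustrative; in practice the `generated_voice_id` comes from a prior `text_to_voice` call.

```python
# Illustrative arguments for create_voice_from_preview.
# The generated_voice_id value is a placeholder, not a real preview ID.
arguments = {
    "generated_voice_id": "abc123previewid",
    "voice_name": "Warm Narrator",
    "voice_description": "A calm, warm narration voice with a slight rasp.",
}
```

All three fields are required by the input schema; none has a default.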

Implementation Reference

  • The handler function for the `create_voice_from_preview` MCP tool. It is decorated with `@mcp.tool`, which also registers it. The function calls the ElevenLabs client API to create a voice from a preview and returns a success message.

```python
# Module context (not shown in the excerpt; these are the usual import
# paths for the MCP Python SDK and the ElevenLabs SDK):
from mcp.server.fastmcp import FastMCP
from mcp.types import TextContent
from elevenlabs.client import ElevenLabs

mcp = FastMCP("ElevenLabs")
client = ElevenLabs()  # assumes the API key is configured via the environment


@mcp.tool(
    description="""Add a generated voice to the voice library. Uses the voice ID from the `text_to_voice` tool.

    ⚠️ COST WARNING: This tool makes an API call to ElevenLabs which may incur costs. Only use when explicitly requested by the user.
    """
)
def create_voice_from_preview(
    generated_voice_id: str,
    voice_name: str,
    voice_description: str,
) -> TextContent:
    voice = client.text_to_voice.create_voice_from_preview(
        voice_name=voice_name,
        voice_description=voice_description,
        generated_voice_id=generated_voice_id,
    )

    return TextContent(
        type="text",
        text=f"Success. Voice created: {voice.name} with ID:{voice.voice_id}",
    )
```
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: it's a write operation ('Add a generated voice'), has external dependencies ('API call to ElevenLabs'), and includes important cost implications. The description doesn't cover rate limits, authentication needs, or what happens on failure, but provides substantial practical guidance beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
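If the server later adds MCP tool annotations, the behavioral traits flagged above could be declared structurally rather than only in prose. The sketch below uses the hint fields defined by the MCP specification's `ToolAnnotations`; the chosen values reflect this review's reading of the tool, not anything the server actually declares.

```python
# Sketch: MCP ToolAnnotations hint fields (per the MCP specification)
# that would disclose this tool's behavior structurally. Values are an
# interpretation of the review above, not the server's own metadata.
annotations = {
    "readOnlyHint": False,     # write operation: adds a voice to the library
    "destructiveHint": False,  # additive; does not delete or overwrite data
    "openWorldHint": True,     # depends on the external ElevenLabs API
}
```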

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with two focused sentences: the first states the core functionality, the second provides critical cost warning. Every word earns its place, with no redundancy or unnecessary elaboration. The warning emoji and formatting enhance clarity without adding bulk.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter write tool with no annotations and no output schema, the description provides good behavioral context (cost warning, dependency) but insufficient parameter guidance. It covers the 'why' and 'when' well but leaves gaps in the 'how' regarding parameter usage. Given the complexity of creating a voice resource, more detail on parameter expectations would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. While it mentions 'voice ID from the `text_to_voice` tool' (mapping to generated_voice_id), it provides no context for voice_name or voice_description parameters. The description adds minimal value beyond what's inferable from parameter titles, leaving two of three parameters without semantic explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Add a generated voice to the voice library') and resource ('voice library'), with specific reference to the source ('Uses the voice ID from the `text_to_voice` tool'). It distinguishes this tool from siblings like `get_voice` or `search_voice_library` by focusing on creation rather than retrieval. However, it doesn't explicitly contrast with other voice-related tools like `voice_clone` or `text_to_speech`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Uses the voice ID from the `text_to_voice` tool') and includes an explicit warning about costs ('⚠️ COST WARNING... Only use when explicitly requested by the user'). This gives strong guidance on prerequisites and user confirmation requirements. It doesn't explicitly name alternatives or specify when not to use it beyond the cost warning.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
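Pulling the review's suggestions together, a sharpened description might read like the sketch below. The wording is a proposal assembled from facts stated on this page (write semantics, the `text_to_voice` prerequisite, the cost warning, the `voice_clone` sibling); the claim that `voice_clone` starts from audio samples is an assumption.

```python
# Proposed description text for the tool. Assembled from the page's own
# facts; the voice_clone contrast is an assumption about that sibling tool.
PROPOSED_DESCRIPTION = (
    "Add a generated voice preview to the voice library. Requires a "
    "generated_voice_id from a prior text_to_voice call; use voice_clone "
    "instead when starting from audio samples. COST WARNING: makes a "
    "billable ElevenLabs API call. Only use when explicitly requested "
    "by the user."
)
```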
