text_to_speech

Convert written text into spoken audio using AI-generated voices for accessibility, content creation, or audio production needs.

Instructions

Convert text to speech using AI voices

Input Schema

| Name        | Required | Description                                            | Default |
| ----------- | -------- | ------------------------------------------------------ | ------- |
| text        | Yes      | Text to convert to speech                              |         |
| voice_id    | Yes      | Voice model ID to use (use get_all_voices to find IDs) |         |
| webhook_url | No       | URL for callback upon completion                       |         |

Implementation Reference

  • The main handler function that executes the text_to_speech tool. It validates inputs, makes an API call to the /TextToSpeech endpoint, and returns task status information.
    private async handleTextToSpeech(args: any) {
      if (!args.text || !args.voice_id) {
        throw new McpError(ErrorCode.InvalidParams, "text and voice_id are required");
      }
    
      const response = await this.axiosInstance.post("/TextToSpeech", {
        text: args.text,
        voice_id: args.voice_id,
        webhook_url: args.webhook_url,
      });
    
      return {
        content: [
          {
            type: "text",
            text: `Text-to-speech conversion started!\n\n${JSON.stringify(response.data, null, 2)}\n\nUse get_conversion_by_id with the task_id to check the status.`,
          },
        ],
      };
    }
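  • Note that the handler embeds the raw API response inside a prose-wrapped text content block, so a client must parse the task identifier back out before it can poll get_conversion_by_id. A minimal sketch of that parsing, assuming the response JSON carries a task_id field (the field name is an assumption, not confirmed by the MusicGPT API docs):

    ```typescript
    // Sketch: pull the task_id out of the handler's text content so a client
    // can poll get_conversion_by_id. The task_id field name is assumed.
    function extractTaskId(contentText: string): string | null {
      // The handler wraps the raw API JSON between two prose lines, so grab
      // the first-to-last brace span and parse it as JSON.
      const match = contentText.match(/\{[\s\S]*\}/);
      if (!match) return null;
      try {
        const data = JSON.parse(match[0]);
        return typeof data.task_id === "string" ? data.task_id : null;
      } catch {
        return null;
      }
    }

    // Example against the handler's output format shown above:
    const sample =
      "Text-to-speech conversion started!\n\n" +
      JSON.stringify({ task_id: "abc123", status: "PENDING" }, null, 2) +
      "\n\nUse get_conversion_by_id with the task_id to check the status.";
    console.log(extractTaskId(sample)); // "abc123"
    ```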
  • Input schema and metadata definition for the text_to_speech tool, including required parameters text and voice_id.
    {
      name: "text_to_speech",
      description: "Convert text to speech using AI voices",
      inputSchema: {
        type: "object" as const,
        properties: {
          text: {
            type: "string",
            description: "Text to convert to speech",
          },
          voice_id: {
            type: "string",
            description: "Voice model ID to use (use get_all_voices to find IDs)",
          },
          webhook_url: {
            type: "string",
            description: "URL for callback upon completion",
          },
        },
        required: ["text", "voice_id"],
      },
    },
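  • For context, an MCP client invokes this tool with a standard tools/call request. A sketch of that request, with placeholder values (the voice_id shown is illustrative, not a real MusicGPT voice):

    ```typescript
    // Illustrative JSON-RPC tools/call request for text_to_speech.
    // The voice_id value is a placeholder obtained via get_all_voices.
    const request = {
      jsonrpc: "2.0" as const,
      id: 1,
      method: "tools/call",
      params: {
        name: "text_to_speech",
        arguments: {
          text: "Hello from the MusicGPT MCP server.",
          voice_id: "VOICE_ID_FROM_get_all_voices",
          // webhook_url is optional; omit it and poll get_conversion_by_id instead.
        },
      },
    };
    ```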
  • src/index.ts:679-680 (registration)
    Registration of the text_to_speech handler in the central tool dispatch switch statement within CallToolRequestSchema handler.
    case "text_to_speech":
      return await this.handleTextToSpeech(args);
  • Enum constant 'TEXT_TO_SPEECH' used in conversionType for get_conversion_by_id helper tool to query TTS task status.
    "TEXT_TO_SPEECH",
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool converts text to speech but doesn't cover critical aspects like whether it's a synchronous or asynchronous operation (implied by the webhook_url parameter), rate limits, authentication needs, output format, or error handling. This leaves significant gaps for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without any fluff or redundancy. It's appropriately sized and front-loaded, making it easy for an AI agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a text-to-speech tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral traits (e.g., async nature, audio format), doesn't reference related tools like 'get_all_voices', and provides no information on return values or error cases, leaving the agent with incomplete context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional semantic context beyond what's in the schema (e.g., it doesn't explain voice_id selection strategies or webhook_url usage scenarios), meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('convert') and resource ('text to speech'), and it adds context about using AI voices. However, it doesn't explicitly differentiate from sibling tools like 'voice_changer' or 'sing_over_instrumental', which also involve voice/audio processing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_all_voices' (which is referenced in the schema but not in the description) or explain scenarios where this tool is preferred over others such as 'generate_music' or 'transcribe_audio'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
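As an illustration of the kind of disclosure and guidance the review calls for, a revised tool definition might look like the sketch below. The wording is a suggestion, not the server's actual metadata, and the referenced sibling tools are taken from the review text:

```typescript
// Hypothetical revision of the tool metadata addressing the review's gaps:
// async behavior, related tools, and usage guidance. Not the actual server code.
const textToSpeechTool = {
  name: "text_to_speech",
  description:
    "Convert text to speech using AI voices. Asynchronous: returns a task_id " +
    "immediately; poll get_conversion_by_id (or supply webhook_url) to retrieve " +
    "the finished audio. Call get_all_voices first to pick a voice_id. Use this " +
    "for spoken narration of text, not for music generation (generate_music) " +
    "or modifying existing audio (voice_changer).",
  inputSchema: {
    type: "object" as const,
    properties: {
      text: { type: "string", description: "Text to convert to speech" },
      voice_id: {
        type: "string",
        description: "Voice model ID to use (use get_all_voices to find IDs)",
      },
      webhook_url: {
        type: "string",
        description: "Optional URL for a callback upon completion",
      },
    },
    required: ["text", "voice_id"],
  },
};
```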

