neuroverse_synthesize

Convert text to speech audio files using Coqui TTS technology. Specify text and language to generate spoken audio output for multilingual applications.

Instructions

Synthesize text to speech using Coqui TTS.

Args:

  • text (string): Text to synthesize

  • language (string): Language code

Returns: JSON with the path to the generated audio file

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| text | Yes | Text to synthesize into speech | — |
| language | No | Language code (e.g. en, ta, hi) | en |
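To illustrate the defaulting behavior in the schema above — `language` falls back to `"en"` when omitted — here is a minimal sketch. The `applyDefaults` helper is illustrative only, not part of the server code:

```typescript
// Illustrative only: mirrors the schema's default, where `language`
// falls back to "en" when the caller omits it.
interface SynthesizeParams {
  text: string;
  language?: string;
}

function applyDefaults(params: SynthesizeParams): Required<SynthesizeParams> {
  return { text: params.text, language: params.language ?? "en" };
}

console.log(applyDefaults({ text: "Vanakkam" }).language); // "en"
```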

Implementation Reference

  • The core implementation of the speech synthesis logic used by the "neuroverse_synthesize" tool:
    import { existsSync, mkdirSync, writeFileSync } from "node:fs";
    import { join } from "node:path";
    import { randomUUID } from "node:crypto";
    import axios from "axios";

    // `config` is provided by the server's configuration module.
    export async function synthesizeSpeech(text: string, language: string = "en"): Promise<string> {
      if (!config.coquiEndpoint) throw new Error("Coqui TTS endpoint not configured.");
      if (!config.outputDirectory) throw new Error("Output directory not configured.");

      // Ensure the output directory exists before writing audio files.
      if (!existsSync(config.outputDirectory)) {
        mkdirSync(config.outputDirectory, { recursive: true });
      }

      // Unique filename per request, so concurrent calls never clash.
      const outputPath = join(config.outputDirectory, `${randomUUID()}.wav`);

      try {
        const response = await axios.get(config.coquiEndpoint, {
          params: { text, language_id: language },
          responseType: "arraybuffer", // download the binary audio payload
        });

        writeFileSync(outputPath, response.data);
        return outputPath;
      } catch (e) {
        const error = e as Error;
        throw new Error(`TTS generation failed: ${error.message}`);
      }
    }
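The output-path construction above can be sketched in isolation: each call produces a fresh UUID-named `.wav` path under the configured directory, so concurrent requests cannot overwrite each other. `makeOutputPath` and the `"output"` directory are illustrative placeholders:

```typescript
import { join } from "node:path";
import { randomUUID } from "node:crypto";

// Sketch of the path construction used by synthesizeSpeech:
// a UUID-named .wav file under the given output directory.
function makeOutputPath(outputDirectory: string): string {
  return join(outputDirectory, `${randomUUID()}.wav`);
}

console.log(makeOutputPath("output")); // e.g. output/<uuid>.wav, unique per call
```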
  • Registration of the "neuroverse_synthesize" tool, which invokes the synthesizeSpeech handler.
    server.registerTool(
      "neuroverse_synthesize",
      {
        title: "Synthesize Speech",
        description: `Synthesize text to speech using Coqui TTS.
    
    Args:
      - text (string): Text to synthesize
      - language (string): Language code
    
    Returns:
      JSON with the path to the generated audio file`,
        inputSchema: SynthesizeSchema,
        annotations: {
          readOnlyHint: false,
          destructiveHint: false,
          idempotentHint: false,
          openWorldHint: true,
        },
      },
      async (params) => {
        const audio_path = await synthesizeSpeech(params.text, params.language);
        return {
          content: [{ type: "text" as const, text: JSON.stringify({ audio_path }, null, 2) }],
        };
      }
    );
  • The input schema for the "neuroverse_synthesize" tool.
    const SynthesizeSchema = z
      .object({
        text: z.string().min(1).describe("Text to synthesize into speech"),
        language: z.string().default("en").describe("Language code (e.g. en, ta, hi)"),
      })
      .strict();
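Given the registration above, the caller receives the audio path as JSON embedded in a text content item. A minimal sketch of unpacking such a result — the sample payload and its `audio_path` value are made-up placeholders:

```typescript
// Illustrative sample of the tool's result shape; "output/demo.wav"
// is a placeholder, not a real generated file.
const result = {
  content: [
    { type: "text" as const, text: JSON.stringify({ audio_path: "output/demo.wav" }, null, 2) },
  ],
};

const { audio_path } = JSON.parse(result.content[0].text) as { audio_path: string };
console.log(audio_path); // "output/demo.wav"
```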
