venice_text_to_speech

Convert text to speech audio using Venice AI's TTS models and voice options for accessible audio content creation.

Instructions

Convert text to speech audio using Venice AI

Input Schema

| Name  | Required | Description                                | Default    |
| ----- | -------- | ------------------------------------------ | ---------- |
| text  | Yes      | Text to convert to speech                  |            |
| model | No       | TTS model                                  | tts-kokoro |
| voice | No       | Voice ID (e.g., af_sky, af_bella, am_adam) | af_sky     |
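As a hypothetical illustration, a tool call matching this schema might carry arguments like the following (the argument object is ours; only the parameter names and defaults come from the schema above):

```typescript
// Hypothetical arguments for a venice_text_to_speech call; the values are
// illustrative, the keys and defaults match the input schema above.
const args = {
  text: "Welcome to Venice AI text to speech.", // required
  model: "tts-kokoro", // optional; this is the default
  voice: "af_bella",   // optional; defaults to af_sky
};
```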

Implementation Reference

  • The handler function for the venice_text_to_speech tool. It sends a POST request to Venice AI's /audio/speech endpoint with the provided text, model, and voice parameters, handles API errors, and returns the generated audio as base64-encoded data in the response.
```typescript
async ({ text, model, voice }) => {
  const response = await veniceAPI("/audio/speech", {
    method: "POST",
    body: JSON.stringify({ model, input: text, voice }),
  });
  if (!response.ok) {
    const data = await response.json() as { error?: { message?: string } };
    return {
      content: [{
        type: "text" as const,
        text: `Error: ${data.error?.message || response.statusText}`,
      }],
    };
  }
  const arrayBuffer = await response.arrayBuffer();
  const base64 = Buffer.from(arrayBuffer).toString("base64");
  return {
    content: [{
      type: "text" as const,
      text: `Audio generated (${Math.round(base64.length / 1024)}KB MP3): data:audio/mp3;base64,${base64.substring(0, 50)}...`,
    }],
  };
}
```
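The handler depends on a veniceAPI helper that is not shown on this page. A minimal sketch of what such a wrapper could look like, assuming a base URL and bearer-token header in the style of OpenAI-compatible APIs (both are assumptions, not taken from the source):

```typescript
// Hypothetical sketch of the veniceAPI helper used by the handler; it is not
// shown on this page. The base URL and auth header shape are assumptions.
const VENICE_BASE_URL = "https://api.venice.ai/api/v1";

interface VeniceRequest {
  url: string;
  init: RequestInit;
}

// Request construction is split out so it can be checked without a network call.
function buildVeniceRequest(
  path: string,
  init: RequestInit = {},
  apiKey = "",
): VeniceRequest {
  return {
    url: `${VENICE_BASE_URL}${path}`,
    init: {
      ...init,
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
        ...(init.headers as Record<string, string> | undefined),
      },
    },
  };
}

async function veniceAPI(path: string, init: RequestInit = {}): Promise<Response> {
  // In the real server the key would come from configuration, e.g. an env var.
  const { url, init: reqInit } = buildVeniceRequest(path, init, "YOUR_VENICE_API_KEY");
  return fetch(url, reqInit);
}
```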
  • Zod schema defining the input parameters for the venice_text_to_speech tool: text (required), model (optional, default 'tts-kokoro'), voice (optional, default 'af_sky').
```typescript
{
  text: z.string().describe("Text to convert to speech"),
  model: z.string().optional().default("tts-kokoro").describe("TTS model"),
  voice: z.string().optional().default("af_sky").describe("Voice ID (e.g., af_sky, af_bella, am_adam)"),
}
```
  • Direct registration of the venice_text_to_speech tool using server.tool(), including name, description, input schema, and handler function.
```typescript
server.tool(
  "venice_text_to_speech",
  "Convert text to speech audio using Venice AI",
  {
    text: z.string().describe("Text to convert to speech"),
    model: z.string().optional().default("tts-kokoro").describe("TTS model"),
    voice: z.string().optional().default("af_sky").describe("Voice ID (e.g., af_sky, af_bella, am_adam)"),
  },
  async ({ text, model, voice }) => {
    const response = await veniceAPI("/audio/speech", {
      method: "POST",
      body: JSON.stringify({ model, input: text, voice }),
    });
    if (!response.ok) {
      const data = await response.json() as { error?: { message?: string } };
      return {
        content: [{
          type: "text" as const,
          text: `Error: ${data.error?.message || response.statusText}`,
        }],
      };
    }
    const arrayBuffer = await response.arrayBuffer();
    const base64 = Buffer.from(arrayBuffer).toString("base64");
    return {
      content: [{
        type: "text" as const,
        text: `Audio generated (${Math.round(base64.length / 1024)}KB MP3): data:audio/mp3;base64,${base64.substring(0, 50)}...`,
      }],
    };
  },
);
```
  • src/index.ts:16 (registration)
    Invocation of registerInferenceTools, which registers venice_text_to_speech, among other inference tools, with the MCP server.
    registerInferenceTools(server);

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/georgeglarson/venice-mcp'
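The same endpoint can be called programmatically. A minimal TypeScript sketch (the response schema is not documented on this page, so the result is left as unknown):

```typescript
// Minimal sketch: fetch this server's metadata from the Glama MCP directory API.
const GLAMA_MCP_API = "https://glama.ai/api/mcp/v1";

// URL construction is a separate function so it can be checked without a request.
function serverUrl(owner: string, name: string): string {
  return `${GLAMA_MCP_API}/servers/${owner}/${name}`;
}

async function getServerMetadata(owner: string, name: string): Promise<unknown> {
  const res = await fetch(serverUrl(owner, name));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  // The response schema is not documented here, so it is returned as unknown.
  return res.json();
}
```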

If you have feedback or need assistance with the MCP directory API, please join our Discord server.