retell_create_agent

Create a voice agent by configuring voice, language, LLM engine, and behavior settings for AI phone or chat interactions.

Instructions

Create a new voice agent with specified configuration including voice, LLM engine, and behavior settings.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| voice_id | Yes | The voice ID to use for the agent (use retell_list_voices to see available voices) | |
| response_engine | Yes | The LLM engine configuration. Use type: 'retell-llm' with llm_id, or type: 'conversation-flow' with conversation_flow_id | |
| agent_name | No | Display name for the agent | |
| language | No | Language code (e.g., 'en-US', 'es-ES', 'multi' for multilingual) | |
| voice_model | No | Text-to-speech model to use (e.g., eleven_turbo_v2, eleven_flash_v2_5, tts-1) | |
| voice_temperature | No | Voice naturalness (0-2) | 1 |
| voice_speed | No | Speech rate (0.5-2) | 1 |
| interruption_sensitivity | No | How sensitive the agent is to user interruptions (0-1) | |
| enable_backchannel | No | Enable conversational acknowledgments like 'uh-huh', 'I see' | |
| end_call_after_silence_ms | No | Milliseconds of silence before ending the call | |
| max_call_duration_ms | No | Maximum call duration in milliseconds | |
| webhook_url | No | URL for receiving call event webhooks | |
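A minimal call to this tool might pass arguments like the following. The `voice_id` and `llm_id` values are placeholders for illustration; obtain real values from retell_list_voices and your Retell LLM configuration:

```json
{
  "voice_id": "11labs-Adrian",
  "response_engine": {
    "type": "retell-llm",
    "llm_id": "llm_1234567890"
  },
  "agent_name": "Support Agent",
  "language": "en-US",
  "voice_speed": 1,
  "enable_backchannel": true
}
```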

Implementation Reference

  • Switch case in executeTool function that dispatches the retell_create_agent tool execution by calling the retellRequest helper with the Retell API endpoint for creating an agent.
    case "retell_create_agent": return retellRequest("/create-agent", "POST", args);
  • Input schema definition for the retell_create_agent tool, specifying parameters like voice_id, response_engine, agent_name, etc., used for validation.
    inputSchema: {
      type: "object",
      properties: {
        voice_id: {
          type: "string",
          description: "The voice ID to use for the agent (use retell_list_voices to see available voices)"
        },
        response_engine: {
          type: "object",
          description: "The LLM engine configuration. Use type: 'retell-llm' with llm_id, or type: 'conversation-flow' with conversation_flow_id",
          properties: {
            type: {
              type: "string",
              enum: ["retell-llm", "custom-llm", "conversation-flow"],
              description: "The type of response engine"
            },
            llm_id: { type: "string", description: "The LLM ID (for retell-llm type)" },
            conversation_flow_id: { type: "string", description: "The conversation flow ID (for conversation-flow type)" }
          },
          required: ["type"]
        },
        agent_name: { type: "string", description: "Optional: Display name for the agent" },
        language: { type: "string", description: "Language code (e.g., 'en-US', 'es-ES', 'multi' for multilingual)" },
        voice_model: {
          type: "string",
          enum: ["eleven_turbo_v2", "eleven_flash_v2", "eleven_flash_v2_5", "tts-1", "gpt-4o-mini-tts", "azure", "deepgram", "smallest-ai"],
          description: "Text-to-speech model to use"
        },
        voice_temperature: { type: "number", description: "Voice naturalness (0-2, default 1)" },
        voice_speed: { type: "number", description: "Speech rate (0.5-2, default 1)" },
        interruption_sensitivity: { type: "number", description: "How sensitive to user interruptions (0-1)" },
        enable_backchannel: { type: "boolean", description: "Enable conversational acknowledgments like 'uh-huh', 'I see'" },
        end_call_after_silence_ms: { type: "integer", description: "Milliseconds of silence before ending call" },
        max_call_duration_ms: { type: "integer", description: "Maximum call duration in milliseconds" },
        webhook_url: { type: "string", description: "URL for receiving call event webhooks" }
      },
      required: ["voice_id", "response_engine"]
    }
  • src/index.ts:430-503 (registration)
    Tool registration object in the tools array, defining name, description, and inputSchema for listing via ListToolsRequestSchema.
    name: "retell_create_agent",
    description: "Create a new voice agent with specified configuration including voice, LLM engine, and behavior settings.",
    inputSchema: { /* identical to the input schema shown above */ }
  • Generic helper function that makes authenticated HTTP requests to the Retell AI API, used by the tool handler to perform the actual agent creation.
    async function retellRequest(
      endpoint: string,
      method: string = "GET",
      body?: Record<string, unknown>
    ): Promise<unknown> {
      const apiKey = getApiKey();
      const headers: Record<string, string> = {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      };
      const options: RequestInit = { method, headers };
      if (body && method !== "GET") {
        options.body = JSON.stringify(body);
      }
      const response = await fetch(`${RETELL_API_BASE}${endpoint}`, options);
      if (!response.ok) {
        const errorText = await response.text();
        throw new Error(`Retell API error (${response.status}): ${errorText}`);
      }
      // Handle 204 No Content
      if (response.status === 204) {
        return { success: true };
      }
      return response.json();
    }
  • Helper function to retrieve the Retell API key from environment variable, used by retellRequest.
    function getApiKey(): string {
      const apiKey = process.env.RETELL_API_KEY;
      if (!apiKey) {
        throw new Error("RETELL_API_KEY environment variable is required");
      }
      return apiKey;
    }
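Putting the pieces together, a caller-side sketch of assembling the arguments this tool forwards to the Retell API might look like the following. `buildCreateAgentArgs` is a hypothetical helper, not part of the server; it simply mirrors the required fields and numeric ranges from the input schema above:

```typescript
// Discriminated union mirroring the response_engine object in the schema.
type ResponseEngine =
  | { type: "retell-llm"; llm_id: string }
  | { type: "conversation-flow"; conversation_flow_id: string };

interface CreateAgentArgs {
  voice_id: string;
  response_engine: ResponseEngine;
  agent_name?: string;
  language?: string;
  voice_temperature?: number; // 0-2, default 1
  voice_speed?: number;       // 0.5-2, default 1
}

// Hypothetical client-side validator: enforces the schema's required
// fields and ranges before the payload ever reaches the API.
function buildCreateAgentArgs(args: CreateAgentArgs): Record<string, unknown> {
  if (!args.voice_id) throw new Error("voice_id is required");
  if (!args.response_engine?.type) throw new Error("response_engine.type is required");
  const t = args.voice_temperature;
  if (t !== undefined && (t < 0 || t > 2)) {
    throw new Error("voice_temperature must be between 0 and 2");
  }
  const s = args.voice_speed;
  if (s !== undefined && (s < 0.5 || s > 2)) {
    throw new Error("voice_speed must be between 0.5 and 2");
  }
  return { ...args };
}
```

The returned object is what the handler above would pass through as `retellRequest("/create-agent", "POST", args)`.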
