ElevenLabs MCP Server

Official
by elevenlabs

get_conversation

Retrieve conversation details and the full transcript from the ElevenLabs MCP Server, for analyzing completed agent conversations.

Instructions

Gets conversation with transcript. Returns: conversation details and full transcript. Use when: analyzing completed agent conversations.

Args:
    conversation_id: The unique identifier of the conversation to retrieve; you can get IDs from the list_conversations tool.

Input Schema

Name             Required  Description                                             Default
conversation_id  Yes       The unique identifier of the conversation to retrieve  —
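
As an illustration (not taken from the official docs), a client-side call to this tool carries a single required argument; the conversation ID below is a made-up placeholder:

```python
import json

# Hypothetical arguments object a client would send when calling the
# get_conversation tool; the ID is a placeholder, obtained in practice
# from the list_conversations tool.
tool_call = {
    "name": "get_conversation",
    "arguments": {
        "conversation_id": "conv_0123456789abcdef",
    },
}

# conversation_id is the only parameter, and it is required.
assert "conversation_id" in tool_call["arguments"]
print(json.dumps(tool_call, indent=2))
```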

Implementation Reference

  • The main handler function that implements the 'get_conversation' tool logic. It retrieves conversation details and transcript from the ElevenLabs API, parses the transcript, and formats a comprehensive text response including metadata and analysis.
    def get_conversation(
        conversation_id: str,
    ) -> TextContent:
        """Get conversation details with transcript"""
        try:
            response = client.conversational_ai.conversations.get(conversation_id)
    
            # Parse transcript using utility function
            transcript, _ = parse_conversation_transcript(response.transcript)
    
            response_text = f"""Conversation Details:
    ID: {response.conversation_id}
    Status: {response.status}
    Agent ID: {response.agent_id}
    Message Count: {len(response.transcript)}
    
    Transcript:
    {transcript}"""
    
            if response.metadata:
                metadata = response.metadata
                duration = getattr(
                    metadata,
                    "call_duration_secs",
                    getattr(metadata, "duration_seconds", "N/A"),
                )
                started_at = getattr(
                    metadata, "start_time_unix_secs", getattr(metadata, "started_at", "N/A")
                )
                response_text += (
                    f"\n\nMetadata:\nDuration: {duration} seconds\nStarted: {started_at}"
                )
    
            if response.analysis:
                analysis_summary = getattr(
                    response.analysis, "summary", "Analysis available but no summary"
                )
                response_text += f"\n\nAnalysis:\n{analysis_summary}"
    
            return TextContent(type="text", text=response_text)
    
        except Exception as e:
            make_error(f"Failed to fetch conversation: {str(e)}")
            # satisfies type checker
            return TextContent(type="text", text="")
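  • The metadata block above reads the duration and start time through nested getattr fallbacks, so a single code path tolerates SDK responses that name those fields differently. A minimal sketch of that fallback pattern, using SimpleNamespace objects as stand-ins for real metadata:

```python
from types import SimpleNamespace

# Stand-in metadata objects; the attribute names mirror the two variants
# the handler checks (call_duration_secs vs. duration_seconds).
new_style = SimpleNamespace(call_duration_secs=42)
old_style = SimpleNamespace(duration_seconds=42)
neither = SimpleNamespace()

def duration_of(metadata):
    # The inner getattr supplies the default for the outer one, so the
    # lookup tries call_duration_secs, then duration_seconds, then "N/A".
    return getattr(
        metadata,
        "call_duration_secs",
        getattr(metadata, "duration_seconds", "N/A"),
    )

print(duration_of(new_style))  # 42
print(duration_of(old_style))  # 42
print(duration_of(neither))    # N/A
```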
  • Registers the 'get_conversation' tool with the MCP framework using the @mcp.tool decorator. The description provides usage information and argument documentation, serving as the implicit schema.
    @mcp.tool(
        description="""Gets conversation with transcript. Returns: conversation details and full transcript. Use when: analyzing completed agent conversations.
    
        Args:
            conversation_id: The unique identifier of the conversation to retrieve; you can get IDs from the list_conversations tool.
        """
    )
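  • The decorator attaches the function to the server under its own name, with the description string doubling as user-facing documentation. The toy registry below is not the real MCP framework; it only sketches how a decorator-with-arguments can register a tool:

```python
# Simplified stand-in for a tool registry; the actual framework does far
# more (schema inference, transport, validation). This just shows the
# decorator-with-arguments registration pattern.
TOOLS = {}

def tool(description: str):
    def register(func):
        TOOLS[func.__name__] = {"func": func, "description": description}
        return func  # leave the function usable as-is
    return register

@tool(description="Gets conversation with transcript.")
def get_conversation(conversation_id: str) -> str:
    return f"details for {conversation_id}"

print(sorted(TOOLS))  # ['get_conversation']
print(TOOLS["get_conversation"]["func"]("abc123"))
```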
  • Helper utility function used by get_conversation to parse and format the conversation transcript, handling long transcripts by saving them to a temporary file.
    def parse_conversation_transcript(transcript_entries, max_length: int = 50000):
        """
        Parse conversation transcript entries into a formatted string.
        If transcript is too long, save to temporary file and return file path.
    
        Args:
            transcript_entries: List of transcript entries from conversation response
            max_length: Maximum character length before saving to temp file
    
        Returns:
            tuple: (transcript_text_or_path, is_temp_file)
        """
        transcript_lines = []
        for entry in transcript_entries:
            speaker = getattr(entry, "role", "Unknown")
            text = getattr(entry, "message", getattr(entry, "text", ""))
            timestamp = getattr(entry, "timestamp", None)
    
            if timestamp:
                transcript_lines.append(f"[{timestamp}] {speaker}: {text}")
            else:
                transcript_lines.append(f"{speaker}: {text}")
    
        transcript = (
            "\n".join(transcript_lines) if transcript_lines else "No transcript available"
        )
    
    # Check if transcript is too long for LLM context window
    if len(transcript) > max_length:
        # Persist the transcript to a named temp file (delete=False keeps it
        # on disk after this function returns) so the caller can read it.
        with tempfile.NamedTemporaryFile(
            mode="w", suffix=".txt", delete=False, encoding="utf-8"
        ) as persistent_temp:
            persistent_temp.write(transcript)
            temp_path = persistent_temp.name
    
            return (
                f"Transcript saved to temporary file: {temp_path}\nUse the Read tool to access the full transcript.",
                True,
            )
    
        return transcript, False
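  • To see the formatting the parser produces, here is a self-contained run of the same per-entry logic (temp-file branch omitted) against hypothetical SimpleNamespace entries standing in for SDK transcript objects:

```python
from types import SimpleNamespace

# Hypothetical transcript entries; real ones come from
# client.conversational_ai.conversations.get(...).transcript.
entries = [
    SimpleNamespace(role="agent", message="Hello, how can I help?"),
    SimpleNamespace(role="user", message="What are your opening hours?"),
]

# Same per-entry formatting as parse_conversation_transcript: prefix each
# line with the speaker, and with a timestamp when one is present.
lines = []
for entry in entries:
    speaker = getattr(entry, "role", "Unknown")
    text = getattr(entry, "message", getattr(entry, "text", ""))
    timestamp = getattr(entry, "timestamp", None)
    if timestamp:
        lines.append(f"[{timestamp}] {speaker}: {text}")
    else:
        lines.append(f"{speaker}: {text}")

transcript = "\n".join(lines) if lines else "No transcript available"
print(transcript)
# agent: Hello, how can I help?
# user: What are your opening hours?
```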
