Glama

get_meeting_transcript

Retrieve meeting transcripts with essential metadata including participants, dates, and titles using a recording identifier.

Instructions

Retrieve meeting transcript with essential metadata (id, title, participants, dates).

Example: get_meeting_transcript([recording_id])

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| recording_id | Yes | The recording identifier | — |

Output Schema

No output fields are documented.

Implementation Reference

  • tools/recordings.py (handler): Core implementation of the get_meeting_transcript tool. Fetches the meeting metadata and the full transcript concurrently from the Fathom API, then combines them into a structured response with essential metadata.
    async def get_meeting_transcript(
        ctx: Context,
        recording_id: int
    ) -> dict:
        """Retrieve meeting transcript with essential metadata.
    
        Args:
            ctx: MCP context for logging
            recording_id: Numeric ID of the recording
    
        Returns:
            dict: Transcript with minimal metadata (id, title, participants, dates)
        """
        try:
            await ctx.info(f"Fetching transcript for recording {recording_id}")
    
            # Fetch meeting metadata and transcript concurrently
            meeting_task = client.get_meeting(recording_id)
            transcript_task = client.get_transcript(recording_id)
    
            meeting, transcript = await asyncio.gather(meeting_task, transcript_task)
            
            # Build transcript object with essential metadata
            result = {
                "recording_id": recording_id,
                "title": meeting.get("title"),
                "participants": meeting.get("participants", []),
                "created_at": meeting.get("created_at"),
                "scheduled_start_time": meeting.get("scheduled_start_time"),
                "scheduled_end_time": meeting.get("scheduled_end_time"),
                "transcript": transcript.get("transcript", [])
            }
    
            await ctx.info("Successfully retrieved meeting transcript")
            return result
    
        except FathomAPIError as e:
            await ctx.error(f"Fathom API error: {e.message}")
            raise  # bare raise preserves the original traceback
        except Exception as e:
            await ctx.error(f"Unexpected error fetching meeting transcript: {e}")
            raise
  • server.py:168-178 (registration)
    Registers the get_meeting_transcript tool with FastMCP using @mcp.tool decorator. Defines input schema via Pydantic Field and delegates execution to the handler in tools/recordings.py.
    @mcp.tool
    async def get_meeting_transcript(
        ctx: Context,
        recording_id: int = Field(..., description="The recording identifier")
    ) -> Dict[str, Any]:
        """Retrieve meeting transcript with essential metadata (id, title, participants, dates).
    
        Example:
            get_meeting_transcript([recording_id])
        """
        return await tools.recordings.get_meeting_transcript(ctx, recording_id)
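The concurrent-fetch pattern in the handler can be exercised in isolation. Below is a minimal, self-contained sketch using a stub client; `StubFathomClient` and its canned responses are illustrative stand-ins, not the real Fathom API wrapper:

```python
import asyncio

# Hypothetical stub standing in for the Fathom API client; the real
# client.get_meeting / client.get_transcript coroutines are awaited the same way.
class StubFathomClient:
    async def get_meeting(self, recording_id: int) -> dict:
        await asyncio.sleep(0.01)  # simulate network latency
        return {"title": "Weekly sync", "participants": ["Ana", "Ben"]}

    async def get_transcript(self, recording_id: int) -> dict:
        await asyncio.sleep(0.01)
        return {"transcript": [{"speaker": "Ana", "text": "Hello"}]}

async def fetch_combined(client: StubFathomClient, recording_id: int) -> dict:
    # Both requests run concurrently; the total wait is roughly the slower
    # of the two round trips, not their sum.
    meeting, transcript = await asyncio.gather(
        client.get_meeting(recording_id),
        client.get_transcript(recording_id),
    )
    return {
        "recording_id": recording_id,
        "title": meeting.get("title"),
        "participants": meeting.get("participants", []),
        "transcript": transcript.get("transcript", []),
    }

result = asyncio.run(fetch_combined(StubFathomClient(), 123))
print(result["title"])  # → Weekly sync
```

`asyncio.gather` returns results in argument order, which is why the tuple unpacking into `meeting, transcript` is safe in the real handler as well.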
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves a transcript and metadata, but doesn't cover critical aspects like whether this is a read-only operation, potential errors (e.g., invalid ID), rate limits, authentication needs, or what the output looks like (though an output schema exists). For a retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first in a single sentence, followed by a concise example. There's no wasted text, and it efficiently communicates the tool's intent. However, the example could be slightly more informative (e.g., clarifying it's a required parameter), keeping it from a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieval with one parameter), 100% schema coverage, and the presence of an output schema, the description is minimally adequate. It covers the basic purpose but lacks usage guidelines and behavioral details that would be helpful for an agent. The output schema reduces the need to explain return values, but other gaps (e.g., error handling, sibling differentiation) keep it from being fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'recording_id' documented as 'The recording identifier' of type integer. The description adds minimal value beyond this, only reinforcing the parameter in the example without providing additional context like format constraints or where to obtain the ID. With high schema coverage, the baseline is 3, and the description meets but doesn't exceed this.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve meeting transcript with essential metadata (id, title, participants, dates).' This specifies the verb ('retrieve'), resource ('meeting transcript'), and scope ('with essential metadata'). However, it doesn't explicitly differentiate from sibling tools like 'get_meeting_details' or 'search_meetings' in terms of what makes this tool unique for transcripts versus other meeting data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It includes an example but doesn't mention when to choose 'get_meeting_transcript' over siblings like 'get_meeting_details' (which might provide different metadata) or 'search_meetings' (which might list meetings without transcripts). There's no context on prerequisites, such as needing a valid recording_id from another tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
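Folding the reviewers' points into the tool's docstring is mostly a documentation change. The sketch below is illustrative only, not the server's actual code: the stand-in `Field` and `Context` exist just to make it self-contained, and the sibling-tool guidance is a hypothetical example of the kind of differentiation the review asks for.

```python
from typing import Any, Dict

# Stand-ins so this sketch runs on its own; the real server imports
# Context from FastMCP and Field from pydantic, and registers via @mcp.tool.
def Field(default: Any, description: str = "") -> Any:
    return default

class Context: ...

async def get_meeting_transcript(
    ctx: Context,
    recording_id: int = Field(..., description="Numeric recording ID, e.g. obtained from a listing tool"),
) -> Dict[str, Any]:
    """Retrieve the full transcript of one meeting plus essential metadata
    (id, title, participants, dates). Read-only; no side effects.

    Use this when you need the spoken content of a specific meeting.
    Prefer a metadata-only sibling tool when the transcript is not needed,
    and obtain recording_id from a listing or search tool first.

    Raises FathomAPIError for unknown IDs, auth failures, or rate limits.

    Example:
        get_meeting_transcript(recording_id=123456)
    """
    raise NotImplementedError  # the real body delegates to tools.recordings

print("Read-only" in get_meeting_transcript.__doc__)  # → True
```

A docstring like this addresses the Behavior and Usage Guidelines gaps (read-only disclosure, error modes, when to prefer a sibling tool) without adding any runtime cost.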


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/druellan/Fathom-Simple-MCP'
