
Limitless MCP Server

by 199-mcp

limitless_get_raw_transcript

Extract clean, unformatted transcripts from Limitless Pendant recordings for AI processing, preserving technical terminology and specific details exactly as spoken.

Instructions

Extract clean, unformatted transcripts optimized for AI processing. Preserves technical terminology, scientific terms, and specific details exactly as spoken without markdown formatting or summarization.

Input Schema

  • lifelog_id (optional): Specific lifelog ID to extract the transcript from. If not provided, time_expression is used.
  • time_expression (optional): Natural time expression like 'today', 'this meeting', 'past hour' (defaults to 'today').
  • format (optional, default: structured): Output format: raw_text (clean text for AI), verbatim (speaker: content), structured (detailed with context), timestamps (with time markers), speakers_only (just spoken content).
  • include_timestamps (optional, default: true): Include precise timing information.
  • include_speakers (optional, default: true): Include speaker identification and names.
  • include_context (optional, default: true): Include surrounding context and technical details.
  • preserve_technical_terms (optional, default: true): Preserve scientific, medical, and technical terminology exactly as spoken.
  • timezone (optional): IANA timezone for time calculations.
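As an illustration of how the parameters above fit together, here is a hypothetical arguments payload a client might send to this tool (the values are made up; field names and defaults follow the schema above):

```typescript
// Illustrative arguments for limitless_get_raw_transcript.
// Field names match the input schema; the values are examples only.
const exampleArgs = {
  time_expression: "past hour",
  format: "raw_text" as const,
  include_timestamps: false,
  timezone: "America/New_York",
};

// Flags left out (include_speakers, include_context,
// preserve_technical_terms) fall back to their defaults of true.
const requestBody = JSON.stringify(exampleArgs);
```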

Implementation Reference

  • Primary MCP tool handler for 'limitless_get_raw_transcript'. Fetches lifelog(s) by ID or natural time expression, extracts raw transcripts using TranscriptExtractor, handles single/multiple lifelogs, and returns formatted response with token limit handling.
    server.tool(
      "limitless_get_raw_transcript",
      "Extract clean, unformatted transcripts optimized for AI processing. Preserves technical terminology, scientific terms, and specific details exactly as spoken without markdown formatting or summarization.",
      RawTranscriptArgsSchema,
      async (args, _extra) => {
        try {
          let lifelogs: Lifelog[] = [];
          if (args.lifelog_id) {
            // Fetch a specific lifelog by ID
            const lifelog = await getLifelogById(limitlessApiKey, args.lifelog_id, {
              includeMarkdown: true,
              includeHeadings: true
            });
            lifelogs = [lifelog];
          } else {
            // Resolve a natural time expression to a concrete range
            const timeExpression = args.time_expression || 'today';
            const parser = new NaturalTimeParser({ timezone: args.timezone });
            const timeRange = parser.parseTimeExpression(timeExpression);
            // Fetch all logs in the range, following pagination cursors
            let cursor: string | undefined = undefined;
            while (true) {
              const result = await getLifelogsWithPagination(limitlessApiKey, {
                start: timeRange.start,
                end: timeRange.end,
                timezone: timeRange.timezone,
                includeMarkdown: true,
                includeHeadings: true,
                limit: MAX_API_LIMIT,
                direction: 'asc',
                cursor: cursor
              });
              lifelogs.push(...result.lifelogs);
              if (!result.pagination.nextCursor || result.lifelogs.length < MAX_API_LIMIT) {
                break;
              }
              cursor = result.pagination.nextCursor;
            }
          }
          if (lifelogs.length === 0) {
            return {
              content: [{ type: "text", text: "No lifelogs found for the specified criteria." }]
            };
          }
          const transcriptOptions: TranscriptOptions = {
            format: args.format,
            includeTimestamps: args.include_timestamps,
            includeSpeakers: args.include_speakers,
            includeContext: args.include_context,
            preserveFormatting: args.preserve_technical_terms
          };
          if (lifelogs.length === 1) {
            // Single lifelog: detailed transcript
            const transcript = TranscriptExtractor.extractRawTranscript(lifelogs[0], transcriptOptions);
            return createSafeResponse(transcript, `Detailed transcript for ${transcript.title}`);
          } else {
            // Multiple lifelogs: combined transcript
            const result = TranscriptExtractor.extractMultipleTranscripts(lifelogs, transcriptOptions);
            return createSafeResponse(result, `Combined transcript analysis (${lifelogs.length} lifelogs)`);
          }
        } catch (error) {
          const errorMessage = error instanceof Error ? error.message : String(error);
          return {
            content: [{ type: "text", text: `Error extracting transcript: ${errorMessage}` }],
            isError: true
          };
        }
      }
    );
  • Zod input schema (RawTranscriptArgsSchema) defining parameters for the tool, including lifelog_id, time_expression, format options, and flags for timestamps, speakers, context, and timezone.
    const RawTranscriptArgsSchema = {
      lifelog_id: z.string().optional()
        .describe("Specific lifelog ID to extract transcript from. If not provided, uses time_expression."),
      time_expression: z.string().optional()
        .describe("Natural time expression like 'today', 'this meeting', 'past hour' (defaults to 'today')."),
      format: z.enum(["raw_text", "verbatim", "structured", "timestamps", "speakers_only"])
        .optional().default("structured")
        .describe("Output format: raw_text (clean text for AI), verbatim (speaker: content), structured (detailed with context), timestamps (with time markers), speakers_only (just spoken content)."),
      include_timestamps: z.boolean().optional().default(true)
        .describe("Include precise timing information."),
      include_speakers: z.boolean().optional().default(true)
        .describe("Include speaker identification and names."),
      include_context: z.boolean().optional().default(true)
        .describe("Include surrounding context and technical details."),
      preserve_technical_terms: z.boolean().optional().default(true)
        .describe("Preserve scientific, medical, and technical terminology exactly as spoken."),
      timezone: z.string().optional()
        .describe("IANA timezone for time calculations."),
    };
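The `.optional().default(...)` chains mean omitted flags reach the handler already filled in. A minimal sketch of the equivalent behavior without zod (`applyDefaults` and the interface below are illustrative, not part of the server):

```typescript
// Illustrative argument shape mirroring RawTranscriptArgsSchema.
interface RawTranscriptArgs {
  lifelog_id?: string;
  time_expression?: string;
  format?: "raw_text" | "verbatim" | "structured" | "timestamps" | "speakers_only";
  include_timestamps?: boolean;
  include_speakers?: boolean;
  include_context?: boolean;
  preserve_technical_terms?: boolean;
  timezone?: string;
}

// Hypothetical stand-in for zod's .optional().default(...) behavior:
// defaults are filled first, then caller-supplied fields win.
// (Unlike zod, an explicit `undefined` would also override a default here.)
function applyDefaults(args: RawTranscriptArgs): RawTranscriptArgs {
  return {
    format: "structured",
    include_timestamps: true,
    include_speakers: true,
    include_context: true,
    preserve_technical_terms: true,
    ...args,
  };
}

const parsed = applyDefaults({ time_expression: "today", format: "verbatim" });
```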
  • Core helper method TranscriptExtractor.extractRawTranscript that processes a single Lifelog into a DetailedTranscript, extracting segments, analyzing content (technical terms, numbers, key phrases), and generating raw/formatted outputs.
    static extractRawTranscript(
      lifelog: Lifelog,
      options: TranscriptOptions = { format: "structured" }
    ): DetailedTranscript {
      const {
        format = "structured",
        includeTimestamps = true,
        includeSpeakers = true,
        includeContext = true,
        preserveFormatting = false,
        timeFormat = "absolute",
        speakerFormat = "names"
      } = options;
      if (!lifelog.contents || lifelog.contents.length === 0) {
        return this.createEmptyTranscript(lifelog);
      }
      // Extract all conversation segments with full context
      const segments = this.extractSegments(lifelog.contents, {
        includeTimestamps,
        includeSpeakers,
        includeContext,
        timeFormat,
        speakerFormat
      });
      // Analyze content for technical terms, figures, and key phrases
      const metadata = this.analyzeContent(segments);
      // Generate different format outputs
      const rawText = this.generateRawText(segments, preserveFormatting);
      const formattedTranscript = this.generateFormattedTranscript(segments, format, options);
      const totalDuration = new Date(lifelog.endTime).getTime() - new Date(lifelog.startTime).getTime();
      return {
        lifelogId: lifelog.id,
        title: lifelog.title || "Untitled Conversation",
        startTime: lifelog.startTime,
        endTime: lifelog.endTime,
        totalDuration,
        segments,
        metadata,
        rawText,
        formattedTranscript
      };
    }
  • Helper method for extracting and combining transcripts from multiple lifelogs, aggregating metadata across all transcripts.
    static extractMultipleTranscripts(
      lifelogs: Lifelog[],
      options: TranscriptOptions = { format: "structured" }
    ): {
      combinedTranscript: string;
      individualTranscripts: DetailedTranscript[];
      aggregatedMetadata: any;
    } {
      const individualTranscripts = lifelogs.map(lifelog =>
        this.extractRawTranscript(lifelog, options)
      );
      const combinedTranscript = individualTranscripts
        .map(t => t.formattedTranscript)
        .join("\n\n---\n\n");
      // Aggregate metadata across all transcripts
      const aggregatedMetadata = {
        totalLifelogs: lifelogs.length,
        totalDuration: individualTranscripts.reduce((sum, t) => sum + t.totalDuration, 0),
        totalWordCount: individualTranscripts.reduce((sum, t) => sum + t.metadata.wordCount, 0),
        uniqueSpeakersAcrossAll: Array.from(new Set(
          individualTranscripts.flatMap(t => t.metadata.uniqueSpeakers)
        )),
        allTechnicalTerms: Array.from(new Set(
          individualTranscripts.flatMap(t => t.metadata.technicalTermsFound)
        )),
        allNumbersAndFigures: Array.from(new Set(
          individualTranscripts.flatMap(t => t.metadata.numbersAndFigures)
        )),
        allKeyPhrases: Array.from(new Set(
          individualTranscripts.flatMap(t => t.metadata.keyPhrases)
        ))
      };
      return { combinedTranscript, individualTranscripts, aggregatedMetadata };
    }
  }
  • TypeScript interface TranscriptOptions defining configuration options passed from tool handler to extraction helpers.
    export interface TranscriptOptions {
      format: "raw_text" | "verbatim" | "structured" | "timestamps" | "speakers_only";
      includeTimestamps?: boolean;
      includeSpeakers?: boolean;
      includeContext?: boolean;
      preserveFormatting?: boolean;
      timeFormat?: "offset" | "absolute" | "duration";
      speakerFormat?: "names" | "identifiers" | "both";
    }
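To make the format options concrete, here is a hedged sketch of how two of them could differ in output; the `Segment` shape and `formatSegments` function are hypothetical illustrations, not the server's actual extraction code:

```typescript
// Hypothetical segment shape; the real extractor works on Lifelog contents.
interface Segment {
  speaker: string;
  text: string;
}

// Illustrative formatter: "verbatim" prefixes each line with the speaker,
// "speakers_only" emits just the spoken content.
function formatSegments(
  segments: Segment[],
  format: "verbatim" | "speakers_only"
): string {
  return segments
    .map(s => (format === "verbatim" ? `${s.speaker}: ${s.text}` : s.text))
    .join("\n");
}

const demo: Segment[] = [
  { speaker: "Alice", text: "The assay used 0.5 mM EDTA." },
  { speaker: "Bob", text: "Noted." },
];
```

With `demo`, the "verbatim" format yields speaker-attributed lines, while "speakers_only" drops the attribution but keeps technical details like "0.5 mM EDTA" untouched.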

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/199-mcp/mcp-limitless'
