
Limitless MCP Server

by 199-mcp

limitless_get_full_transcript

Retrieve complete conversation transcripts from Limitless Pendant recordings for specific dates or time ranges, handling large datasets automatically to bypass token limits.

Instructions

Fetches complete transcript data for a date/range with automatic pagination. Handles large datasets by fetching all pages internally. Use this when you need complete transcript data without worrying about token limits.

Input Schema

Name       Required  Default  Description
date       No                 Specific date (YYYY-MM-DD format)
start      No                 Start datetime (YYYY-MM-DD or YYYY-MM-DD HH:mm:SS)
end        No                 End datetime (YYYY-MM-DD or YYYY-MM-DD HH:mm:SS)
timezone   No                 IANA timezone
isStarred  No                 Filter for starred lifelogs only
format     No        summary  Output format: summary (key info only), full (all data), transcript_only (just text)
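As an illustration, here is a hypothetical argument object matching this schema (field names come from the schema above; the values are invented), along with the YYYY-MM-DD check the handler applies before calling the API:

```typescript
// Hypothetical arguments for limitless_get_full_transcript; field names match
// the input schema above, values are invented for illustration.
const args = {
  date: "2024-03-15",
  timezone: "America/New_York",
  isStarred: false,
  format: "transcript_only" as const,
};

// The handler validates `date` with this regex before calling the API.
const DATE_RE = /^\d{4}-\d{2}-\d{2}$/;
console.log(DATE_RE.test(args.date));    // true
console.log(DATE_RE.test("03/15/2024")); // false
```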

Implementation Reference

  • Registration of the 'limitless_get_full_transcript' MCP tool, including its description, Zod input schema, and the inline async handler that implements the full logic.

    server.tool(
      "limitless_get_full_transcript",
      "Fetches complete transcript data for a date/range with automatic pagination. Handles large datasets by fetching all pages internally. Use this when you need complete transcript data without worrying about token limits.",
      {
        date: z.string().optional().describe("Specific date (YYYY-MM-DD format)"),
        start: z.string().optional().describe("Start datetime (YYYY-MM-DD or YYYY-MM-DD HH:mm:SS)"),
        end: z.string().optional().describe("End datetime (YYYY-MM-DD or YYYY-MM-DD HH:mm:SS)"),
        timezone: z.string().optional().describe("IANA timezone"),
        isStarred: z.boolean().optional().describe("Filter for starred lifelogs only"),
        format: z.enum(["summary", "full", "transcript_only"]).optional().default("summary").describe("Output format: summary (key info only), full (all data), transcript_only (just text)")
      },
      async (args, _extra) => {
        try {
          // Validate date format
          if (args.date && !/^\d{4}-\d{2}-\d{2}$/.test(args.date)) {
            return { content: [{ type: "text", text: `❌ Date format error: '${args.date}' must be YYYY-MM-DD` }], isError: true };
          }

          const allLifelogs: Lifelog[] = [];
          let cursor: string | undefined = undefined;
          let pageCount = 0;

          // Fetch all pages
          while (true) {
            pageCount++;
            const result = await getLifelogsWithPagination(limitlessApiKey, {
              date: args.date,
              start: args.start,
              end: args.end,
              timezone: args.timezone,
              isStarred: args.isStarred,
              includeMarkdown: true,
              includeHeadings: true,
              limit: MAX_API_LIMIT,
              cursor: cursor,
              direction: 'asc'
            });
            allLifelogs.push(...result.lifelogs);
            if (!result.pagination.nextCursor || result.lifelogs.length < MAX_API_LIMIT) {
              break;
            }
            cursor = result.pagination.nextCursor;
          }

          if (allLifelogs.length === 0) {
            return { content: [{ type: "text", text: "No lifelogs found for the specified criteria." }] };
          }

          // Format based on requested output
          let output: any;
          let description: string;
          switch (args.format) {
            case "transcript_only":
              // Extract just the transcript text
              const transcripts = allLifelogs.map(log => ({
                id: log.id,
                date: log.startTime.split('T')[0],
                transcript: log.markdown || "[No transcript available]"
              }));
              output = transcripts;
              description = `Complete transcripts for ${allLifelogs.length} lifelogs (${pageCount} pages fetched)`;
              break;
            case "full":
              // Return all data
              output = allLifelogs;
              description = `Complete data for ${allLifelogs.length} lifelogs (${pageCount} pages fetched)`;
              break;
            case "summary":
            default:
              // Return a summary with key information
              const summaries = allLifelogs.map(log => ({
                id: log.id,
                title: log.title || "[Untitled]",
                date: log.startTime.split('T')[0],
                time: `${log.startTime.split('T')[1]?.split('.')[0]} - ${log.endTime.split('T')[1]?.split('.')[0]}`,
                duration: calculateDuration(log.startTime, log.endTime),
                starred: log.isStarred || false,
                wordCount: log.markdown ? log.markdown.split(/\s+/).length : 0
              }));
              output = {
                total: allLifelogs.length,
                pages_fetched: pageCount,
                date_range: {
                  start: allLifelogs[0].startTime,
                  end: allLifelogs[allLifelogs.length - 1].endTime
                },
                lifelogs: summaries
              };
              description = `Summary of ${allLifelogs.length} lifelogs`;
              break;
          }

          // Check size and handle appropriately
          const estimatedSize = estimateTokens(JSON.stringify(output));
          if (estimatedSize > MAX_RESPONSE_TOKENS) {
            // Too large for a single response: return a compact summary with guidance instead
            return { content: [{ type: "text", text: `⚠️ Data too large for single response (${Math.ceil(estimatedSize / 1000)}k tokens)

Successfully fetched ${allLifelogs.length} lifelogs across ${pageCount} pages.

To access this data:
1. Use the regular list tools with cursor pagination
2. Request specific lifelogs by ID
3. Use the search or analytics tools for specific queries

Summary:
- Total lifelogs: ${allLifelogs.length}
- Date range: ${allLifelogs[0].startTime.split('T')[0]} to ${allLifelogs[allLifelogs.length - 1].endTime.split('T')[0]}
- Total word count: ${allLifelogs.reduce((sum, log) => sum + (log.markdown ? log.markdown.split(/\s+/).length : 0), 0).toLocaleString()}
- Starred items: ${allLifelogs.filter(log => log.isStarred).length}

First few IDs for reference:
${allLifelogs.slice(0, 5).map(log => `- ${log.id}: ${log.title || '[Untitled]'}`).join('\n')}` }] };
          }

          return { content: [{ type: "text", text: `${description}:\n\n${JSON.stringify(output, null, 2)}` }] };
        } catch (error) {
          return handleToolApiCall(() => Promise.reject(error));
        }
      }
    );
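The page loop above stops when the API returns no nextCursor or when a page comes back short. That stop condition can be sketched in isolation (MAX_API_LIMIT's actual value is not shown on this page; 10 here is only an assumption):

```typescript
// Minimal sketch of the page loop's stop condition. MAX_API_LIMIT's real
// value is not shown in the snippet above; 10 is an assumed placeholder.
const MAX_API_LIMIT = 10;

function shouldStopPaging(nextCursor: string | undefined, pageSize: number): boolean {
  // Stop when the API reports no further cursor, or the page came back short.
  return !nextCursor || pageSize < MAX_API_LIMIT;
}

console.log(shouldStopPaging(undefined, 10)); // true  (no more pages)
console.log(shouldStopPaging("abc123", 10));  // false (full page, keep fetching)
console.log(shouldStopPaging("abc123", 3));   // true  (short page ends the loop)
```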
  • Core handler logic (shown inline above): loops to fetch ALL paginated lifelogs via getLifelogsWithPagination, aggregates them, formats the output as summary/full/transcript_only, calculates durations, and falls back to a warning summary when the result would exceed the response token limit.
  • Zod input schema (shown inline above): defines the tool's parameters, namely the date/range filters, timezone, starred-only filter, and output format.
  • Utility function that formats the duration between start/end ISO timestamps into a human-readable 'Xh Ym' or 'Zm' string, used in the summary output.

    function calculateDuration(start: string, end: string): string {
      const startTime = new Date(start).getTime();
      const endTime = new Date(end).getTime();
      const durationMs = endTime - startTime;
      const minutes = Math.floor(durationMs / 60000);
      const hours = Math.floor(minutes / 60);
      const mins = minutes % 60;
      return hours > 0 ? `${hours}h ${mins}m` : `${minutes}m`;
    }
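For reference, a quick usage sketch of this helper (the function body is copied verbatim from the snippet above so the example is self-contained):

```typescript
// Copied from the snippet above so this example runs standalone.
function calculateDuration(start: string, end: string): string {
  const startTime = new Date(start).getTime();
  const endTime = new Date(end).getTime();
  const durationMs = endTime - startTime;
  const minutes = Math.floor(durationMs / 60000);
  const hours = Math.floor(minutes / 60);
  const mins = minutes % 60;
  return hours > 0 ? `${hours}h ${mins}m` : `${minutes}m`;
}

console.log(calculateDuration("2024-03-15T09:00:00Z", "2024-03-15T10:25:00Z")); // "1h 25m"
console.log(calculateDuration("2024-03-15T09:00:00Z", "2024-03-15T09:45:00Z")); // "45m"
```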
  • Key helper called by the handler to fetch a single page of lifelogs, with pagination metadata, from the Limitless API. The handler loops over this to assemble the full dataset.

    export async function getLifelogsWithPagination(
      apiKey: string,
      options: LifelogParams = {}
    ): Promise<LifelogsWithPagination> {
      const defaultTimezone = getDefaultTimezone();
      const batchSize = options.limit || 10; // Use requested limit as batch size
      const params: Record<string, string | number | boolean | undefined> = {
        limit: batchSize,
        includeMarkdown: options.includeMarkdown ?? true,
        includeHeadings: options.includeHeadings ?? true,
        date: options.date,
        start: options.start,
        end: options.end,
        direction: options.direction ?? 'desc',
        timezone: options.timezone ?? defaultTimezone,
        cursor: options.cursor,
        isStarred: options.isStarred,
      };
      // Clean up undefined values
      Object.keys(params).forEach(key => {
        if (params[key] === undefined) delete params[key];
      });
      const response = await makeApiRequest<LifelogsResponse>(apiKey, "v1/lifelogs", params);
      const lifelogs = response.data?.lifelogs ?? [];
      const nextCursor = response.meta?.lifelogs?.nextCursor;
      const count = response.meta?.lifelogs?.count ?? lifelogs.length;
      return { lifelogs, pagination: { nextCursor, hasMore: !!nextCursor, count } };
    }
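The handler also calls estimateTokens and compares against MAX_RESPONSE_TOKENS, neither of which is shown on this page. A plausible sketch, assuming the common ~4-characters-per-token heuristic (this is NOT the server's actual implementation):

```typescript
// ASSUMPTION: estimateTokens is referenced by the handler but not shown above.
// This sketch uses the rough ~4-characters-per-token heuristic; the threshold
// value below is likewise an invented placeholder, not from the source.
const MAX_RESPONSE_TOKENS = 20000; // assumed threshold

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("abcdefgh")); // 2
console.log(estimateTokens(""));         // 0
```

Any chars/4 estimate is approximate, which is consistent with the handler only using it as a guard before emitting a large JSON payload.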
