
Limitless MCP Server

by 199-mcp

limitless_list_recent_lifelogs

Retrieve recent lifelog recordings with pagination support for analysis, summarization, and action item extraction from conversation data.

Instructions

Lists the most recent logs/recordings (sorted newest first). Returns paginated results to avoid token limits. Use cursor to fetch next page. Best for getting raw log data which you can then analyze for summaries, action items, topics, etc.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Number of recent lifelogs to retrieve (max 10 per API constraint). Use cursor for more. | 10 |
| cursor | No | Pagination cursor from previous response. Use to fetch the next page of results. | |
| timezone | No | IANA timezone for date/time parameters (defaults to the server's local timezone). | |
| includeMarkdown | No | Include markdown content in the response. | |
| includeHeadings | No | Include headings content in the response. | |
| isStarred | No | Filter for starred lifelogs only. | |
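
Pagination is driven entirely by the cursor value echoed back in each response. The following is a minimal sketch of a client-side call sequence using the MCP TypeScript SDK; the `npx mcp-limitless` launch command is an assumption based on this page's package slug, and the server still needs a Limitless API key configured in its environment.

```typescript
// Sketch only: the launch command is assumed from the package slug, not confirmed by this page.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
  await client.connect(
    new StdioClientTransport({ command: "npx", args: ["-y", "mcp-limitless"] })
  );

  // First page: at most 10 lifelogs per the API constraint.
  const firstPage = await client.callTool({
    name: "limitless_list_recent_lifelogs",
    arguments: { limit: 10, includeMarkdown: false },
  });
  console.log(firstPage.content);

  // Next page: pass the cursor value reported in the previous response.
  const nextPage = await client.callTool({
    name: "limitless_list_recent_lifelogs",
    arguments: { limit: 10, cursor: "<nextCursor from previous response>" },
  });
  console.log(nextPage.content);

  await client.close();
}

main().catch(console.error);
```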

Implementation Reference

  • Main handler function: validates arguments, calls getLifelogsWithPagination with direction 'desc' to fetch the most recent lifelogs, and handles truncation and pagination info via createSafeResponse. (The estimateTokens and validateApiConstraints helpers it references are sketched after this list.)

```typescript
async (args, _extra) => {
  try {
    // Reject arguments that violate Limitless API constraints (e.g. limit > 10).
    const validation = validateApiConstraints(args);
    if (!validation.valid) {
      return { content: [{ type: "text", text: validation.error! }], isError: true };
    }

    // Fetch the newest lifelogs first, honoring pagination options.
    const result = await getLifelogsWithPagination(limitlessApiKey, {
      limit: args.limit || MAX_API_LIMIT,
      timezone: args.timezone,
      includeMarkdown: args.includeMarkdown,
      includeHeadings: args.includeHeadings,
      direction: 'desc',
      isStarred: args.isStarred,
      cursor: args.cursor
    });

    // If the payload is close to the token budget, return only the first half of the items.
    const estimatedSize = estimateTokens(JSON.stringify(result.lifelogs));
    if (estimatedSize > MAX_RESPONSE_TOKENS * 0.8 && result.lifelogs.length > 1) {
      const partialCount = Math.max(1, Math.floor(result.lifelogs.length * 0.5));
      const partialLifelogs = result.lifelogs.slice(0, partialCount);
      return createSafeResponse(
        partialLifelogs,
        `Found ${result.lifelogs.length} recent lifelogs, showing first ${partialCount} to avoid token limits`,
        { nextCursor: result.pagination.nextCursor, hasMore: true, totalFetched: partialCount }
      );
    }

    return createSafeResponse(
      result.lifelogs,
      `Found ${result.lifelogs.length} recent lifelogs`,
      { nextCursor: result.pagination.nextCursor, hasMore: result.pagination.hasMore, totalFetched: result.lifelogs.length }
    );
  } catch (error) {
    return handleToolApiCall(() => Promise.reject(error));
  }
}
```
  • src/server.ts:549-598 (registration)
    MCP tool registration via server.tool, passing the tool name, description, input schema, and the handler callback shown above.

```typescript
server.tool(
  "limitless_list_recent_lifelogs",
  "Lists the most recent logs/recordings (sorted newest first). Returns paginated results to avoid token limits. Use cursor to fetch next page. Best for getting raw log data which you can then analyze for summaries, action items, topics, etc.",
  ListRecentArgsSchema,
  async (args, _extra) => {
    // ... handler body as shown above ...
  }
);
```
  • Input schema (Zod) for the tool parameters: limit, cursor, timezone, includeMarkdown, includeHeadings, isStarred. (The shared CommonListArgsSchema fields it references are sketched after this list.)

```typescript
const ListRecentArgsSchema = {
  limit: z.number().int().positive().max(MAX_API_LIMIT).optional().default(MAX_API_LIMIT)
    .describe(`Number of recent lifelogs to retrieve (Max: ${MAX_API_LIMIT} per API constraint). Use cursor for more.`),
  cursor: z.string().optional()
    .describe("Pagination cursor from previous response. Use to fetch next page of results."),
  timezone: CommonListArgsSchema.timezone,
  includeMarkdown: CommonListArgsSchema.includeMarkdown,
  includeHeadings: CommonListArgsSchema.includeHeadings,
  isStarred: CommonListArgsSchema.isStarred,
};
```
  • Core helper function called by the handler: fetches one page of lifelogs from the Limitless API and returns it together with a pagination cursor. (A usage sketch of this pagination contract follows this list.)

```typescript
export async function getLifelogsWithPagination(
  apiKey: string,
  options: LifelogParams = {}
): Promise<LifelogsWithPagination> {
  const defaultTimezone = getDefaultTimezone();
  const batchSize = options.limit || 10; // Use requested limit as batch size

  const params: Record<string, string | number | boolean | undefined> = {
    limit: batchSize,
    includeMarkdown: options.includeMarkdown ?? true,
    includeHeadings: options.includeHeadings ?? true,
    date: options.date,
    start: options.start,
    end: options.end,
    direction: options.direction ?? 'desc',
    timezone: options.timezone ?? defaultTimezone,
    cursor: options.cursor,
    isStarred: options.isStarred,
  };

  // Clean up undefined values
  Object.keys(params).forEach(key => {
    if (params[key] === undefined) delete params[key];
  });

  const response = await makeApiRequest<LifelogsResponse>(apiKey, "v1/lifelogs", params);
  const lifelogs = response.data?.lifelogs ?? [];
  const nextCursor = response.meta?.lifelogs?.nextCursor;
  const count = response.meta?.lifelogs?.count ?? lifelogs.length;

  return {
    lifelogs,
    pagination: { nextCursor, hasMore: !!nextCursor, count }
  };
}
```
  • Helper used by the handler to format the response, truncate it when token limits would be exceeded, and append pagination guidance. (The truncateResponse helper it delegates to is sketched after this list.)

```typescript
function createSafeResponse(
  data: any,
  description: string = "Result",
  paginationInfo?: { nextCursor?: string; hasMore?: boolean; totalFetched?: number }
): CallToolResult {
  // First check if we need to truncate the data array itself
  let processedData = data;
  let wasArrayTruncated = false;

  if (Array.isArray(data)) {
    const fullEstimate = estimateTokens(JSON.stringify(data, null, 2));
    if (fullEstimate > MAX_RESPONSE_TOKENS * 0.8) {
      // Calculate how many items we can safely include
      const sampleItem = data[0] ? JSON.stringify(data[0], null, 2) : "{}";
      const itemTokens = estimateTokens(sampleItem);
      const headerTokens = 2000; // Reserve for description and metadata
      const maxItems = Math.max(1, Math.floor((MAX_RESPONSE_TOKENS - headerTokens) / itemTokens));

      if (data.length > maxItems) {
        processedData = data.slice(0, maxItems);
        wasArrayTruncated = true;
        // Update description to reflect truncation
        description = `${description} (Showing first ${maxItems} of ${data.length} items due to size limits)`;
        // Force pagination info
        if (!paginationInfo) {
          paginationInfo = { hasMore: true };
        } else {
          paginationInfo.hasMore = true;
        }
      }
    }
  }

  const { content, truncated, tokenCount } = truncateResponse(processedData);
  let resultText = `${description}:\n\n${JSON.stringify(content, null, 2)}`;

  if (paginationInfo?.nextCursor) {
    resultText += `\n\n📄 **Pagination Available**: Use cursor="${paginationInfo.nextCursor}" to fetch next page.`;
    if (paginationInfo.totalFetched) {
      resultText += ` (Showing ${Array.isArray(processedData) ? processedData.length : 'N/A'} items)`;
    }
  } else if (paginationInfo?.hasMore || wasArrayTruncated) {
    resultText += `\n\n⚠️ **More Data Available**: Use smaller limit or cursor pagination to see additional results.`;
  }

  if (truncated || wasArrayTruncated) {
    const originalTokens = estimateTokens(JSON.stringify(data, null, 2));
    resultText += `\n\n⚠️ **Response Truncated**: Reduced from ~${Math.ceil(originalTokens / 1000)}k to ~${Math.ceil(tokenCount / 1000)}k tokens. Use smaller limit, cursor pagination, or more specific queries.`;
  }

  return { content: [{ type: "text", text: resultText }] };
}
```
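
The handler above references a few utilities that are not reproduced on this page. Below is a minimal sketch of what `estimateTokens` and `validateApiConstraints` might look like, assuming a rough 4-characters-per-token heuristic and the 10-item API cap from the input schema; the actual definitions in src/server.ts may differ.

```typescript
// Assumed constants and helpers; the real definitions live elsewhere in the server source.
const MAX_API_LIMIT = 10;          // per-request cap documented in the input schema
const MAX_RESPONSE_TOKENS = 25000; // illustrative budget; the actual value is not shown on this page

// Rough token estimate: ~4 characters per token is a common heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Validate user-supplied arguments against the Limitless API constraints.
function validateApiConstraints(args: { limit?: number }): { valid: boolean; error?: string } {
  if (args.limit !== undefined && args.limit > MAX_API_LIMIT) {
    return {
      valid: false,
      error: `limit must be ${MAX_API_LIMIT} or less; use the cursor parameter to page through more results.`,
    };
  }
  return { valid: true };
}
```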
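
ListRecentArgsSchema reuses fields from a shared CommonListArgsSchema that is not shown here. A plausible sketch, assuming the shared fields simply mirror the parameter descriptions in the input table:

```typescript
import { z } from "zod";

// Assumed shape of the shared Zod fields; descriptions mirror the input schema table above.
const CommonListArgsSchema = {
  timezone: z.string().optional()
    .describe("IANA timezone for date/time parameters (defaults to server's local timezone)."),
  includeMarkdown: z.boolean().optional()
    .describe("Include markdown content in the response."),
  includeHeadings: z.boolean().optional()
    .describe("Include headings content in the response."),
  isStarred: z.boolean().optional()
    .describe("Filter for starred lifelogs only."),
};
```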
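
For context on the pagination contract returned by getLifelogsWithPagination, here is a small usage sketch that walks several pages by feeding nextCursor back into the next call. The option and field names are taken directly from the function above; collectRecentLifelogs itself is hypothetical.

```typescript
// Hypothetical convenience wrapper: walk up to `maxPages` pages of recent lifelogs
// by feeding each response's nextCursor back into the next request.
async function collectRecentLifelogs(apiKey: string, maxPages = 3) {
  const all: unknown[] = [];
  let cursor: string | undefined;

  for (let page = 0; page < maxPages; page++) {
    const { lifelogs, pagination } = await getLifelogsWithPagination(apiKey, {
      limit: 10,          // API cap per request
      direction: 'desc',  // newest first
      cursor,
    });
    all.push(...lifelogs);

    if (!pagination.hasMore || !pagination.nextCursor) break;
    cursor = pagination.nextCursor;
  }
  return all;
}
```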
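
Finally, createSafeResponse delegates raw truncation to a truncateResponse helper that is also not reproduced here. A sketch of the contract it appears to satisfy, reusing the assumed estimateTokens heuristic from above; the real implementation may trim the payload differently.

```typescript
// Assumed contract for truncateResponse: return the (possibly shortened) payload,
// whether truncation happened, and the resulting token estimate.
function truncateResponse(data: unknown): { content: unknown; truncated: boolean; tokenCount: number } {
  const serialized = JSON.stringify(data, null, 2);
  const tokens = estimateTokens(serialized);
  if (tokens <= MAX_RESPONSE_TOKENS) {
    return { content: data, truncated: false, tokenCount: tokens };
  }
  // Crude fallback: clip the serialized JSON to fit the budget and return it as a string.
  const maxChars = MAX_RESPONSE_TOKENS * 4; // inverse of the ~4 chars/token heuristic
  const clipped = serialized.slice(0, maxChars) + "\n… [truncated]";
  return { content: clipped, truncated: true, tokenCount: estimateTokens(clipped) };
}
```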
