
generate-report

Generates a comprehensive, structured research report for an existing research session, synthesizing the session's web-research findings into a cited report with a bibliography.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sessionId | Yes | ID of the research session to generate a report for | — |
| timeout | No | Maximum time in milliseconds to wait before report generation is aborted | 60000 |

Input Schema (JSON Schema)

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "additionalProperties": false,
  "type": "object",
  "required": ["sessionId"],
  "properties": {
    "sessionId": { "type": "string" },
    "timeout": { "type": "number", "default": 60000 }
  }
}
```

Implementation Reference

  • src/index.ts:161-221 (registration)
    Full registration of the `generate-report` tool: the input schema and a handler that races `generateResearchReport` against a configurable timeout, falling back to a partial report when findings exist but an error occurs.

```typescript
server.tool(
  'generate-report',
  {
    sessionId: z.string(),
    timeout: z.number().optional().default(60000)
  },
  async ({ sessionId, timeout }) => {
    // Create a promise that rejects after the timeout
    const timeoutPromise = new Promise((_, reject) => {
      setTimeout(() => reject(new Error('Report generation timed out')), timeout);
    });

    try {
      // Race the report generation against the timeout
      const report = await Promise.race([
        generateResearchReport(sessionId),
        timeoutPromise
      ]) as ResearchReport;

      return {
        content: [{ type: 'text', text: report.report }]
      };
    } catch (error) {
      console.error('Error generating research report:', error);

      // Get the current state, even if there was an error
      const currentState = getResearchState(sessionId);

      // If we have a valid state, try to generate a basic report from what we have
      if (currentState && currentState.findings.length > 0) {
        return {
          content: [{
            type: 'text',
            text: `# Research Report (Error Recovery)\n\n` +
              `**Original Query:** ${currentState.query}\n\n` +
              `**Note:** This is a partial report generated after an error occurred: ${error instanceof Error ? error.message : String(error)}\n\n` +
              `## Summary of Findings\n\n` +
              `The research process collected ${currentState.findings.length} sets of findings ` +
              `across ${currentState.topics.length} topics but encountered an error during the final report generation.\n\n` +
              `### Topics Researched\n\n` +
              currentState.topics.map((topic, index) => `${index + 1}. ${topic}`).join('\n')
          }]
        };
      }

      return {
        content: [{
          type: 'text',
          text: JSON.stringify({
            message: `Error generating research report: ${error instanceof Error ? error.message : String(error)}`,
            error: true
          }, null, 2)
        }],
        isError: true
      };
    }
  }
);
```
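The handler's timeout guard can be distilled into a small generic helper. This is a minimal sketch, not code from the repository; `withTimeout` is a hypothetical name:

```typescript
// Race an arbitrary promise against a rejection timer, mirroring the
// Promise.race pattern in the handler above. (Hypothetical helper.)
function withTimeout<T>(work: Promise<T>, ms: number, message: string): Promise<T> {
  const timeout = new Promise<never>((_, reject) => {
    setTimeout(() => reject(new Error(message)), ms);
  });
  // Whichever settles first wins; the loser's eventual outcome is ignored.
  return Promise.race([work, timeout]);
}
```

With such a helper, the handler body would reduce to something like `await withTimeout(generateResearchReport(sessionId), timeout, 'Report generation timed out')`.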
  • Helper function `generateResearchReport`, which fetches the research state for the session and calls the OpenAI `generateReport` service to produce the final structured report.

```typescript
export async function generateResearchReport(sessionId: string): Promise<ResearchReport> {
  const researchState = researchSessions.get(sessionId);
  if (!researchState) {
    throw new Error(`No research session found with ID: ${sessionId}`);
  }

  const report = await generateReport(researchState.query, researchState.findings);

  return {
    query: researchState.query,
    findings: researchState.findings,
    topics: researchState.topics,
    report
  };
}
```
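The lookup-or-throw guard at the top of the helper can be exercised in isolation. In this sketch, `sessions` and `getSessionOrThrow` are hypothetical stand-ins for the module's `researchSessions` map and its guard, not names from the source:

```typescript
// Hypothetical stand-in for the module's in-memory researchSessions map.
const sessions = new Map<string, { query: string; findings: string[]; topics: string[] }>();

// Mirrors the guard in generateResearchReport: an unknown session id fails
// fast with a descriptive error instead of silently returning undefined.
function getSessionOrThrow(id: string) {
  const state = sessions.get(id);
  if (!state) {
    throw new Error(`No research session found with ID: ${id}`);
  }
  return state;
}
```

Because the handler catches this error, a bad `sessionId` surfaces to the client as the JSON error payload shown in the registration above rather than as a crash.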
  • Core implementation logic in `generateReport`: parses findings into structured sources, optimizes content to fit token limits via truncation and metadata extraction, and prompts GPT-4-turbo to synthesize a structured research report with citations and a bibliography.

```typescript
export async function generateReport(
  query: string,
  findings: string[]
): Promise<string> {
  try {
    // Extract all sources and their content into a structured format
    interface SourceContent {
      url: string;
      title: string;
      content: string;
      sourceNum: number;
      searchQuery: string;
    }

    // Track all sources and their content
    const allSources: SourceContent[] = [];
    const sourceUrlMap: Map<string, number> = new Map(); // URL to source number mapping
    let globalSourceCounter = 0;

    // Process each finding to extract structured content
    findings.forEach((finding, findingIndex) => {
      // Extract search query
      const searchQueryMatch = finding.match(/# Search Results for: (.*?)(\n|$)/);
      const searchQuery = searchQueryMatch ? searchQueryMatch[1] : `Finding ${findingIndex + 1}`;

      // Process each source in the finding
      let isInContent = false;
      let contentBuffer: string[] = [];
      let currentUrl = '';
      let currentTitle = '';
      let currentSourceNum = 0;

      // Split the finding into lines for processing
      finding.split('\n').forEach(line => {
        // Source header pattern: ## Source [1]: Title
        const sourceMatch = line.match(/## Source \[(\d+)\]: (.*?)$/);
        if (sourceMatch) {
          currentSourceNum = parseInt(sourceMatch[1]);
          currentTitle = sourceMatch[2];
          isInContent = false;

          // If we were processing a previous source, finalize it
          if (contentBuffer.length > 0 && currentUrl) {
            // Avoid duplicating content from the same URL
            if (!sourceUrlMap.has(currentUrl)) {
              globalSourceCounter++;
              sourceUrlMap.set(currentUrl, globalSourceCounter);
              allSources.push({
                url: currentUrl,
                title: currentTitle,
                content: contentBuffer.join('\n'),
                sourceNum: globalSourceCounter,
                searchQuery
              });
            }
            contentBuffer = [];
            currentUrl = '';
          }
        }
        // URL pattern: URL: https://...
        else if (line.startsWith('URL: ')) {
          currentUrl = line.substring(5).trim();
        }
        // Content header pattern: ### Content from Source [1]:
        else if (line.match(/### Content from Source \[\d+\]:/)) {
          isInContent = true;
          contentBuffer = [];
        }
        // End of source content (next source starts or end of finding)
        else if (isInContent && (line.startsWith('## Source') || line.startsWith('# Source URLs'))) {
          isInContent = false;

          // Finalize the current source
          if (contentBuffer.length > 0 && currentUrl) {
            // Avoid duplicating content from the same URL
            if (!sourceUrlMap.has(currentUrl)) {
              globalSourceCounter++;
              sourceUrlMap.set(currentUrl, globalSourceCounter);
              allSources.push({
                url: currentUrl,
                title: currentTitle,
                content: contentBuffer.join('\n'),
                sourceNum: globalSourceCounter,
                searchQuery
              });
            }
            contentBuffer = [];
            currentUrl = '';
          }
          // No continue or break needed - just let it naturally move to the next line
        } else if (isInContent) {
          contentBuffer.push(line);
        }
      });
    });

    console.error(`Extracted ${allSources.length} sources from ${findings.length} findings`);

    // More aggressive content optimization
    // 1. Set a much lower character limit for content
    const MAX_CONTENT_LENGTH = 40000; // Reduced from 60000 to 40000 characters
    let totalContentLength = 0;

    // 2. Calculate total content length
    allSources.forEach(source => {
      totalContentLength += source.content.length;
    });

    // 3. Group sources by search query
    const sourcesByQuery = new Map<string, SourceContent[]>();
    allSources.forEach(source => {
      if (!sourcesByQuery.has(source.searchQuery)) {
        sourcesByQuery.set(source.searchQuery, []);
      }
      sourcesByQuery.get(source.searchQuery)?.push(source);
    });

    // 4. If content is too large, trim it intelligently
    let optimizedContent = '';
    if (totalContentLength > MAX_CONTENT_LENGTH) {
      console.error(`Content exceeds token limit (${totalContentLength} characters), optimizing...`);

      // 5. Instead of proportional allocation, use a more aggressive summarization approach
      // Create a structured bibliography with minimal content
      optimizedContent = '# BIBLIOGRAPHY\n\n';

      // First pass: Add only metadata for each source
      sourcesByQuery.forEach((sources, query) => {
        optimizedContent += `## Search Query: ${query}\n\n`;
        sources.forEach(source => {
          // Just add metadata and URL for each source, no content
          optimizedContent += `[${source.sourceNum}] "${source.title}"\n`;
          optimizedContent += `URL: ${source.url}\n\n`;
        });
      });

      // Second pass: Add abbreviated content for each source until we reach the limit
      let currentLength = optimizedContent.length;
      const remainingLength = MAX_CONTENT_LENGTH - currentLength;

      // Calculate how many characters we can allocate per source
      const maxCharsPerSource = Math.floor(remainingLength / allSources.length);

      // Add additional section for content excerpts
      optimizedContent += '# CONTENT EXCERPTS\n\n';

      // Add abbreviated content for each source
      allSources.forEach(source => {
        // Truncate the content to the allocated size
        const excerpt = source.content.length > maxCharsPerSource
          ? source.content.substring(0, maxCharsPerSource) + '...'
          : source.content;
        optimizedContent += `## [${source.sourceNum}] ${source.title}\n\n`;
        optimizedContent += `${excerpt}\n\n`;
      });
    } else {
      // If content is within limits, use the original approach
      sourcesByQuery.forEach((sources, query) => {
        optimizedContent += `## Search Query: ${query}\n\n`;
        sources.forEach(source => {
          optimizedContent += `### [${source.sourceNum}] ${source.title}\n`;
          optimizedContent += `URL: ${source.url}\n\n`;
          optimizedContent += `${source.content.trim()}\n\n`;
        });
      });
    }

    // Now generate the report with the optimized content
    console.error(`Generating report with optimized content (${optimizedContent.length} characters)`);

    // More optimized prompt with fewer instructions
    const response = await openai.chat.completions.create({
      model: 'gpt-4-turbo',
      messages: [
        {
          role: 'system',
          content: `Generate a concise research report on "${query}" using the provided sources.

Format:
- Executive Summary (2-3 paragraphs)
- Introduction
- Main Findings (organized by themes)
- Conclusion
- Bibliography

Cite sources using [X] format. Focus on key insights rather than exhaustive detail.`
        },
        {
          role: 'user',
          content: `Research report on "${query}" based on the following: ${optimizedContent}`
        }
      ],
      temperature: 0.5, // Lower temperature for more focused output
      max_tokens: 4000
    });

    if (!response.choices[0]?.message?.content) {
      throw new Error("No response content from OpenAI API");
    }

    return response.choices[0].message.content;
  } catch (error) {
    console.error("[OpenAI Service] Error generating report:", error);
    throw error;
  }
}
```
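The excerpt-truncation step divides the remaining character budget evenly across sources. A minimal standalone sketch of that allocation (the `allocateExcerpts` helper name is an assumption, not in the source):

```typescript
// Split a character budget evenly across source contents, truncating any
// content that exceeds its per-source share, as in the second pass above.
function allocateExcerpts(contents: string[], remaining: number): string[] {
  const perSource = Math.floor(remaining / contents.length);
  return contents.map(c =>
    c.length > perSource ? c.substring(0, perSource) + '...' : c
  );
}

// Example: a 10-character budget over two sources gives 5 chars each, so a
// 10-char source is cut to "aaaaa..." while a 2-char source passes through.
const excerpts = allocateExcerpts(["aaaaaaaaaa", "bb"], 10);
// → ["aaaaa...", "bb"]
```

Note that the even split means long, information-dense sources are truncated as aggressively as short ones; the bibliography pass compensates by preserving every source's title and URL regardless of budget.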
  • TypeScript interface defining the structure of the `ResearchReport` returned by the tool's handler.

```typescript
export interface ResearchReport {
  query: string;
  findings: string[];
  topics: string[];
  report: string;
}
```
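For illustration, a value of this shape might look as follows. The data below is invented for the example; in the real handler, only the `report` string reaches the MCP client:

```typescript
// Local copy of the interface so the example is self-contained.
interface ResearchReport {
  query: string;
  findings: string[];
  topics: string[];
  report: string;
}

// Hypothetical sample value: findings hold raw per-search markdown, topics
// list the researched sub-areas, and report is the synthesized output.
const sample: ResearchReport = {
  query: "impact of Rust on systems programming",
  findings: ["# Search Results for: rust adoption\n..."],
  topics: ["memory safety", "tooling"],
  report: "# Research Report\n\n..."
};
```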
