
deep_researcher_start

Initiate a detailed AI-powered research task to analyze and synthesize complex queries. The tool performs extensive web searches, evaluates information, and generates a comprehensive research report. Use with deep_researcher_check to monitor progress and retrieve findings.

Instructions

Start a comprehensive AI-powered deep research task for complex queries. This tool initiates an intelligent agent that performs extensive web searches, crawls relevant pages, analyzes information, and synthesizes findings into a detailed research report. The agent thinks critically about the research topic and provides thorough, well-sourced answers. Use this for complex research questions that require in-depth analysis rather than simple searches. After starting a research task, IMMEDIATELY use deep_researcher_check with the returned task ID to monitor progress and retrieve results.
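
For orientation, here is a minimal client-side sketch of that flow, assuming the MCP TypeScript SDK and a locally spawned server process. The command name, research topic, and the task_id parameter passed to deep_researcher_check are illustrative assumptions, not taken from the server's documentation:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Connect to the server over stdio; the command name is an assumption.
    const transport = new StdioClientTransport({ command: "exa-mcp" });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Start a research task with arguments matching the input schema below.
    const started = await client.callTool({
      name: "deep_researcher_start",
      arguments: {
        instructions: "Survey recent progress in solid-state battery manufacturing",
        model: "exa-research"
      }
    });

    // The handler returns JSON text containing a task_id; parse it and poll.
    const { task_id } = JSON.parse((started.content as any[])[0].text);
    const progress = await client.callTool({
      name: "deep_researcher_check",
      arguments: { task_id } // parameter name assumed; check that tool's own schema
    });
    console.log(progress);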

Input Schema

instructions (required): Complex research question or detailed instructions for the AI researcher. Be specific about what you want to research and any particular aspects you want covered.
model (optional, default: exa-research): Research model: 'exa-research' (faster, 15-45s, good for most queries) or 'exa-research-pro' (more comprehensive, 45s-2min, for complex topics).

Input Schema (JSON Schema)

{ "$schema": "http://json-schema.org/draft-07/schema#", "additionalProperties": false, "properties": { "instructions": { "description": "Complex research question or detailed instructions for the AI researcher. Be specific about what you want to research and any particular aspects you want covered.", "type": "string" }, "model": { "description": "Research model: 'exa-research' (faster, 15-45s, good for most queries) or 'exa-research-pro' (more comprehensive, 45s-2min, for complex topics). Default: exa-research", "enum": [ "exa-research", "exa-research-pro" ], "type": "string" } }, "required": [ "instructions" ], "type": "object" }

Implementation Reference

  • MCP tool handler for deep_researcher_start: parses arguments using schema and calls ExaClient.startDeepResearch
    case 'deep_researcher_start': {
      const params = deepResearchStartSchema.parse(args);
      const result = await client.startDeepResearch(params);
      return {
        content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
      };
    }
  • Zod input schema validation for deep_researcher_start tool parameters
    // Deep Research Start Tool Schema
    const deepResearchStartSchema = z.object({
      topic: z.string().describe("Research topic or question"),
      research_type: z.enum(['comprehensive', 'news', 'academic', 'market'])
        .optional()
        .default('comprehensive')
        .describe("Type of research to perform"),
      max_results: z.number().optional().default(50).describe("Maximum total results to collect"),
      time_range: z.enum(['week', 'month', 'quarter', 'year', 'all'])
        .optional()
        .describe("Time range for research")
    });
  • Core implementation of deep_researcher_start: creates an asynchronous task, stores metadata globally, and kicks off background research via performDeepResearch (a usage sketch follows this list)
    async startDeepResearch(params: {
      topic: string;
      research_type?: 'comprehensive' | 'news' | 'academic' | 'market';
      max_results?: number;
      time_range?: 'week' | 'month' | 'quarter' | 'year' | 'all';
    }) {
      const taskId = `research_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;

      // Store task metadata (in production, this would be in a database)
      global.researchTasks = global.researchTasks || {};
      global.researchTasks[taskId] = {
        status: 'in_progress',
        started_at: new Date().toISOString(),
        params: params
      };

      // Simulate async research (in production, this would be a queue job)
      this.performDeepResearch(taskId, params).catch(error => {
        global.researchTasks[taskId].status = 'failed';
        global.researchTasks[taskId].error = error.message;
      });

      return {
        task_id: taskId,
        status: 'started',
        message: 'Deep research task initiated. Use deep_researcher_check to monitor progress.'
      };
    }
  • Background helper function that performs the actual deep research: generates section-specific queries, executes multiple web searches, collects results, and updates task status
    private async performDeepResearch(taskId: string, params: any) {
      try {
        const task = global.researchTasks[taskId];
        const results: any = { summary: '', sections: [] };

        // Determine time range
        let startDate: string | undefined;
        if (params.time_range && params.time_range !== 'all') {
          const now = new Date();
          const ranges = { week: 7, month: 30, quarter: 90, year: 365 };
          const days = ranges[params.time_range as keyof typeof ranges];
          startDate = new Date(now.getTime() - days * 24 * 60 * 60 * 1000).toISOString().split('T')[0];
        }

        // Perform multiple searches based on research type
        const searchQueries = this.getResearchQueries(params.topic, params.research_type || 'comprehensive');
        for (const [section, query] of Object.entries(searchQueries)) {
          const searchResults = await this.search({
            query: query as string,
            num_results: Math.floor((params.max_results || 50) / Object.keys(searchQueries).length),
            type: 'neural',
            start_published_date: startDate,
            include_text: true,
            include_summary: true,
            include_highlights: true
          });
          results.sections.push({ title: section, query: query, results: searchResults });
        }

        // Generate summary
        results.summary = `Completed deep research on "${params.topic}" with ${results.sections.length} sections and ${results.sections.reduce((acc: number, s: any) => acc + s.results.length, 0)} total results.`;

        task.status = 'completed';
        task.completed_at = new Date().toISOString();
        task.results = results;
      } catch (error) {
        throw error;
      }
    }
  • src/server.ts:30-37 (registration): registers deep_researcher_start as an enabled tool in the ExaServer constructor
    this.enabledTools = enabledTools || [
      'web_search_exa',
      'company_research_exa',
      'crawling_exa',
      'linkedin_search_exa',
      'deep_researcher_start',
      'deep_researcher_check'
    ];
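
Taken together, the excerpts show a start-then-poll lifecycle: startDeepResearch returns a task ID immediately while performDeepResearch fills in global.researchTasks[taskId] in the background. The following is a hypothetical usage sketch, not part of the server source; the function name, topic, and polling interval are illustrative:

    // Hypothetical caller of ExaClient.startDeepResearch; the task-store shape and
    // status values ('in_progress' | 'completed' | 'failed') follow the excerpts above.
    async function runResearch(client: ExaClient) {
      const { task_id } = await client.startDeepResearch({
        topic: "State of solid-state battery manufacturing",
        research_type: "comprehensive",
        max_results: 30,
        time_range: "year"
      });

      // Poll the in-memory task store until the background job resolves.
      // In practice a caller would go through deep_researcher_check instead.
      while (true) {
        const task = (global as any).researchTasks[task_id];
        if (task.status === "completed") return task.results;
        if (task.status === "failed") throw new Error(task.error);
        await new Promise(resolve => setTimeout(resolve, 2000)); // wait 2s between checks
      }
    }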

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/joerup/exa-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.