# research
Conduct comprehensive research on a query using multiple AI providers, producing a summary, a detailed report, or an academic-style analysis.
## Instructions
Conduct comprehensive research with multiple output formats
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `provider` | No | AI provider to use (defaults to Azure if configured, otherwise the best available) | |
| `query` | Yes | Research query or topic | |
| `sources` | No | Specific sources or contexts to consider | |
| `model` | No | Specific model to use | |
| `outputFormat` | No | Output format for research (`summary`, `detailed`, or `academic`) | `detailed` |
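A call to the tool might pass arguments like the following. This is an illustrative sketch only: the field names come from the schema above, but the values are hypothetical.

```typescript
// Hypothetical example arguments for the research tool.
// Field names match the input schema; values are illustrative only.
const exampleArgs = {
  provider: "gemini", // optional; omit to use the preferred provider
  query: "Recent advances in retrieval-augmented generation",
  sources: ["arXiv", "ACL Anthology"], // optional contexts to consider
  outputFormat: "summary", // "summary" | "detailed" | "academic"
};

console.log(exampleArgs.query);
```

Omitting `outputFormat` entirely would cause the schema default of `detailed` to apply.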
## Implementation Reference
- `src/handlers/ai-tools.ts:222-263` (handler): Core handler for the `research` tool. Selects an AI provider, builds a research-specific system prompt from the output format and optional sources, generates a response via `provider.generateText`, and returns structured content with metadata.

  ```typescript
  async handleResearch(params: z.infer<typeof ResearchSchema>) {
    // Use provided provider or get the preferred one (Azure if configured)
    const providerName = params.provider ||
      (await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
    const provider = await this.providerManager.getProvider(providerName);

    // Build research prompts based on output format
    const formatInstructions = {
      summary: "Provide a concise summary of key findings and insights.",
      detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
      academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
    };

    const systemPrompt = `You are an expert researcher with deep knowledge across multiple domains. Your task is to conduct thorough research, analyze information critically, and present findings clearly.
  ${params.sources ? `Consider these specific sources or contexts: ${params.sources.join(", ")}` : ""}
  ${formatInstructions[params.outputFormat]}`;

    const response = await provider.generateText({
      prompt: `Research the following: ${params.query}`,
      model: params.model,
      systemPrompt,
      reasoningEffort:
        (providerName === "openai" || providerName === "azure" || providerName === "grok")
          ? "high"
          : undefined,
      useSearchGrounding: providerName === "gemini", // Always enable search for research with Gemini
      temperature: 0.4, // Lower temperature for research accuracy
    });

    return {
      content: [
        {
          type: "text",
          text: response.text,
        },
      ],
      metadata: {
        provider: providerName,
        model: response.model,
        outputFormat: params.outputFormat,
        usage: response.usage,
        ...response.metadata,
      },
    };
  }
  ```
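The handler's prompt construction can be isolated as a pure function for clarity. This is a sketch: `buildSystemPrompt` is a hypothetical name, not part of the codebase, but its output mirrors the handler's logic.

```typescript
type OutputFormat = "summary" | "detailed" | "academic";

const formatInstructions: Record<OutputFormat, string> = {
  summary: "Provide a concise summary of key findings and insights.",
  detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
  academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
};

// Hypothetical extraction of the handler's prompt assembly:
// a fixed researcher persona, an optional sources line, and a
// format-specific instruction selected from the table above.
function buildSystemPrompt(outputFormat: OutputFormat, sources?: string[]): string {
  const sourceLine = sources
    ? `Consider these specific sources or contexts: ${sources.join(", ")}`
    : "";
  return `You are an expert researcher with deep knowledge across multiple domains. Your task is to conduct thorough research, analyze information critically, and present findings clearly.
${sourceLine}
${formatInstructions[outputFormat]}`;
}

console.log(buildSystemPrompt("summary", ["arXiv"]));
```

Separating the prompt assembly this way makes the format/sources branching testable without touching the provider layer.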
- `src/server.ts:264-272` (registration): Registers the `research` tool with the MCP server, defining its title, description, and input schema, and delegating to `AIToolHandlers.handleResearch`.

  ```typescript
  // Register research tool
  server.registerTool("research", {
    title: "Research",
    description: "Conduct comprehensive research with multiple output formats",
    inputSchema: ResearchSchema.shape,
  }, async (args) => {
    const aiHandlers = await getHandlers();
    return await aiHandlers.handleResearch(args);
  });
  ```
- `src/server.ts:26-32` (schema): Zod schema defining the input parameters for the `research` tool: `provider`, `query` (required), `sources`, `model`, and `outputFormat`.

  ```typescript
  const ResearchSchema = z.object({
    provider: z.enum(["openai", "gemini", "azure", "grok"]).optional()
      .describe("AI provider to use (defaults to Azure if configured, otherwise best available)"),
    query: z.string().describe("Research query or topic"),
    sources: z.array(z.string()).optional().describe("Specific sources or contexts to consider"),
    model: z.string().optional().describe("Specific model to use"),
    outputFormat: z.enum(["summary", "detailed", "academic"]).default("detailed")
      .describe("Output format for research"),
  });
  ```
- `src/handlers/ai-tools.ts:26-32` (schema): Zod schema for the `research` tool's inputs, used for handler type inference. The definition is identical to the one in `src/server.ts:26-32` above.
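The one non-obvious schema behavior is the `outputFormat` default. A dependency-free sketch of what `z.enum([...]).default("detailed")` does for callers (`ResearchParams` and `applyDefaults` are hypothetical names, not part of the codebase):

```typescript
// Hypothetical mirror of the schema's shape and defaulting behavior:
// outputFormat falls back to "detailed" when the caller omits it.
interface ResearchParams {
  provider?: "openai" | "gemini" | "azure" | "grok";
  query: string;
  sources?: string[];
  model?: string;
  outputFormat?: "summary" | "detailed" | "academic";
}

function applyDefaults(params: ResearchParams) {
  return { ...params, outputFormat: params.outputFormat ?? "detailed" };
}

console.log(applyDefaults({ query: "quantum error correction" }).outputFormat);
```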
- `src/server.ts:551-569` (registration): Registers a prompt template for natural-language invocation of the `research` tool, mapping its arguments into a user-message prompt.

  ```typescript
  server.registerPrompt("research", {
    title: "Comprehensive Research",
    description: "Conduct thorough research on any topic with multiple output formats",
    argsSchema: {
      query: z.string().optional(),
      provider: z.string().optional(),
      model: z.string().optional(),
      outputFormat: z.string().optional(),
      sources: z.string().optional(),
    },
  }, (args) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Research this topic thoroughly: ${args.query || 'Please specify a research topic or query.'}${args.outputFormat ? ` (format: ${args.outputFormat})` : ''}${args.sources ? `\n\nFocus on these sources: ${args.sources}` : ''}${args.provider ? ` (using ${args.provider} provider)` : ''}`
      }
    }]
  }));
  ```
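The prompt template above builds its message text from a chain of inline conditionals. A sketch of the same assembly as a standalone function (`formatPromptText` is a hypothetical name; the real code builds the string inline):

```typescript
// Hypothetical extraction of the prompt template's text assembly.
// Each optional argument appends its fragment only when present,
// matching the inline conditionals in the registered prompt.
function formatPromptText(args: {
  query?: string;
  provider?: string;
  model?: string;
  outputFormat?: string;
  sources?: string;
}): string {
  let text = `Research this topic thoroughly: ${args.query || "Please specify a research topic or query."}`;
  if (args.outputFormat) text += ` (format: ${args.outputFormat})`;
  if (args.sources) text += `\n\nFocus on these sources: ${args.sources}`;
  if (args.provider) text += ` (using ${args.provider} provider)`;
  return text;
}

console.log(formatPromptText({ query: "LLM evaluation methods", outputFormat: "summary" }));
```

Note that an empty `query` string also triggers the fallback text, since the original uses `||` rather than a strict undefined check.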