Glama

research

Conduct in-depth research using multiple AI providers, generating summaries, detailed reports, or academic outputs based on specific queries and sources.

Instructions

Conduct comprehensive research with multiple output formats

Input Schema

Name          Required  Description                                                                      Default
model         No        Specific model to use
outputFormat  No        Output format for research                                                       detailed
provider      No        AI provider to use (defaults to Azure if configured, otherwise best available)
query         Yes       Research query or topic
sources       No        Specific sources or contexts to consider
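Putting the schema together, a fully specified call might look like the following sketch (all values, including the model id, are illustrative, not defaults of the tool):

```typescript
// Illustrative arguments for the research tool; field names follow the schema above.
const args = {
  query: "Impact of retrieval-augmented generation on factual accuracy",
  provider: "gemini",          // optional: "openai" | "gemini" | "azure" | "grok"
  model: "gemini-1.5-pro",     // optional, provider-specific model id (hypothetical value)
  outputFormat: "academic",    // optional: "summary" | "detailed" | "academic"; defaults to "detailed"
  sources: ["arxiv.org", "internal design docs"],  // optional
};
```

Only `query` is required; everything else falls back to the defaults described above.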

Implementation Reference

  • Core handler function that executes the research tool: selects the AI provider, builds a specialized system prompt based on the output format and sources, calls provider.generateText with research-specific configuration (high reasoning effort for OpenAI/Azure/Grok, search grounding for Gemini, low temperature), and formats the response with metadata.
    async handleResearch(params: z.infer<typeof ResearchSchema>) {
      // Use provided provider or get the preferred one (Azure if configured)
      const providerName =
        params.provider ||
        (await this.providerManager.getPreferredProvider(["openai", "gemini", "azure", "grok"]));
      const provider = await this.providerManager.getProvider(providerName);

      // Build research prompts based on output format
      const formatInstructions = {
        summary: "Provide a concise summary of key findings and insights.",
        detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
        academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
      };

      const systemPrompt = `You are an expert researcher with deep knowledge across multiple domains. Your task is to conduct thorough research, analyze information critically, and present findings clearly.
    ${params.sources ? `Consider these specific sources or contexts: ${params.sources.join(", ")}` : ""}
    ${formatInstructions[params.outputFormat]}`;

      const response = await provider.generateText({
        prompt: `Research the following: ${params.query}`,
        model: params.model,
        systemPrompt,
        reasoningEffort:
          providerName === "openai" || providerName === "azure" || providerName === "grok"
            ? "high"
            : undefined,
        useSearchGrounding: providerName === "gemini", // Always enable search for research with Gemini
        temperature: 0.4, // Lower temperature for research accuracy
      });

      return {
        content: [
          {
            type: "text",
            text: response.text,
          },
        ],
        metadata: {
          provider: providerName,
          model: response.model,
          outputFormat: params.outputFormat,
          usage: response.usage,
          ...response.metadata,
        },
      };
    }
  • src/server.ts:264-272 (registration)
    Registers the 'research' tool with MCP server, providing title, description, input schema (ResearchSchema), and handler that loads AIToolHandlers and calls handleResearch.
    // Register research tool
    server.registerTool(
      "research",
      {
        title: "Research",
        description: "Conduct comprehensive research with multiple output formats",
        inputSchema: ResearchSchema.shape,
      },
      async (args) => {
        const aiHandlers = await getHandlers();
        return await aiHandlers.handleResearch(args);
      }
    );
  • Zod schema defining input parameters for the research tool: provider, query (required), sources, model, outputFormat.
    const ResearchSchema = z.object({
      provider: z
        .enum(["openai", "gemini", "azure", "grok"])
        .optional()
        .describe("AI provider to use (defaults to Azure if configured, otherwise best available)"),
      query: z.string().describe("Research query or topic"),
      sources: z.array(z.string()).optional().describe("Specific sources or contexts to consider"),
      model: z.string().optional().describe("Specific model to use"),
      outputFormat: z
        .enum(["summary", "detailed", "academic"])
        .default("detailed")
        .describe("Output format for research"),
    });
  • Duplicate Zod schema for research tool inputs, used for typing the handleResearch method.
    // Schema for the research tool
    const ResearchSchema = z.object({
      provider: z
        .enum(["openai", "gemini", "azure", "grok"])
        .optional()
        .describe("AI provider to use (defaults to Azure if configured, otherwise best available)"),
      query: z.string().describe("Research query or topic"),
      sources: z.array(z.string()).optional().describe("Specific sources or contexts to consider"),
      model: z.string().optional().describe("Specific model to use"),
      outputFormat: z
        .enum(["summary", "detailed", "academic"])
        .default("detailed")
        .describe("Output format for research"),
    });
  • src/server.ts:551-570 (registration)
    Registers a prompt for the research tool, enabling natural-language invocation with a formatted user message that includes the query, format, sources, and provider.
    server.registerPrompt(
      "research",
      {
        title: "Comprehensive Research",
        description: "Conduct thorough research on any topic with multiple output formats",
        argsSchema: {
          query: z.string().optional(),
          provider: z.string().optional(),
          model: z.string().optional(),
          outputFormat: z.string().optional(),
          sources: z.string().optional(),
        },
      },
      (args) => ({
        messages: [
          {
            role: "user",
            content: {
              type: "text",
              text: `Research this topic thoroughly: ${args.query || "Please specify a research topic or query."}${args.outputFormat ? ` (format: ${args.outputFormat})` : ""}${args.sources ? `\n\nFocus on these sources: ${args.sources}` : ""}${args.provider ? ` (using ${args.provider} provider)` : ""}`,
            },
          },
        ],
      })
    );
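The system-prompt assembly in handleResearch can be sketched as a standalone function. This is an illustrative mirror of the handler's logic, not the exported implementation; the type and function names are assumptions:

```typescript
// Sketch of how handleResearch builds its system prompt from the tool parameters.
type OutputFormat = "summary" | "detailed" | "academic";

interface ResearchParams {
  query: string;
  sources?: string[];
  outputFormat: OutputFormat;
}

// Per-format instructions, as in the handler above.
const formatInstructions: Record<OutputFormat, string> = {
  summary: "Provide a concise summary of key findings and insights.",
  detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
  academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
};

function buildSystemPrompt(params: ResearchParams): string {
  // Sources are optional; when present they are joined into a single directive line.
  const sourcesLine = params.sources
    ? `Consider these specific sources or contexts: ${params.sources.join(", ")}`
    : "";
  return `You are an expert researcher with deep knowledge across multiple domains. Your task is to conduct thorough research, analyze information critically, and present findings clearly.
${sourcesLine}
${formatInstructions[params.outputFormat]}`;
}
```

Because the format instruction is appended last, it acts as the final directive the model sees before the user's research query.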

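The user-message template built by the registered prompt can likewise be sketched as a small pure function (the function name is illustrative; the template mirrors the prompt registration and its argsSchema):

```typescript
// Sketch of the user-message text produced by the 'research' prompt registration.
interface PromptArgs {
  query?: string;
  provider?: string;
  outputFormat?: string;
  sources?: string;
}

function researchPromptText(args: PromptArgs): string {
  // Falls back to a clarification request when no query is supplied.
  return (
    `Research this topic thoroughly: ${args.query || "Please specify a research topic or query."}` +
    (args.outputFormat ? ` (format: ${args.outputFormat})` : "") +
    (args.sources ? `\n\nFocus on these sources: ${args.sources}` : "") +
    (args.provider ? ` (using ${args.provider} provider)` : "")
  );
}
```

All arguments are optional strings here, in contrast to the stricter ResearchSchema: prompt arguments arrive as raw text, so validation happens later when the tool itself is invoked.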
MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RealMikeChong/ultra-mcp'
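The same request can be made from Node 18+ (which ships a global fetch). This is a minimal sketch; the helper names are illustrative and not part of ultra-mcp or the directory API:

```typescript
// Build the directory API URL for a given server slug.
function serverInfoUrl(slug: string): string {
  return `https://glama.ai/api/mcp/v1/servers/${slug}`;
}

// Fetch server metadata from the Glama MCP directory API.
async function getServerInfo(slug: string): Promise<unknown> {
  const res = await fetch(serverInfoUrl(slug));
  if (!res.ok) throw new Error(`Directory API request failed: ${res.status}`);
  return res.json();
}
```

For example, `getServerInfo("RealMikeChong/ultra-mcp")` targets the same endpoint as the curl command above.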

If you have feedback or need assistance with the MCP directory API, please join our Discord server.