Glama

Research

research

Conduct comprehensive research using multiple AI providers to generate summaries, detailed reports, or academic formats based on your query.

Instructions

Conduct comprehensive research with multiple output formats

Input Schema

| Name         | Required | Description                                                                    | Default  |
|--------------|----------|--------------------------------------------------------------------------------|----------|
| provider     | No       | AI provider to use (defaults to Azure if configured, otherwise best available) |          |
| query        | Yes      | Research query or topic                                                        |          |
| sources      | No       | Specific sources or contexts to consider                                       |          |
| model        | No       | Specific model to use                                                          |          |
| outputFormat | No       | Output format for research                                                     | detailed |

Implementation Reference

  • Core handler function that executes the 'research' tool. Selects AI provider, constructs research-specific system prompt based on output format and sources, generates response via provider.generateText, and returns structured content with metadata.
    async handleResearch(params: z.infer<typeof ResearchSchema>) {
      // Use provided provider or get the preferred one (Azure if configured)
      const providerName = params.provider || (await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
      const provider = await this.providerManager.getProvider(providerName);
      
      // Build research prompts based on output format
      const formatInstructions = {
        summary: "Provide a concise summary of key findings and insights.",
        detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
        academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
      };
    
      const systemPrompt = `You are an expert researcher with deep knowledge across multiple domains. 
      Your task is to conduct thorough research, analyze information critically, and present findings clearly.
      ${params.sources ? `Consider these specific sources or contexts: ${params.sources.join(", ")}` : ""}
      ${formatInstructions[params.outputFormat]}`;
    
      const response = await provider.generateText({
        prompt: `Research the following: ${params.query}`,
        model: params.model,
        systemPrompt,
        reasoningEffort: (providerName === "openai" || providerName === "azure" || providerName === "grok") ? "high" : undefined,
        useSearchGrounding: providerName === "gemini", // Always enable search for research with Gemini
        temperature: 0.4, // Lower temperature for research accuracy
      });
    
      return {
        content: [
          {
            type: "text",
            text: response.text,
          },
        ],
        metadata: {
          provider: providerName,
          model: response.model,
          outputFormat: params.outputFormat,
          usage: response.usage,
          ...response.metadata,
        },
      };
    }
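The prompt-construction logic inside the handler above can be isolated into a standalone helper for testing. The sketch below is hypothetical (the function name `buildSystemPrompt` does not appear in the source) and carries no MCP or zod dependencies, but it reproduces the same format-instruction lookup and optional sources line:

```typescript
// Hypothetical standalone sketch of the system-prompt construction in handleResearch.
type OutputFormat = "summary" | "detailed" | "academic";

const formatInstructions: Record<OutputFormat, string> = {
  summary: "Provide a concise summary of key findings and insights.",
  detailed: "Provide a comprehensive analysis with detailed findings, evidence, and conclusions.",
  academic: "Present findings in an academic format with clear structure, citations where possible, and scholarly analysis.",
};

function buildSystemPrompt(outputFormat: OutputFormat, sources?: string[]): string {
  // Only emit the sources line when sources were actually supplied.
  const sourceLine = sources && sources.length > 0
    ? `Consider these specific sources or contexts: ${sources.join(", ")}`
    : "";
  return [
    "You are an expert researcher with deep knowledge across multiple domains.",
    "Your task is to conduct thorough research, analyze information critically, and present findings clearly.",
    sourceLine,
    formatInstructions[outputFormat],
  ].filter(Boolean).join("\n");
}
```

Factoring the prompt out this way makes the format/sources interaction unit-testable without stubbing a provider.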
  • src/server.ts:264-272 (registration)
    Registers the 'research' tool with MCP server, defining title, description, input schema, and handler delegation to AIToolHandlers.handleResearch.
    // Register research tool
    server.registerTool("research", {
      title: "Research",
      description: "Conduct comprehensive research with multiple output formats",
      inputSchema: ResearchSchema.shape,
    }, async (args) => {
      const aiHandlers = await getHandlers();
      return await aiHandlers.handleResearch(args);
    });
  • Zod schema defining input parameters for the 'research' tool: provider, query (required), sources, model, outputFormat.
    const ResearchSchema = z.object({
      provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().describe("AI provider to use (defaults to Azure if configured, otherwise best available)"),
      query: z.string().describe("Research query or topic"),
      sources: z.array(z.string()).optional().describe("Specific sources or contexts to consider"),
      model: z.string().optional().describe("Specific model to use"),
      outputFormat: z.enum(["summary", "detailed", "academic"]).default("detailed").describe("Output format for research"),
    });
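Because outputFormat carries a `.default("detailed")`, callers may omit it and the parsed params still satisfy the handler's type. A minimal sketch of the normalization zod performs — a hypothetical hand-rolled helper, not the library's API — looks like this:

```typescript
// Hypothetical sketch of what ResearchSchema.parse does for this tool's inputs:
// reject a missing query, pass optional fields through, and fill the default.
type RawResearchParams = {
  provider?: "openai" | "gemini" | "azure" | "grok";
  query?: string;
  sources?: string[];
  model?: string;
  outputFormat?: "summary" | "detailed" | "academic";
};

function normalizeResearchParams(raw: RawResearchParams) {
  if (!raw.query) {
    throw new Error("query is required");
  }
  return {
    ...raw,
    query: raw.query,
    outputFormat: raw.outputFormat ?? "detailed", // zod's .default("detailed")
  };
}
```

After normalization, `formatInstructions[params.outputFormat]` in the handler is always a defined lookup.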
  • Zod schema for 'research' tool inputs, used in handler type inference (identical to server.ts definition).
  • src/server.ts:551-569 (registration)
    Registers a prompt template for natural language invocation of the 'research' tool, mapping args to a user message prompt.
    server.registerPrompt("research", {
      title: "Comprehensive Research",
      description: "Conduct thorough research on any topic with multiple output formats",
      argsSchema: {
        query: z.string().optional(),
        provider: z.string().optional(),
        model: z.string().optional(),
        outputFormat: z.string().optional(),
        sources: z.string().optional(),
      },
    }, (args) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Research this topic thoroughly: ${args.query || 'Please specify a research topic or query.'}${args.outputFormat ? ` (format: ${args.outputFormat})` : ''}${args.sources ? `\n\nFocus on these sources: ${args.sources}` : ''}${args.provider ? ` (using ${args.provider} provider)` : ''}`
        }
      }]
    }));
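The message text this template produces can be reproduced with a plain function. The sketch below is hypothetical (`renderResearchPrompt` is not a name from the source); it mirrors the string concatenation in the registration above:

```typescript
// Hypothetical sketch of the text built by the 'research' prompt template.
type ResearchPromptArgs = {
  query?: string;
  provider?: string;
  model?: string;
  outputFormat?: string;
  sources?: string;
};

function renderResearchPrompt(args: ResearchPromptArgs): string {
  return (
    `Research this topic thoroughly: ${args.query || "Please specify a research topic or query."}` +
    (args.outputFormat ? ` (format: ${args.outputFormat})` : "") +
    (args.sources ? `\n\nFocus on these sources: ${args.sources}` : "") +
    (args.provider ? ` (using ${args.provider} provider)` : "")
  );
}
```

Note that a missing query does not raise an error here; the template degrades to a request for clarification, which the model then relays to the user.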
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'comprehensive research' and 'multiple output formats', but does not explain what 'comprehensive' entails (e.g., depth, sources, time), how outputs differ, or any operational traits like rate limits, authentication needs, or potential side effects. This leaves significant gaps for a tool with 5 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core action ('conduct comprehensive research'). It avoids unnecessary words, though it could be more structured by explicitly listing key capabilities. Every part earns its place, making it concise but slightly under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a research tool with 5 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on what 'research' involves, how results are returned, error handling, or behavioral constraints. This leaves the agent with insufficient context to use the tool effectively beyond basic parameter input.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters well. The description adds no specific meaning beyond the schema, such as explaining how 'sources' interact with 'query' or what 'academic' output entails. With high schema coverage, the baseline score of 3 is appropriate, as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'conduct[s] comprehensive research with multiple output formats', which provides a general purpose (research) and mentions output formats. However, it lacks specificity about what 'research' entails (e.g., web search, document analysis, data synthesis) and does not clearly distinguish it from sibling tools like 'investigate' or 'search-vectors', making it somewhat vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or exclusions, and with sibling tools like 'investigate' and 'search-vectors' present, it fails to differentiate usage scenarios, offering minimal direction to the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
