
research

Analyze and enhance research on any topic using advanced AI models, providing detailed insights and context for informed decision-making and knowledge expansion.

Instructions

Performs deep research on a given topic using Perplexity Sonar and enhances the result.

Input Schema

Name    Required   Description                                   Default
query   Yes        The research query or topic to investigate    (none)
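For concreteness, a tools/call request against this schema might look like the following (the JSON-RPC envelope follows the standard MCP shape; the query value is illustrative, and the tool is registered under the name research-manager, as shown in the Implementation Reference below):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "research-manager",
    "arguments": {
      "query": "Best practices for streaming progress from long-running MCP tools"
    }
  }
}
```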

Implementation Reference

  • Main handler function that executes the research tool: creates background job, calls Perplexity via helper for initial research, enhances with LLM using structured prompt, saves Markdown report to file.
    export const performResearch: ToolExecutor = async (
      params: Record<string, unknown>,
      config: OpenRouterConfig,
      context?: ToolExecutionContext // Add context parameter
    ): Promise<CallToolResult> => {
      // ---> Step 2.5(RM).2: Inject Dependencies & Get Session ID <---
      const sessionId = context?.sessionId || 'unknown-session';
      if (sessionId === 'unknown-session') {
          logger.warn({ tool: 'performResearch' }, 'Executing tool without a valid sessionId. SSE progress updates will not be sent.');
      }
    
      // We can safely access 'query' because executeTool validated it
      const query = params.query as string;
    
      // ---> Step 2.5(RM).3: Create Job & Return Job ID <---
      const jobId = jobManager.createJob('research', params); // Use original tool name 'research'
      logger.info({ jobId, tool: 'research', sessionId }, 'Starting background job.');
    
      // Return immediately
      const initialResponse = formatBackgroundJobInitiationResponse(
        jobId,
        'Research',
        `Your research request for query "${query.substring(0, 50)}..." has been submitted. You can retrieve the result using the job ID.`
      );
    
      // ---> Step 2.5(RM).4: Wrap Logic in Async Block <---
      setImmediate(async () => {
        const logs: string[] = []; // Keep logs specific to this job execution
        let filePath: string = ''; // Define filePath in outer scope for catch block
    
        // ---> Step 2.5(RM).7: Update Final Result/Error Handling (Try Block Start) <---
        try {
          // ---> Step 2.5(RM).6: Add Progress Updates (Initial) <---
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Starting research process...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Starting research process...');
          logs.push(`[${new Date().toISOString()}] Starting research for: ${query.substring(0, 50)}...`);
    
          // Ensure directories are initialized before writing
          await initDirectories();
    
          // Generate a filename for storing research (using the potentially configured RESEARCH_DIR)
          const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
          const sanitizedQuery = query.substring(0, 30).toLowerCase().replace(/[^a-z0-9]+/g, '-');
          const filename = `${timestamp}-${sanitizedQuery}-research.md`;
          filePath = path.join(RESEARCH_DIR, filename); // Assign to outer scope variable
    
          // ---> Step 2.5(RM).6: Add Progress Updates (Perplexity Call Start) <---
          logger.info({ jobId }, `Performing initial research query via Perplexity: ${query.substring(0, 50)}...`);
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Performing initial research query via Perplexity...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Performing initial research query via Perplexity...');
          logs.push(`[${new Date().toISOString()}] Calling Perplexity for initial research.`);
    
          // Use Perplexity model for research via centralized helper
          const researchResult = await performResearchQuery(query, config);
    
          // ---> Step 2.5(RM).6: Add Progress Updates (Perplexity Call End / LLM Call Start) <---
          logger.info({ jobId }, "Research Manager: Initial research complete. Enhancing results using direct LLM call...");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Initial research complete. Enhancing results via LLM...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Initial research complete. Enhancing results via LLM...');
          logs.push(`[${new Date().toISOString()}] Perplexity research complete. Calling LLM for enhancement.`);
    
          const enhancementPrompt = `Synthesize and structure the following initial research findings based on the original query.\n\nOriginal Query: ${query}\n\nInitial Research Findings:\n${researchResult}`;
    
          const enhancedResearch = await performFormatAwareLlmCallWithCentralizedConfig(
            enhancementPrompt,
            RESEARCH_SYSTEM_PROMPT, // System prompt guides the structuring
            'research_enhancement', // Define a logical task name for potential mapping
            'markdown', // Explicitly specify markdown format
            undefined, // No schema for markdown
            0.4 // Slightly higher temp for synthesis might be okay
          );

          // ---> Step 2.5(RM).6: Add Progress Updates (LLM Call End) <---
          logger.info({ jobId }, "Research Manager: Enhancement completed.");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Processing enhanced research...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Processing enhanced research...');
          logs.push(`[${new Date().toISOString()}] LLM enhancement complete.`);

          // Basic validation
          if (!enhancedResearch || typeof enhancedResearch !== 'string' || !enhancedResearch.trim().startsWith('# Research Report:')) {
            logger.warn({ jobId, markdown: enhancedResearch?.substring(0, 100) }, 'Research enhancement returned empty or potentially invalid Markdown format.');
            logs.push(`[${new Date().toISOString()}] Validation Error: LLM output invalid format.`);
            throw new ToolExecutionError('Research enhancement returned empty or invalid Markdown content.');
          }

          // Format the research (already should be formatted by LLM, just add timestamp)
          const formattedResult = `${enhancedResearch}\n\n_Generated: ${new Date().toLocaleString()}_`;

          // ---> Step 2.5(RM).6: Add Progress Updates (Saving File) <---
          logger.info({ jobId }, `Saving research to ${filePath}...`);
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, `Saving research to file...`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `Saving research to file...`);
          logs.push(`[${new Date().toISOString()}] Saving research to ${filePath}.`);

          // Save the result
          await fs.writeFile(filePath, formattedResult, 'utf8');
          logger.info({ jobId }, `Research result saved to ${filePath}`);
          logs.push(`[${new Date().toISOString()}] Research saved successfully.`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `Research saved successfully.`);

          // ---> Step 2.5(RM).7: Update Final Result/Error Handling (Set Success Result) <---
          const finalResult: CallToolResult = {
            // Include file path in success message
            content: [{ type: "text", text: `Research completed successfully and saved to: ${filePath}\n\n${formattedResult}` }],
            isError: false
          };
          jobManager.setJobResult(jobId, finalResult);
          // Optional explicit SSE: sseNotifier.sendProgress(sessionId, jobId, JobStatus.COMPLETED, 'Research completed successfully.');
    
        // ---> Step 2.5(RM).7: Update Final Result/Error Handling (Catch Block) <---
        } catch (error) {
          const errorMsg = error instanceof Error ? error.message : String(error);
          logger.error({ err: error, jobId, tool: 'research', query }, `Research Manager Error: ${errorMsg}`);
          logs.push(`[${new Date().toISOString()}] Error: ${errorMsg}`);
    
          let appError: AppError;
          const cause = error instanceof Error ? error : undefined;
          if (error instanceof AppError) {
            appError = error; // Use existing AppError
          } else {
            appError = new ToolExecutionError(`Failed to perform research for query "${query}": ${errorMsg}`, { query, filePath }, cause);
          }
    
          const mcpError = new McpError(ErrorCode.InternalError, appError.message, appError.context);
          const errorResult: CallToolResult = {
            content: [{ type: 'text', text: `Error during background job ${jobId}: ${mcpError.message}\n\nLogs:\n${logs.join('\n')}` }],
            isError: true,
            errorDetails: mcpError
          };
    
          // Store error result in Job Manager
          jobManager.setJobResult(jobId, errorResult);
          // Send final failed status via SSE (optional if jobManager handles it)
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.FAILED, `Job failed: ${mcpError.message}`);
        }
      }); // ---> END OF setImmediate WRAPPER <---
    
      return initialResponse; // Return the initial response with Job ID
    };
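The handler depends on a jobManager (createJob, updateJobStatus, setJobResult) and an sseNotifier (sendProgress) whose implementations are not shown on this page. A minimal in-memory sketch of the job-manager contract, with names and shapes inferred from the calls above (the real Vibe Coder implementation may differ), could look like:

```typescript
import { randomUUID } from 'node:crypto';

// Status values mirror those referenced by the handler (JobStatus.RUNNING etc.).
enum JobStatus {
  PENDING = 'PENDING',
  RUNNING = 'RUNNING',
  COMPLETED = 'COMPLETED',
  FAILED = 'FAILED',
}

interface Job {
  id: string;
  toolName: string;
  params: Record<string, unknown>;
  status: JobStatus;
  message?: string;
  result?: { isError: boolean };
}

class InMemoryJobManager {
  private jobs = new Map<string, Job>();

  // Matches jobManager.createJob('research', params) in the handler.
  createJob(toolName: string, params: Record<string, unknown>): string {
    const id = randomUUID();
    this.jobs.set(id, { id, toolName, params, status: JobStatus.PENDING });
    return id;
  }

  // Matches jobManager.updateJobStatus(jobId, JobStatus.RUNNING, message).
  updateJobStatus(id: string, status: JobStatus, message?: string): void {
    const job = this.jobs.get(id);
    if (!job) throw new Error(`Unknown job: ${id}`);
    job.status = status;
    job.message = message;
  }

  // Matches jobManager.setJobResult(jobId, result); derives the final status.
  setJobResult(id: string, result: { isError: boolean }): void {
    const job = this.jobs.get(id);
    if (!job) throw new Error(`Unknown job: ${id}`);
    job.result = result;
    job.status = result.isError ? JobStatus.FAILED : JobStatus.COMPLETED;
  }

  getJob(id: string): Job | undefined {
    return this.jobs.get(id);
  }
}
```

A client that received the initial response would then poll for the job by ID until the status reaches COMPLETED or FAILED.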
  • Input schema definition for the research tool (query: string min 3 chars) and full ToolDefinition.
    const researchInputSchemaShape = {
      query: z.string().min(3, { message: "Query must be at least 3 characters long." }).describe("The research query or topic to investigate")
    };
    
    // Tool definition for the research tool, using the raw shape
    const researchToolDefinition: ToolDefinition = {
      name: "research-manager", // Align with mcp-config.json and hybrid-matcher expectations
      description: "Performs comprehensive research on any technical topic including frameworks, libraries, packages, tools, and best practices using Perplexity Sonar.",
      inputSchema: researchInputSchemaShape, // Use the raw shape here
      executor: performResearch // Reference the adapted function
    };
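As the handler's comment notes ("We can safely access 'query' because executeTool validated it"), executeTool is expected to validate params against this shape before invoking the executor. Stripped of zod, the single constraint amounts to the check below (a zod-free sketch; executeTool itself and its error shape are not shown on this page):

```typescript
// Mirrors researchInputSchemaShape: query must be a string of at least 3 characters.
function validateResearchParams(params: Record<string, unknown>): string[] {
  const errors: string[] = [];
  const query = params['query'];
  if (typeof query !== 'string') {
    errors.push('query: expected a string');
  } else if (query.length < 3) {
    errors.push('Query must be at least 3 characters long.');
  }
  return errors;
}
```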
  • Registration of the 'research-manager' tool with the central tool registry.
    if (!isToolRegistered(researchToolDefinition.name)) {
      registerTool(researchToolDefinition);
      logger.debug(`Tool "${researchToolDefinition.name}" registered successfully`);
    } else {
      logger.debug(`Tool "${researchToolDefinition.name}" already registered, skipping`);
    }
  • System prompt used for LLM to structure and enhance the initial Perplexity research into a comprehensive Markdown report.
    const RESEARCH_SYSTEM_PROMPT = `
    # ROLE & GOAL
    You are an expert AI Research Specialist. Your goal is to synthesize initial research findings and the original user query into a comprehensive, well-structured, and insightful research report in Markdown format.
    
    # CORE TASK
    Process the initial research findings (provided as context) related to the user's original 'query'. Enhance, structure, and synthesize this information into a high-quality research report.
    
    # INPUT HANDLING
    - The user prompt will contain the original 'query' and the initial research findings (likely from Perplexity) under a heading like 'Incorporate this information:'.
    - Your task is *not* to perform new research, but to *refine, structure, and deepen* the provided information based on the original query.
    
    # RESEARCH CONTEXT INTEGRATION (Your Input IS the Context)
    - Treat the provided research findings as your primary source material.
    - Analyze the findings for key themes, data points, conflicting information, and gaps.
    - Synthesize the information logically, adding depth and interpretation where possible. Do not simply reformat the input.
    - If the initial research seems incomplete based on the original query, explicitly state the limitations or areas needing further investigation in the 'Limitations' section.
    
    # OUTPUT FORMAT & STRUCTURE (Strict Markdown)
    - Your entire response **MUST** be valid Markdown.
    - Start **directly** with the main title: '# Research Report: [Topic from Original Query]'
    - Use the following sections with the specified Markdown heading levels. Include all sections, even if brief.
    
      ## 1. Executive Summary
      - Provide a brief (2-4 sentence) overview of the key findings and conclusions based *only* on the provided research content.
    
      ## 2. Key Findings
      - List the most important discoveries or data points from the research as bullet points.
      - Directly synthesize information from the provided research context.
    
      ## 3. Detailed Analysis
      - Elaborate on the key findings.
      - Organize the information logically using subheadings (###).
      - Discuss different facets of the topic, incorporating various points from the research.
      - Compare and contrast different viewpoints or data points if present in the research.
    
      ## 4. Practical Applications / Implications
      - Discuss the real-world relevance or potential uses of the researched information.
      - How can this information be applied? What are the consequences?
    
      ## 5. Limitations and Caveats
      - Acknowledge any limitations mentioned in the research findings.
      - Identify potential gaps or areas where the provided research seems incomplete relative to the original query.
      - Mention any conflicting information found in the research.
    
      ## 6. Conclusion & Recommendations (Optional)
      - Summarize the main takeaways.
      - If appropriate based *only* on the provided research, suggest potential next steps or areas for further investigation.
    
    # QUALITY ATTRIBUTES
    - **Synthesized:** Do not just regurgitate the input; organize, connect, and add analytical value.
    - **Structured:** Strictly adhere to the specified Markdown format and sections.
    - **Accurate:** Faithfully represent the information provided in the research context.
    - **Comprehensive (within context):** Cover the key aspects present in the provided research relative to the query.
    - **Clear & Concise:** Use precise language.
    - **Objective:** Present the information neutrally, clearly separating findings from interpretation.
    
    # CONSTRAINTS (Do NOT Do the Following)
    - **NO Conversational Filler:** Start directly with the '# Research Report: ...' title.
    - **NO New Research:** Do not attempt to access external websites or knowledge beyond the provided research context. Your task is synthesis and structuring.
    - **NO Hallucination:** Do not invent findings or data not present in the input.
    - **NO Process Commentary:** Do not mention Perplexity, Gemini, or the synthesis process itself.
    - **Strict Formatting:** Use \`##\` for main sections and \`###\` for subheadings within the Detailed Analysis. Use bullet points for Key Findings.
    `;
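Note that the handler's basic validation only accepts LLM output that begins with '# Research Report:', matching this prompt's first formatting instruction. A conforming report would therefore open like the following skeleton (the topic and bullet content are purely illustrative):

```markdown
# Research Report: Server-Sent Events for Long-Running Jobs

## 1. Executive Summary
Two to four sentences summarizing the key findings from the provided research.

## 2. Key Findings
- First key point synthesized from the research context.
- Second key point.

## 3. Detailed Analysis
### First Facet of the Topic
...

## 4. Practical Applications / Implications
...

## 5. Limitations and Caveats
...
```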
  • Core helper function that performs the actual Perplexity API call for the initial research query.
    export async function performResearchQuery(query: string, config: OpenRouterConfig): Promise<string> {
      const logicalTaskName = 'research_query';
      logger.debug({ query, model: config.perplexityModel }, "Performing Perplexity research query"); // Keep original log for context
    
      // Check for API key first
      if (!config.apiKey) {
        throw new ConfigurationError("OpenRouter API key (OPENROUTER_API_KEY) is not configured.");
      }
    
      // Select the model using the utility function
      const defaultModel = config.perplexityModel || "perplexity/sonar"; // Use configured perplexity model as default
      const modelToUse = selectModelForTask(config, logicalTaskName, defaultModel);
    
      try {
        const response = await axios.post(
          `${config.baseUrl}/chat/completions`,
          {
            model: modelToUse, // Use the dynamically selected model
            messages: [
              { role: "system", content: "You are a sophisticated AI research assistant using Perplexity Sonar Deep Research. Provide comprehensive, accurate, and up-to-date information. Research the user's query thoroughly." },
              { role: "user", content: query }
            ],
            max_tokens: 8000, // Increased from 4000 to handle larger research responses
            temperature: 0.1
          },
          {
            headers: {
              "Content-Type": "application/json",
              "Authorization": `Bearer ${config.apiKey}`,
              "HTTP-Referer": "https://vibe-coder-mcp.local" // Optional
            },
            timeout: 90000 // Increased timeout for potentially deeper research (90s)
          }
        );
    
        if (response.data?.choices?.[0]?.message?.content) {
          logger.debug({ query, modelUsed: modelToUse }, "Research query successful");
          return response.data.choices[0].message.content.trim();
        } else {
          logger.warn({ query, responseData: response.data, modelUsed: modelToUse }, "Received empty or unexpected response structure from research call");
          // Throw specific ParsingError
          throw new ParsingError(
            "Invalid API response structure received from research call",
            { query, responseData: response.data, modelUsed: modelToUse }
          );
        }
      } catch (error) {
        logger.error({ err: error, query, modelUsed: modelToUse }, "Research API call failed");
    
        if (axios.isAxiosError(error)) {
          const axiosError = error as AxiosError;
          const status = axiosError.response?.status;
          const responseData = axiosError.response?.data;
          const apiMessage = `Research API Error: Status ${status || 'N/A'}. ${axiosError.message}`;
          // Throw specific ApiError
          throw new ApiError(
            apiMessage,
            status,
            { query, modelUsed: modelToUse, responseData },
            axiosError // Pass original AxiosError
          );
        } else if (error instanceof AppError) {
          // Re-throw known AppErrors (like ParsingError from above)
          throw error;
        } else if (error instanceof Error) {
          // Wrap other standard errors
          throw new AppError(
            `Research failed: ${error.message}`,
            { query, modelUsed: modelToUse },
            error // Pass original Error
          );
        } else {
          // Handle cases where a non-Error was thrown
          throw new AppError(
            `Unknown error during research.`,
            { query, modelUsed: modelToUse, thrownValue: String(error) }
          );
        }
      }
    }
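The helper throws four error types: ConfigurationError, ParsingError, ApiError, and a generic AppError; their definitions are not included on this page. A minimal hierarchy consistent with the constructor calls above (field names such as context and original are assumptions inferred from usage) might be:

```typescript
// Assumed shapes, inferred from how the errors are constructed in performResearchQuery.
class AppError extends Error {
  constructor(
    message: string,
    public readonly context?: Record<string, unknown>,
    public readonly original?: Error // the wrapped lower-level error, if any
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class ConfigurationError extends AppError {}
class ParsingError extends AppError {}

// ApiError additionally records the HTTP status from the failed call.
class ApiError extends AppError {
  constructor(
    message: string,
    public readonly status?: number,
    context?: Record<string, unknown>,
    original?: Error
  ) {
    super(message, context, original);
  }
}
```

This layering is what lets the catch block in the helper re-throw any known AppError untouched while wrapping unknown errors exactly once.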
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'enhances the result' but doesn't explain what this entails—whether it involves summarization, citation, formatting, or other processing. It also omits details like rate limits, authentication needs, or potential side effects, leaving significant gaps for an AI agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded in a single sentence, efficiently stating the core action and method. There's no wasted verbiage, and it directly addresses the tool's function. It could be slightly better structured by separating purpose from enhancement details, but it remains clear and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a research tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'enhances the result' means, the format or depth of output, or any behavioral traits. For a tool that likely produces rich, variable outputs, this lack of detail makes it inadequate for an AI agent to use effectively without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal semantic context beyond the input schema, which has 100% coverage for the single parameter 'query'. It implies the parameter is a research topic but doesn't elaborate on format, scope, or examples. Since schema coverage is high, the baseline is 3, but the description doesn't compensate with additional insights like expected query types or limitations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Performs deep research on a given topic using Perplexity Sonar and enhances the result.' It specifies the verb ('performs deep research'), resource ('topic'), and method ('using Perplexity Sonar'), distinguishing it from sibling tools like 'generate-prd' or 'analyze-dependencies'. However, it doesn't explicitly differentiate the tool from similar tools outside the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention specific contexts, prerequisites, or exclusions. For example, it doesn't clarify if this is for technical research, market analysis, or general inquiries, nor does it compare to siblings like 'process-request' or 'generate-task-list' that might overlap in information gathering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
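Taken together, the low-scoring dimensions (Behavior, Completeness, Usage Guidelines) point toward a fuller description. One possible rewrite that discloses side effects, auth requirements, and usage boundaries, with wording that is illustrative rather than the maintainer's:

```markdown
Performs deep research on a technical topic using Perplexity Sonar, then
synthesizes the findings into a structured Markdown report (executive summary,
key findings, detailed analysis, limitations). Runs as a background job: the
call returns a job ID immediately, and the finished report is also written to
a file in the configured research directory. Requires an OpenRouter API key
(OPENROUTER_API_KEY) and makes outbound API calls. Use it for open-ended
technical research; for project-scoped planning, prefer tools such as
generate-task-list.
```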
