perform_research

Research architectural decisions by analyzing project files, knowledge graphs, and environment resources, with web search as fallback when confidence is low.

Instructions

Perform research using cascading sources: project files → knowledge graph → environment resources → web search (fallback)
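The cascade can be sketched as a loop that stops as soon as a source yields sufficient confidence, and otherwise falls through to a web-search recommendation. A minimal illustration only, using a hypothetical `cascade` helper and `SourceResult` shape that are not part of the actual `ResearchOrchestrator` API:

```typescript
// Minimal sketch of the cascading-source strategy (hypothetical; not the
// actual ResearchOrchestrator implementation).
type SourceResult = { type: string; confidence: number };

// Walk sources in priority order; stop early once the best confidence so far
// meets the threshold, otherwise flag a web-search fallback.
function cascade(
  results: SourceResult[],
  threshold = 0.6
): { confidence: number; needsWebSearch: boolean } {
  let best = 0;
  for (const r of results) {
    best = Math.max(best, r.confidence);
    if (best >= threshold) {
      return { confidence: best, needsWebSearch: false };
    }
  }
  return { confidence: best, needsWebSearch: true };
}
```

With the default threshold of 0.6, a strong project-file match short-circuits the cascade, while weak matches across every source trigger the web-search recommendation.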

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| question | Yes | The research question to answer | |
| projectPath | No | Path to the project directory | `process.cwd()` |
| adrDirectory | No | Directory containing ADR files | `docs/adrs` |
| confidenceThreshold | No | Minimum confidence threshold (0-1) before suggesting web search | `0.6` |
| performWebSearch | No | Enable web search recommendations when confidence is low | `true` |
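For illustration, arguments to this tool take the following shape (the question text and project path here are made-up examples):

```typescript
// Example argument object matching the input schema above.
const researchArgs = {
  question: 'Why was PostgreSQL chosen over MongoDB?', // required
  projectPath: '/path/to/project', // optional; defaults to process.cwd()
  adrDirectory: 'docs/adrs', // optional; default shown
  confidenceThreshold: 0.6, // optional; default shown
  performWebSearch: true, // optional; default shown
};
```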

Implementation Reference

  • Core handler function `performResearch`. It executes research across cascading sources (project files → knowledge graph → environment → web search), formats a structured Markdown response with confidence scores and per-source details, reports progress, and saves the research context for future sessions.
    export async function performResearch(
      args: {
        question: string;
        projectPath?: string;
        adrDirectory?: string;
        confidenceThreshold?: number;
        performWebSearch?: boolean;
      },
      context?: ToolContext
    ): Promise<any> {
      const {
        question,
        projectPath = process.cwd(),
        adrDirectory = 'docs/adrs',
        confidenceThreshold = 0.6,
        performWebSearch = true,
      } = args;
    
      if (!question || question.trim().length === 0) {
        throw new McpAdrError('Research question is required', 'INVALID_INPUT');
      }
    
      try {
        context?.info(`🔍 Starting research: ${question}`);
        context?.report_progress(0, 100);
    
        // Create research orchestrator
        const orchestrator = new ResearchOrchestrator(projectPath, adrDirectory);
        orchestrator.setConfidenceThreshold(confidenceThreshold);
    
        context?.info('📁 Searching project files...');
        context?.report_progress(25, 100);
    
        // Perform research (orchestrator handles: files → knowledge graph → environment → web)
        context?.info('📊 Querying knowledge graph and environment resources...');
        context?.report_progress(50, 100);
    
        const research = await orchestrator.answerResearchQuestion(question);
    
        context?.info('🌐 Analyzing results and preparing response...');
        context?.report_progress(75, 100);
    
        // Format response
        let response = `# Research Results: ${question}
    
    ## Summary
    ${research.answer || 'No conclusive answer found from available sources.'}
    
    ## Confidence Score: ${(research.confidence * 100).toFixed(1)}%
    
    ## Sources Consulted
    `;
    
        // Add source details
        if (research.sources.length === 0) {
          response += '\n*No relevant sources found*\n';
        } else {
          for (const source of research.sources) {
            response += `\n### ${formatSourceName(source.type)}
    - **Confidence**: ${(source.confidence * 100).toFixed(1)}%
    - **Timestamp**: ${source.timestamp}
    `;
    
            // Add source-specific details
            if (source.type === 'project_files') {
              const files = source.data.files || [];
              response += `- **Files Found**: ${files.length}\n`;
    
              if (files.length > 0) {
                response += '\n**Relevant Files**:\n';
                files.slice(0, 10).forEach((file: string) => {
                  const relevance = source.data.relevance?.[file];
                  response += `- \`${file}\`${relevance ? ` (relevance: ${(relevance * 100).toFixed(0)}%)` : ''}\n`;
                });
    
                if (files.length > 10) {
                  response += `\n*... and ${files.length - 10} more files*\n`;
                }
              }
            }
    
            if (source.type === 'knowledge_graph') {
              const nodes = source.data.nodes || [];
              response += `- **Related ADRs**: ${nodes.length}\n`;
            }
    
            if (source.type === 'environment') {
              const capabilities = source.data.capabilities || [];
              response += `- **Available Capabilities**: ${capabilities.join(', ')}\n`;
    
              if (source.data.data?.length > 0) {
                response += '\n**Environment Data**:\n';
                source.data.data.forEach((cap: any) => {
                  response += `- **${cap.capability}**: ${cap.found ? '✅ Data found' : '❌ No data'}\n`;
                });
              }
            }
          }
        }
    
        // Web search recommendation
        if (research.needsWebSearch && performWebSearch) {
          response += `
    
    ## 🌐 Web Search Recommended
    
    Confidence is below threshold (${(confidenceThreshold * 100).toFixed(0)}%).
    Consider performing a web search for additional information:
    
    **Suggested search queries**:
    ${generateSearchQueries(question)
      .map(q => `- "${q}"`)
      .join('\n')}
    `;
        }
    
        // Metadata
        response += `
    
    ## Research Metadata
    - **Duration**: ${research.metadata.duration}ms
    - **Sources Queried**: ${research.metadata.sourcesQueried.join(', ')}
    - **Files Analyzed**: ${research.metadata.filesAnalyzed}
    - **Overall Confidence**: ${(research.confidence * 100).toFixed(1)}%
    
    ## Next Steps
    
    `;
    
        if (research.confidence >= 0.8) {
          response += `✅ High confidence answer. You can proceed with this information.
    `;
        } else if (research.confidence >= 0.6) {
          response += `⚠️ Moderate confidence. Consider validating findings with additional sources.
    `;
        } else {
          response += `❌ Low confidence. Web search or manual research recommended.
    `;
        }
    
        // Recommendations based on sources
        if (research.sources.some(s => s.type === 'project_files')) {
          response += `
    ### Recommended Actions
    1. Review the identified project files for detailed implementation information
    2. Check for any related configuration files or documentation
    3. Consider creating or updating ADRs to document findings
    `;
        }
    
        if (research.sources.some(s => s.type === 'environment')) {
          response += `
    ### Environment Insights
    - Live environment data is available for verification
    - Consider running environment analysis tools for more details
    - Check environment configuration against ADR requirements
    `;
        }
    
        context?.info('✅ Research complete!');
        context?.report_progress(100, 100);
    
        // Save research context for future sessions
        try {
          const contextManager = new ToolContextManager(projectPath);
          await contextManager.initialize();
    
          const contextDoc: ToolContextDocument = {
            metadata: {
              toolName: 'perform_research',
              toolVersion: '2.0.0',
              generated: new Date().toISOString(),
              projectPath,
              projectName: path.basename(projectPath),
              status: research.confidence >= confidenceThreshold ? 'success' : 'partial',
              confidence: research.confidence * 100,
            },
            quickReference: `Research: "${question}" - ${(research.confidence * 100).toFixed(0)}% confidence. Sources: ${research.sources.map(s => formatSourceName(s.type)).join(', ')}`,
            executionSummary: {
              status: `Research completed with ${(research.confidence * 100).toFixed(0)}% confidence`,
              confidence: research.confidence * 100,
              keyFindings: [
                `Question: ${question}`,
                `Confidence: ${(research.confidence * 100).toFixed(1)}%`,
                `Sources consulted: ${research.metadata.sourcesQueried.join(', ')}`,
                `Files analyzed: ${research.metadata.filesAnalyzed}`,
                `Duration: ${research.metadata.duration}ms`,
              ],
            },
            detectedContext: {
              question,
              answer: research.answer,
              confidence: research.confidence,
              sources: research.sources.map(s => ({
                type: s.type,
                confidence: s.confidence,
                timestamp: s.timestamp,
                dataType: s.data ? Object.keys(s.data).join(', ') : 'none',
              })),
              needsWebSearch: research.needsWebSearch,
            },
            keyDecisions: [
              {
                decision: `Research approach: ${research.metadata.sourcesQueried.join(' → ')}`,
                rationale: `Cascading research strategy from local project files to external sources`,
                alternatives: ['Direct web search', 'Manual code review'],
              },
            ],
            learnings: {
              successes:
                research.confidence >= 0.8
                  ? ['High confidence research results obtained', 'Sufficient local context available']
                  : research.confidence >= 0.6
                    ? ['Moderate confidence results', 'Some local context found']
                    : [],
              failures:
                research.confidence < 0.6
                  ? [
                      'Low confidence - insufficient local data',
                      'May need web search or manual research',
                    ]
                  : [],
              recommendations:
                research.confidence >= 0.8
                  ? ['Results can be used with confidence', 'Consider documenting findings in ADR']
                  : research.confidence >= 0.6
                    ? [
                        'Validate findings with additional sources',
                        'Consider cross-referencing with documentation',
                      ]
                    : ['Perform web search for additional context', 'Manual research recommended'],
              environmentSpecific: [],
            },
            relatedDocuments: {
              adrs: [],
              configs: [],
              otherContexts: [],
            },
            rawData: {
              research: {
                answer: research.answer,
                confidence: research.confidence,
                sources: research.sources,
                needsWebSearch: research.needsWebSearch,
                metadata: research.metadata,
              },
            },
          };
    
          await contextManager.saveContext('research', contextDoc);
          context?.info('💾 Research context saved for future reference');
        } catch (contextError) {
          // Don't fail the research if context saving fails
          context?.info(`⚠️ Failed to save research context: ${contextError}`);
        }
    
        return {
          content: [
            {
              type: 'text',
              text: response,
            },
          ],
        };
      } catch (error) {
        throw new McpAdrError(
          `Failed to perform research: ${error instanceof Error ? error.message : String(error)}`,
          'RESEARCH_ERROR'
        );
      }
    }
  • Central tool catalog registration defining metadata, input schema (JSON Schema), token cost estimates, category, and CE-MCP support for the 'perform_research' tool.
    TOOL_CATALOG.set('perform_research', {
      name: 'perform_research',
      shortDescription: 'Perform research on a topic',
      fullDescription:
        'Performs comprehensive research on a given topic using web search and analysis.',
      category: 'research',
      complexity: 'complex',
      tokenCost: { min: 4000, max: 10000 },
      hasCEMCPDirective: true, // Phase 4.2: CE-MCP directive added
      relatedTools: ['incorporate_research', 'generate_research_questions'],
      keywords: ['research', 'search', 'investigate', 'topic'],
      requiresAI: true,
      inputSchema: {
        type: 'object',
        properties: {
          topic: { type: 'string', description: 'Research topic' },
          depth: { type: 'string', enum: ['quick', 'standard', 'deep'] },
          outputFormat: { type: 'string', enum: ['summary', 'detailed', 'structured'] },
        },
        required: ['topic'],
      },
    });
  • Helper function to format source type names for display in research results (e.g., 'project_files' → '📁 Project Files').
    function formatSourceName(sourceType: string): string {
      const names: Record<string, string> = {
        project_files: '📁 Project Files',
        knowledge_graph: '🧠 Knowledge Graph',
        environment: '🔧 Environment Resources',
        web_search: '🌐 Web Search',
      };
    
      return names[sourceType] || sourceType;
    }
  • Helper function to generate suggested web search queries from the research question, including variations and technology-specific enhancements.
    function generateSearchQueries(question: string): string[] {
      const queries: string[] = [question];
    
      // Add variations
      const questionLower = question.toLowerCase();
    
      if (questionLower.includes('what')) {
        queries.push(question.replace(/^what/i, 'how to'));
      }
    
      if (questionLower.includes('how')) {
        queries.push(question.replace(/^how/i, 'best practices for'));
      }
    
      // Add context-specific queries
      if (questionLower.includes('kubernetes') || questionLower.includes('k8s')) {
        queries.push(`${question} kubernetes best practices`);
      }
    
      if (questionLower.includes('docker')) {
        queries.push(`${question} docker production`);
      }
    
      if (questionLower.includes('openshift')) {
        queries.push(`${question} openshift documentation`);
      }
    
      return queries.slice(0, 3); // Limit to top 3
    }
  • CE-MCP (Cost-Efficient MCP) registration and directive creator for 'perform_research' tool, providing a state machine directive instead of direct execution to reduce token costs.
    case 'perform_research':
      return createPerformResearchDirective(args as unknown as CEMCPPerformResearchArgs);
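The directive pattern can be sketched as returning a plan for the client to execute rather than running the research server-side. The following is a hypothetical illustration only; none of these field names are confirmed by the actual `createPerformResearchDirective` implementation:

```typescript
// Hypothetical sketch of a directive-style response: instead of executing the
// research, return a small state-machine description the client can follow.
interface ResearchDirective {
  tool: string;
  steps: string[]; // ordered actions for the client to perform
  fallback: string; // what to do when confidence stays low
}

function sketchDirective(question: string): ResearchDirective {
  return {
    tool: 'perform_research',
    steps: [
      `Search project files for: ${question}`,
      'Query the knowledge graph for related ADRs',
      'Check environment resources',
    ],
    fallback: 'Recommend web search queries',
  };
}
```

Returning a directive like this keeps the token cost on the server side small, since the full research output is produced by the client rather than embedded in the tool response.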
