
research

Investigate complex queries using multi-agent parallel processing with web search to synthesize comprehensive research answers.

Input Schema

Name              Required  Description                                     Default
query             Yes       The research query or question to investigate   -
maxSearchResults  No        Maximum web search results per agent            5

Implementation Reference

  • src/index.ts:42-84 (registration)
    Registration of the "research" MCP tool, including schema and handler function that orchestrates multi-agent research.
    server.tool(
      "research",
      {
        query: z.string().describe("The research query or question to investigate"),
        maxSearchResults: z
          .number()
          .min(1)
          .max(20)
          .default(5)
          .describe("Maximum web search results per agent"),
      },
      async ({ query, maxSearchResults }) => {
        try {
          // Delegate to the multi-agent orchestrator.
          const result = await orchestrate(query, {
            agents: agentSpecs,
            temperature: 0.7,
            maxSearchResults: maxSearchResults ?? 5,
            maxIterations: 10,
          });
          return {
            content: [{ type: "text" as const, text: result }],
          };
        } catch (error) {
          const errorMessage =
            error instanceof Error ? error.message : String(error);
          // Surface failures as tool output instead of throwing.
          return {
            content: [
              { type: "text" as const, text: `Research failed: ${errorMessage}` },
            ],
            isError: true,
          };
        }
      }
    );
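  • Example invocation from an MCP client. This is a hypothetical sketch, not part of the server source: it assumes the official TypeScript MCP SDK over a stdio transport and that the built server entry point is dist/index.js.
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Hypothetical setup; command/args depend on how the server is installed.
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/index.js"],
    });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Call the research tool; maxSearchResults is optional (defaults to 5).
    const result = await client.callTool({
      name: "research",
      arguments: {
        query: "What are the tradeoffs of multi-agent LLM systems?",
        maxSearchResults: 10,
      },
    });
    console.log(result.content);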
  • Core handler logic for the research tool: orchestrates multiple LLM agents, generates sub-questions, performs web searches, and synthesizes the final response.
    export async function orchestrate(
      query: string,
      config: OrchestratorConfig
    ): Promise<string> {
      const { agents, temperature, maxSearchResults } = config;

      if (agents.length === 0) {
        throw new Error("At least one agent must be specified");
      }

      // Single agent mode - just run directly
      if (agents.length === 1) {
        const result = await runAgent(
          0,
          agents[0],
          query,
          temperature,
          maxSearchResults
        );
        if (result.status === "error") {
          throw new Error(result.error);
        }
        return result.response;
      }

      // Multi-agent mode
      // Generate questions using the first agent
      const questions = await generateQuestions(
        query,
        agents.length,
        agents[0],
        temperature
      );

      // Run all agents in parallel
      const agentPromises = agents.map((agentSpec, i) =>
        runAgent(i, agentSpec, questions[i], temperature, maxSearchResults)
      );
      const results = await Promise.all(agentPromises);

      // Synthesize responses using the first agent
      const finalResponse = await synthesizeResponses(
        results,
        agents[0],
        temperature
      );
      return finalResponse;
    }
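  • synthesizeResponses is called by orchestrate but not excerpted on this page. A plausible sketch, assuming it reuses the same parseAgentSpec and callLLM helpers and folds each agent's question/answer pair into a single synthesis prompt:
    // Hypothetical sketch; the real implementation may differ.
    async function synthesizeResponses(
      results: AgentResult[],
      agentSpec: string,
      temperature: number
    ): Promise<string> {
      const { provider, model } = parseAgentSpec(agentSpec);

      // Combine successful agent answers into a single context block.
      const combined = results
        .filter((r) => r.status === "success")
        .map((r) => `Question: ${r.question}\nAnswer: ${r.response}`)
        .join("\n\n---\n\n");

      const response = await callLLM(
        provider,
        model,
        [
          {
            role: "user",
            content: `Synthesize these research findings into one comprehensive answer:\n\n${combined}`,
          },
        ],
        temperature
      );
      return response.content;
    }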
  • Helper function that runs a single agent: performs a web search, then calls the LLM with the system prompt and the formatted search results.
    async function runAgent(
      agentId: number,
      agentSpec: string,
      question: string,
      temperature: number,
      maxSearchResults: number
    ): Promise<AgentResult> {
      const { provider, model } = parseAgentSpec(agentSpec);

      try {
        // Perform web search
        const searchResults = await searchWeb(question, maxSearchResults);
        const formattedResults = formatSearchResults(searchResults);

        // Build messages
        const messages: Message[] = [
          { role: "system", content: SYSTEM_PROMPT },
          {
            role: "user",
            content: `Question: ${question}\n\nSearch Results:\n${formattedResults}\n\nPlease provide a comprehensive answer based on the search results above.`,
          },
        ];

        // Call LLM
        const response = await callLLM(provider, model, messages, temperature);

        return {
          agentId,
          provider,
          model,
          question,
          response: response.content,
          status: "success",
        };
      } catch (error) {
        return {
          agentId,
          provider,
          model,
          question,
          response: "",
          status: "error",
          error: error instanceof Error ? error.message : String(error),
        };
      }
    }
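  • parseAgentSpec is used by every helper above but not excerpted here. A minimal sketch, assuming agent specs are "provider:model" strings (e.g. "openai:gpt-4o"); splitting on the first colon keeps model names that themselves contain colons intact:
    // Hypothetical sketch; the real spec format may differ.
    function parseAgentSpec(agentSpec: string): {
      provider: string;
      model: string;
    } {
      const sep = agentSpec.indexOf(":");
      if (sep === -1) {
        throw new Error(
          `Invalid agent spec "${agentSpec}"; expected "provider:model"`
        );
      }
      return {
        provider: agentSpec.slice(0, sep),
        model: agentSpec.slice(sep + 1),
      };
    }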
  • Helper to generate diverse sub-questions for multi-agent research using an LLM.
    async function generateQuestions(
      query: string,
      numAgents: number,
      agentSpec: string,
      temperature: number
    ): Promise<string[]> {
      const { provider, model } = parseAgentSpec(agentSpec);
      const prompt = QUESTION_GENERATION_PROMPT.replace("{query}", query).replace(
        "{numAgents}",
        numAgents.toString()
      );

      try {
        const response = await callLLM(
          provider,
          model,
          [{ role: "user", content: prompt }],
          temperature
        );

        // Parse JSON response
        const jsonMatch = response.content.match(/\[[\s\S]*\]/);
        if (jsonMatch) {
          const questions = JSON.parse(jsonMatch[0]) as string[];
          if (questions.length === numAgents) {
            return questions;
          }
        }
      } catch (error) {
        console.error("Question generation failed:", error);
      }

      // Fallback questions
      return [
        `Research comprehensive information about: ${query}`,
        `Analyze and provide insights about: ${query}`,
        `Find alternative perspectives on: ${query}`,
        `Verify and cross-check facts about: ${query}`,
      ].slice(0, numAgents);
    }
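  • QUESTION_GENERATION_PROMPT is not shown on this page. Whatever its exact wording, it must contain the {query} and {numAgents} placeholders (each exactly once, since String.replace with a string argument only substitutes the first occurrence) and should request a JSON array so the /\[[\s\S]*\]/ extraction above can find one. A hypothetical example:
    // Hypothetical sketch; the real prompt text may differ.
    const QUESTION_GENERATION_PROMPT = `Generate exactly {numAgents} distinct research sub-questions, each covering a different angle of the following query:

    {query}

    Respond with only a JSON array of strings, e.g. ["question 1", "question 2"].`;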
  • Input schema for the research tool using Zod validation.
    {
      query: z.string().describe("The research query or question to investigate"),
      maxSearchResults: z
        .number()
        .min(1)
        .max(20)
        .default(5)
        .describe("Maximum web search results per agent"),
    },
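  • The SDK validates tool arguments against this shape; wrapping it in z.object() the same way shows the behavior the schema encodes. A small sketch using only standard Zod calls:
    import { z } from "zod";

    const researchInput = z.object({
      query: z.string(),
      maxSearchResults: z.number().min(1).max(20).default(5),
    });

    // .default(5) fills in maxSearchResults when the caller omits it,
    // which makes the handler's `maxSearchResults ?? 5` fallback redundant
    // (but harmless).
    const parsed = researchInput.parse({ query: "history of MCP" });
    console.log(parsed.maxSearchResults); // 5

    // Out-of-range values are rejected:
    // researchInput.parse({ query: "x", maxSearchResults: 50 }); // throws ZodError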


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Infatoshi/thinkingcap'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.