research_topic

Conduct comprehensive research by automatically searching multiple engines and sources, then cross-referencing results to validate information and identify key findings with confidence levels.

Instructions

Deep research with multiple searches and source validation.

Use this when:

  • User wants comprehensive research or briefing

  • Need to validate information across multiple sources

  • Looking for in-depth analysis

  • User asks to "research", "investigate", or "give me a briefing"

This tool runs 2-6 searches automatically using different strategies:

  • Searches multiple engines (Google, Bing, DuckDuckGo, Brave, Wikipedia)

  • Searches both general web and news sources

  • Deduplicates results across all searches

  • Returns 15-50 UNIQUE sources depending on depth

Perfect for creating comprehensive briefings with validated information.

Parameters:

  • query* - Research topic

  • depth - Research thoroughness:
      • "quick" - 2 searches, ~15 unique sources
      • "standard" - 4 searches, ~30 unique sources (recommended)
      • "deep" - 6 searches, ~50 unique sources

CRITICAL - After receiving sources, you MUST:

  1. Read and analyze ALL sources provided (titles, URLs, content snippets)

  2. Cross-reference claims across multiple sources

  3. Identify facts confirmed by many sources (high confidence)

  4. Note contradictions or single-source claims (lower confidence)

  5. Synthesize findings into a comprehensive briefing with:
       • Executive summary of key findings
       • Main facts/developments (note how many sources confirm each)
       • Contradictions or uncertainties
       • Source quality assessment (which engines found what)

  6. DO NOT just list the sources - you must analyze, validate, and synthesize them into actionable intelligence

Returns: Research briefing with analyzed, validated, cross-referenced information

Input Schema

| Name  | Required | Description                | Default  |
| ----- | -------- | -------------------------- | -------- |
| depth | No       | Research depth             | standard |
| query | Yes      | Research topic or question | —        |
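
For illustration, a tool call that sets both parameters might pass arguments like the sketch below; the query value is hypothetical, and omitting "depth" falls back to "standard":

```python
# Hypothetical tool-call arguments matching the schema above.
arguments = {
    "query": "impact of the EU AI Act on open-source models",  # required
    "depth": "deep",  # optional: "quick" | "standard" | "deep"
}
```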

Implementation Reference

  • Core handler implementing the research_topic tool logic: it executes multiple targeted searches via SearXNG according to the specified depth, deduplicates results across sources, and formats the gathered research data together with analysis instructions for the AI model.
```python
def research_topic(
    self,
    query: str,
    depth: Literal["quick", "standard", "deep"] = "standard",
) -> List[TextContent]:
    """Deep research with multiple searches and deduplication.

    Performs multiple searches with different strategies to gather
    comprehensive information from diverse sources. Automatically
    deduplicates results.

    Args:
        query: Research topic
        depth: Research depth
            - quick: 2 searches, ~15 unique results
            - standard: 4 searches, ~30 unique results
            - deep: 6 searches, ~50 unique results

    Returns:
        Deduplicated and aggregated research results
    """
    self.logger.info(f"Starting {depth} research on: {query}")

    all_results = []
    search_strategies = []

    # Define search strategies based on depth
    if depth == "quick":
        search_strategies = [
            {"category": "general", "engines": None},
            {"category": "news", "engines": None},
        ]
        max_per_search = 10
    elif depth == "standard":
        search_strategies = [
            {"category": "general", "engines": "google,bing"},
            {"category": "general", "engines": "duckduckgo,brave"},
            {"category": "news", "engines": None},
            {"category": "general", "engines": "wikipedia"},
        ]
        max_per_search = 10
    else:  # deep
        search_strategies = [
            {"category": "general", "engines": "google,bing"},
            {"category": "general", "engines": "duckduckgo,brave"},
            {"category": "news", "engines": "google,bing"},
            {"category": "news", "engines": "duckduckgo"},
            {"category": "general", "engines": "wikipedia"},
            {"category": "general", "engines": None},  # All engines
        ]
        max_per_search = 15

    # Execute all searches
    for strategy in search_strategies:
        try:
            results = self._search(
                query,
                category=strategy["category"],
                engines=strategy["engines"],
            )
            all_results.extend(results.get("results", [])[:max_per_search])
        except Exception as e:
            self.logger.warning(f"Search strategy failed: {e}")
            continue

    # Deduplicate
    unique_results = self._deduplicate_results(all_results)

    # Format output - present as raw material to analyze, not numbered references
    output = f"🔬 RESEARCH DATA for analysis: {query}\n"
    output += f"📊 {len(unique_results)} unique sources gathered from {len(search_strategies)} search strategies\n\n"
    output += f"{'='*80}\n"
    output += f"RAW SOURCE MATERIAL (analyze and synthesize - do NOT list to user):\n"
    output += f"{'='*80}\n\n"

    for result in unique_results[:25]:
        output += f"• **{result.get('title', 'No title')}**\n"
        output += f"  URL: {result.get('url', '')}\n"
        if result.get('content'):
            content = result['content'][:100] + "..." if len(result['content']) > 100 else result['content']
            output += f"  Content: {content}\n"
        if result.get('publishedDate'):
            output += f"  Date: {result['publishedDate']}\n"
        output += "\n"

    if not unique_results:
        output += "No results found. Try a different query.\n"

    output += f"\n{'='*80}\n"
    output += f"⚠️ YOUR TASK: ANALYZE & SYNTHESIZE (NOT list sources!)\n"
    output += f"{'='*80}\n\n"
    output += f"You have {min(len(unique_results), 25)} sources above as RAW MATERIAL.\n\n"
    output += f"REQUIRED ANALYSIS PROCESS:\n"
    output += f"1. Read all source titles and content snippets above\n"
    output += f"2. Extract key claims and facts from the content\n"
    output += f"3. Cross-reference: What do MULTIPLE sources say? (HIGH confidence)\n"
    output += f"4. What's only in ONE source? (LOW confidence - note as unverified)\n"
    output += f"5. Any contradictions between sources? (flag for user)\n\n"
    output += f"REQUIRED OUTPUT FORMAT:\n"
    output += f"- Executive summary (2-3 sentences)\n"
    output += f"- Key findings with confidence indicators:\n"
    output += f"  ✓ HIGH (5+ sources agree)\n"
    output += f"  ~ MEDIUM (2-4 sources)\n"
    output += f"  ? LOW (single source only)\n"
    output += f"- Contradictions/uncertainties if any\n"
    output += f"- Brief conclusion\n\n"
    output += f"DO NOT output source URLs or numbered lists - synthesize into narrative!\n"
    output += f"{'='*80}\n"

    return [TextContent(type="text", text=output)]
```
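
    The handler delegates the actual engine queries to a `_search` helper that is not shown on this page. A minimal sketch of what such a helper might look like, assuming the class stores a SearXNG base URL (here called `self.searxng_url`, a hypothetical attribute name) and the instance exposes the standard `/search` JSON API (`q`, `categories`, `engines`, `format=json`):

```python
import requests
from typing import Dict, Optional

def _search(
    self,
    query: str,
    category: str = "general",
    engines: Optional[str] = None,
) -> Dict:
    """Query a SearXNG instance's JSON API and return the parsed response.

    Sketch only: the real implementation may differ in base-URL handling,
    timeouts, and error reporting.
    """
    params = {"q": query, "categories": category, "format": "json"}
    if engines:
        # SearXNG accepts a comma-separated engine list, e.g. "google,bing"
        params["engines"] = engines
    response = requests.get(f"{self.searxng_url}/search", params=params, timeout=15)
    response.raise_for_status()
    # Expected shape: {"results": [{"title": ..., "url": ..., "content": ...}, ...]}
    return response.json()
```

    Note that a SearXNG instance only serves this endpoint when the JSON output format is enabled in its settings.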
  • MCP tool registration for 'research_topic'. Uses the FastMCP decorator to register the tool with its schema (input parameters: query str, depth Literal) and description string, delegating execution to the SearchTools instance.
```python
@self.mcp.tool(description=RESEARCH_TOPIC_DESC)
def research_topic(
    query: Annotated[str, Field(description="Research topic or question")],
    depth: Annotated[
        Literal["quick", "standard", "deep"],
        Field(description="Research depth"),
    ] = "standard",
):
    return self.search_tools.research_topic(query, depth)
```
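
    For context, invoking the registered tool from a client through the official MCP Python SDK could look like the sketch below; the launch command (`python server.py`) and the query are placeholders, not the project's documented entry point:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Placeholder launch command - substitute the real SearxngMCP entry point.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "research_topic",
                arguments={"query": "solid-state battery progress", "depth": "quick"},
            )
            # The tool returns a single TextContent block with the research data
            print(result.content[0].text)

asyncio.run(main())
```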
  • Tool description string defining usage guidelines, parameters, and expected behavior for the research_topic tool, referenced in the MCP registration.
```python
RESEARCH_TOPIC_DESC = """Deep research with multiple searches and source validation.

Use this when:
- User wants comprehensive research or briefing
- Need to validate information across multiple sources
- Looking for in-depth analysis
- User asks to "research", "investigate", or "give me a briefing"

This tool runs 2-6 searches automatically using different strategies:
- Searches multiple engines (Google, Bing, DuckDuckGo, Brave, Wikipedia)
- Searches both general web and news sources
- Deduplicates results across all searches
- Returns 15-50 UNIQUE sources depending on depth

Perfect for creating comprehensive briefings with validated information.

Parameters:
query* - Research topic
depth - Research thoroughness:
  • "quick" - 2 searches, ~15 unique sources
  • "standard" - 4 searches, ~30 unique sources (recommended)
  • "deep" - 6 searches, ~50 unique sources

CRITICAL - After receiving sources, you MUST:
1. Read and analyze ALL sources provided (titles, URLs, content snippets)
2. Cross-reference claims across multiple sources
3. Identify facts confirmed by many sources (high confidence)
4. Note contradictions or single-source claims (lower confidence)
5. Synthesize findings into a comprehensive briefing with:
   • Executive summary of key findings
   • Main facts/developments (note how many sources confirm each)
   • Contradictions or uncertainties
   • Source quality assessment (which engines found what)
6. DO NOT just list the sources - you must analyze, validate, and synthesize them into actionable intelligence

Returns: Research briefing with analyzed, validated, cross-referenced information"""
```
  • Helper function used by research_topic to remove duplicate search results based on URL, ensuring unique sources across multiple searches.
```python
def _deduplicate_results(self, results: List[Dict]) -> List[Dict]:
    """Remove duplicate results by URL.

    Args:
        results: List of search results

    Returns:
        Deduplicated list
    """
    seen_urls = set()
    unique_results = []
    for result in results:
        url = result.get('url', '')
        if url and url not in seen_urls:
            seen_urls.add(url)
            unique_results.append(result)
    return unique_results
```
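
    Because deduplication is keyed on the exact URL string, cosmetically different URLs (trailing slash, uppercase host, URL fragment) still count as separate sources. A hypothetical `_normalize_url` helper, not part of the server, could tighten this before the `seen_urls` check:

```python
from urllib.parse import urlsplit, urlunsplit

def _normalize_url(url: str) -> str:
    """Collapse cosmetic URL differences before deduplication.

    Hypothetical extension: lowercases the host and drops fragments and
    trailing slashes, so http://Example.com/a/ and http://example.com/a#top
    would be treated as the same source.
    """
    scheme, netloc, path, query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc.lower(), path.rstrip("/"), query, ""))
```

    Deduplication would then key on `_normalize_url(url)` instead of the raw URL.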
