SearXNG MCP Server
netixc · Glama MCP directory

research_topic

Conduct comprehensive research by automatically searching multiple engines and sources, then analyzing and cross-referencing results to validate information and create detailed briefings.

Instructions

Deep research with multiple searches and source validation.

Use this when:

  • User wants comprehensive research or briefing

  • Need to validate information across multiple sources

  • Looking for in-depth analysis

  • User asks to "research", "investigate", or "give me a briefing"

This tool runs 2-6 searches automatically using different strategies:

  • Searches multiple engines (Google, Bing, DuckDuckGo, Brave, Wikipedia)

  • Searches both general web and news sources

  • Deduplicates results across all searches

  • Returns 15-50 UNIQUE sources depending on depth

Perfect for creating comprehensive briefings with validated information.

Parameters:

  • query* - Research topic or question

  • depth - Research thoroughness:

      • "quick" - 2 searches, ~15 unique sources

      • "standard" - 4 searches, ~30 unique sources (recommended)

      • "deep" - 6 searches, ~50 unique sources
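
The depth tiers above can be summarized as a small lookup, mirroring the strategy counts and per-search limits used in the implementation further down this page (`DEPTH_PLAN` and `upper_bound_results` are names invented for this sketch, not part of the server's API):

```python
# Illustrative mapping of depth -> (number of searches, results kept per search),
# matching the tiers described above and the implementation shown below.
DEPTH_PLAN = {
    "quick": {"searches": 2, "max_per_search": 10},
    "standard": {"searches": 4, "max_per_search": 10},
    "deep": {"searches": 6, "max_per_search": 15},
}

def upper_bound_results(depth: str) -> int:
    """Upper bound on results gathered before deduplication trims them
    down to the ~15/~30/~50 unique sources quoted above."""
    plan = DEPTH_PLAN[depth]
    return plan["searches"] * plan["max_per_search"]
```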

CRITICAL - After receiving sources, you MUST:

  1. Read and analyze ALL sources provided (titles, URLs, content snippets)

  2. Cross-reference claims across multiple sources

  3. Identify facts confirmed by many sources (high confidence)

  4. Note contradictions or single-source claims (lower confidence)

  5. Synthesize findings into a comprehensive briefing with:

       • Executive summary of key findings

       • Main facts/developments (note how many sources confirm each)

       • Contradictions or uncertainties

       • Source quality assessment (which engines found what)

  6. DO NOT just list the sources - you must analyze, validate, and synthesize them into actionable intelligence

Returns: Research briefing with analyzed, validated, cross-referenced information
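
The confidence tiers the briefing is expected to use (HIGH for 5+ agreeing sources, MEDIUM for 2-4, LOW for a single source, per the output template in the implementation below) can be sketched as a simple classifier; `confidence_label` is a hypothetical helper for illustration, not part of the server:

```python
def confidence_label(agreeing_sources: int) -> str:
    """Map how many sources confirm a claim to the briefing's confidence tiers."""
    if agreeing_sources >= 5:
        return "HIGH"    # five or more sources agree
    if agreeing_sources >= 2:
        return "MEDIUM"  # two to four sources
    return "LOW"         # single source only; note as unverified
```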

Input Schema

Name    Required  Description                 Default
-----   --------  --------------------------  --------
query   Yes       Research topic or question  (none)
depth   No        Research depth              standard
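
Given the schema above, tool-call argument payloads might look like the following (the query strings are examples only):

```python
# Example argument payloads conforming to the input schema above.
standard_call = {"query": "solid-state battery commercialization"}  # depth defaults to "standard"
deep_call = {"query": "solid-state battery commercialization", "depth": "deep"}

# The schema constrains depth to these literal values.
VALID_DEPTHS = {"quick", "standard", "deep"}
```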

Implementation Reference

  • Core implementation of the research_topic tool in the SearchTools class. Performs multiple searches based on specified depth, deduplicates results, formats raw source material, and provides instructions for further analysis.
    def research_topic(
        self,
        query: str,
        depth: Literal["quick", "standard", "deep"] = "standard"
    ) -> List[TextContent]:
        """Deep research with multiple searches and deduplication.
    
        Performs multiple searches with different strategies to gather
        comprehensive information from diverse sources. Automatically
        deduplicates results.
    
        Args:
            query: Research topic
            depth: Research depth
                - quick: 2 searches, ~15 unique results
                - standard: 4 searches, ~30 unique results
                - deep: 6 searches, ~50 unique results
    
        Returns:
            Deduplicated and aggregated research results
        """
        self.logger.info(f"Starting {depth} research on: {query}")
    
        all_results = []
        search_strategies = []
    
        # Define search strategies based on depth
        if depth == "quick":
            search_strategies = [
                {"category": "general", "engines": None},
                {"category": "news", "engines": None},
            ]
            max_per_search = 10
        elif depth == "standard":
            search_strategies = [
                {"category": "general", "engines": "google,bing"},
                {"category": "general", "engines": "duckduckgo,brave"},
                {"category": "news", "engines": None},
                {"category": "general", "engines": "wikipedia"},
            ]
            max_per_search = 10
        else:  # deep
            search_strategies = [
                {"category": "general", "engines": "google,bing"},
                {"category": "general", "engines": "duckduckgo,brave"},
                {"category": "news", "engines": "google,bing"},
                {"category": "news", "engines": "duckduckgo"},
                {"category": "general", "engines": "wikipedia"},
                {"category": "general", "engines": None},  # All engines
            ]
            max_per_search = 15
    
        # Execute all searches
        for strategy in search_strategies:
            try:
                results = self._search(
                    query,
                    category=strategy["category"],
                    engines=strategy["engines"]
                )
                all_results.extend(results.get("results", [])[:max_per_search])
            except Exception as e:
                self.logger.warning(f"Search strategy failed: {e}")
                continue
    
        # Deduplicate
        unique_results = self._deduplicate_results(all_results)
    
        # Format output - present as raw material to analyze, not numbered references
        output = f"🔬 RESEARCH DATA for analysis: {query}\n"
        output += f"📊 {len(unique_results)} unique sources gathered from {len(search_strategies)} search strategies\n\n"
        output += f"{'='*80}\n"
        output += f"RAW SOURCE MATERIAL (analyze and synthesize - do NOT list to user):\n"
        output += f"{'='*80}\n\n"
    
        for result in unique_results[:25]:
            output += f"• **{result.get('title', 'No title')}**\n"
            output += f"  URL: {result.get('url', '')}\n"
            if result.get('content'):
                content = result['content'][:100] + "..." if len(result['content']) > 100 else result['content']
                output += f"  Content: {content}\n"
    
            if result.get('publishedDate'):
                output += f"  Date: {result['publishedDate']}\n"
    
            output += "\n"
    
        if not unique_results:
            output += "No results found. Try a different query.\n"
    
        output += f"\n{'='*80}\n"
        output += f"⚠️  YOUR TASK: ANALYZE & SYNTHESIZE (NOT list sources!)\n"
        output += f"{'='*80}\n\n"
        output += f"You have {min(len(unique_results), 25)} sources above as RAW MATERIAL.\n\n"
        output += f"REQUIRED ANALYSIS PROCESS:\n"
        output += f"1. Read all source titles and content snippets above\n"
        output += f"2. Extract key claims and facts from the content\n"
        output += f"3. Cross-reference: What do MULTIPLE sources say? (HIGH confidence)\n"
        output += f"4. What's only in ONE source? (LOW confidence - note as unverified)\n"
        output += f"5. Any contradictions between sources? (flag for user)\n\n"
        output += f"REQUIRED OUTPUT FORMAT:\n"
        output += f"- Executive summary (2-3 sentences)\n"
        output += f"- Key findings with confidence indicators:\n"
        output += f"  ✓ HIGH (5+ sources agree)\n"
        output += f"  ~ MEDIUM (2-4 sources)\n"
        output += f"  ? LOW (single source only)\n"
        output += f"- Contradictions/uncertainties if any\n"
        output += f"- Brief conclusion\n\n"
        output += f"DO NOT output source URLs or numbered lists - synthesize into narrative!\n"
        output += f"{'='*80}\n"
    
        return [TextContent(type="text", text=output)]
  • MCP tool registration for 'research_topic'. Uses @mcp.tool decorator with detailed description reference and defines input parameters with Pydantic validation via Annotated types, delegating execution to SearchTools instance.
    @self.mcp.tool(description=RESEARCH_TOPIC_DESC)
    def research_topic(
        query: Annotated[str, Field(description="Research topic or question")],
        depth: Annotated[Literal["quick", "standard", "deep"], Field(description="Research depth")] = "standard"
    ):
        return self.search_tools.research_topic(query, depth)
  • Detailed description string for the research_topic tool, used in MCP registration. Includes comprehensive usage guidelines, parameter descriptions, depth options, and critical instructions for source analysis and synthesis.
    RESEARCH_TOPIC_DESC = """Deep research with multiple searches and source validation.
    
    Use this when:
    - User wants comprehensive research or briefing
    - Need to validate information across multiple sources
    - Looking for in-depth analysis
    - User asks to "research", "investigate", or "give me a briefing"
    
    This tool runs 2-6 searches automatically using different strategies:
    - Searches multiple engines (Google, Bing, DuckDuckGo, Brave, Wikipedia)
    - Searches both general web and news sources
    - Deduplicates results across all searches
    - Returns 15-50 UNIQUE sources depending on depth
    
    Perfect for creating comprehensive briefings with validated information.
    
    Parameters:
    query* - Research topic
    depth - Research thoroughness:
      • "quick" - 2 searches, ~15 unique sources
      • "standard" - 4 searches, ~30 unique sources (recommended)
      • "deep" - 6 searches, ~50 unique sources
    
    CRITICAL - After receiving sources, you MUST:
    1. Read and analyze ALL sources provided (titles, URLs, content snippets)
    2. Cross-reference claims across multiple sources
    3. Identify facts confirmed by many sources (high confidence)
    4. Note contradictions or single-source claims (lower confidence)
    5. Synthesize findings into a comprehensive briefing with:
       • Executive summary of key findings
       • Main facts/developments (note how many sources confirm each)
       • Contradictions or uncertainties
       • Source quality assessment (which engines found what)
    6. DO NOT just list the sources - you must analyze, validate, and synthesize them into actionable intelligence
    
    Returns: Research briefing with analyzed, validated, cross-referenced information"""