
deep_research_ddgs

Avoid fragmented search results by aggregating multiple DuckDuckGo queries, scoring content by relevance, and eliminating duplicates.

Instructions

Perform deep research across multiple search terms using ONLY DuckDuckGo. Aggregates results from multiple DuckDuckGo searches, scores them by relevance, and returns the most relevant content with duplicates removed.

Args:
    search_terms (List[str]): List of search terms to research. The LLM should provide multiple related search queries for comprehensive coverage.
    num_results_per_term (int): Number of results to fetch per search term.
    top_k_per_term (int): Number of top scored results to keep per search term.
    include_urls (bool): Whether to include URLs in the results.

Returns: Dict containing aggregated research results from all search terms (DuckDuckGo only), with duplicates removed.

Input Schema

Name                   Required  Description                                      Default
search_terms           Yes       List of search terms to research                 —
num_results_per_term   No        Number of results to fetch per search term       10
top_k_per_term         No        Number of top scored results to keep per term    3
include_urls           No        Whether to include URLs in the results           true
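
For illustration, a typical set of arguments might look like this (the search terms are invented; omitted fields fall back to the defaults above):

    arguments = {
        "search_terms": [
            "retrieval augmented generation overview",
            "RAG chunking strategies",
            "RAG evaluation metrics",
        ],
        "num_results_per_term": 10,  # default
        "top_k_per_term": 3,         # default
        "include_urls": True,        # default
    }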

Output Schema

No output fields are defined; the tool returns an unstructured dict (see Returns above).
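
Based on the return statement in the implementation below, the result is a plain dict shaped roughly like this (values are illustrative; `content` is markdown, per the `md_content` variable):

    result = {
        "search_terms": ["..."],              # the queries that were searched
        "backends": ["duckduckgo"],           # always DuckDuckGo for this tool
        "search_summary": {                   # results kept per term per backend
            "retrieval augmented generation overview": {"duckduckgo": 3},
        },
        "total_unique_results": 5,            # count after URL deduplication
        "content": "...",                     # fetched page content as markdown
    }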

Implementation Reference

  • The `deep_research_ddgs` MCP tool function. This is the public-facing handler registered via @mcp.tool() decorator. It delegates to `_deep_research_internal` with `backends=["duckduckgo"]`, performing deep research using only DuckDuckGo.
    def deep_research_ddgs(search_terms: List[str], num_results_per_term: int = 10, top_k_per_term: int = 3, include_urls: bool = True) -> Dict:
        """
        Perform deep research across multiple search terms using ONLY DuckDuckGo.
        Aggregates results from multiple DuckDuckGo searches, scores them by relevance,
        and returns the most relevant content with duplicates removed.
        
        Args:
            search_terms (List[str]): List of search terms to research. The LLM should provide 
                                      multiple related search queries for comprehensive coverage.
            num_results_per_term (int): Number of results to fetch per search term.
            top_k_per_term (int): Number of top scored results to keep per search term.
            include_urls (bool): Whether to include URLs in the results.
        
        Returns:
            Dict containing aggregated research results from all search terms (DuckDuckGo only),
            with duplicates removed.
        """
        return _deep_research_internal(
            search_terms=search_terms,
            backends=["duckduckgo"],
            num_results_per_term=num_results_per_term,
            top_k_per_term=top_k_per_term,
            include_urls=include_urls
        )
  • The internal handler `_deep_research_internal` that contains the actual logic: iterating over search terms and backends, fetching results via ddgs, scoring with embeddings, deduplicating by URL, and fetching content via thread pool.
    def _deep_research_internal(search_terms: List[str], backends: List[str], num_results_per_term: int = 5, top_k_per_term: int = 3, include_urls: bool = True) -> Dict:
        """
        Internal function that performs deep research across multiple search terms with the given backend engines in ddgs.

        Args:
            search_terms (List[str]): List of search terms to perform deep research on.
            backends (List[str]): List of search backends to use.
            num_results_per_term (int): Number of results to fetch per search term per engine.
            top_k_per_term (int): Number of top-scored results to keep per search term per engine.
            include_urls (bool): Whether to include URLs in the results.

        Returns:
            Dict containing aggregated research results from all search terms and engines.
        """
    
        # lazy load
        from ddgs import DDGS
        from .utils.fetch import fetch_all_content
        from .utils.tools import sort_by_score
    
        ddgs = DDGS()
        all_results = []
        search_summary = {}
        
        # search each term on all specified backends
        for term in search_terms:
            search_summary[term] = {backend: 0 for backend in backends}
            
            for backend in backends:
                try:
                    if backend == "duckduckgo":
                        results = ddgs.text(term, max_results=num_results_per_term)
                    else:
                        results = ddgs.text(term, max_results=num_results_per_term, backend=backend)
                    if results:
                        scored_results = sort_by_score(add_score_to_dict(term, results))
                        top_results = scored_results[0:top_k_per_term]
                        all_results.extend(top_results)
                        search_summary[term][backend] = len(top_results)
                except Exception as e:
                    print(f"Error searching {backend} for '{term}': {e}")
        
        # remove duplicates and keep high scores
        seen_urls = {}
        unique_results = []
        for result in all_results:
            url = result.get('href', '')
            if url:
                # Keep the result with the highest score for duplicate URLs
                if url not in seen_urls or result.get('score', 0) > seen_urls[url].get('score', 0):
                    if url in seen_urls:
                        # Replace lower scored duplicate
                        unique_results.remove(seen_urls[url])
                    seen_urls[url] = result
                    unique_results.append(result)
        
        # fetch content from final list of results
        md_content = fetch_all_content(unique_results, include_urls)
        
        return {
            "search_terms": search_terms,
            "backends": backends,
            "search_summary": search_summary,
            "total_unique_results": len(unique_results),
            "content": md_content
        }
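  • The helper `fetch_all_content` is not reproduced on this page. Based on how it is called above (a list of result dicts plus the `include_urls` flag) and the thread-pool note in the summary, a minimal sketch might look like the following; the use of `requests`, the timeout, and the exact markdown format are assumptions, not the actual implementation:
    import requests
    from concurrent.futures import ThreadPoolExecutor
    from typing import Dict, List

    def fetch_all_content_sketch(results: List[Dict], include_urls: bool = True) -> str:
        """Hypothetical stand-in for fetch_all_content: fetch each result URL
        concurrently and join the pages into one markdown string."""
        def fetch_one(result: Dict) -> str:
            url = result.get('href', '')
            try:
                body = requests.get(url, timeout=10).text
            except requests.RequestException:
                body = result.get('body', '')  # fall back to the search snippet
            header = f"## {result.get('title', url)}\n"
            if include_urls:
                header += f"Source: {url}\n"
            return header + body

        with ThreadPoolExecutor(max_workers=8) as pool:
            pages = list(pool.map(fetch_one, results))
        return "\n\n".join(pages)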
  • The @mcp.tool() decorator on line 265 registers `deep_research_ddgs` as an MCP tool. The FastMCP instance is created on line 5.
    @mcp.tool()
  • Helper function `add_score_to_dict` that computes cosine similarity scores between the query and each result's body using a text embedder.
    def add_score_to_dict(query: str, results: List[Dict]) -> List[Dict]:
        """Add similarity scores to search results."""
        # Import heavy dependencies only when needed (slow import!)
        from importlib.resources import files
        from mediapipe.tasks.python import text
        from .utils.fetch import fetch_embedder, get_path_str
        
        path = get_path_str(files('mcp_local_rag.embedder').joinpath('embedder.tflite'))
        embedder = fetch_embedder(path)
        query_embedding = embedder.embed(query)
    
        # Score each result body against the query using cosine similarity
        for result in results:
            result['score'] = text.TextEmbedder.cosine_similarity(
                                embedder.embed(result['body']).embeddings[0],
                                query_embedding.embeddings[0])
    
        return results
  • Helper function `sort_by_score` used to sort results by their similarity score in descending order.
    def sort_by_score(results: List[Dict]) -> List[Dict]:
        """Sort results by similarity score."""
        return sorted(results, key=lambda x: x['score'], reverse=True)
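  • Taken together, the two helpers can be exercised on their own. A small illustrative example, assuming the package and its bundled embedder are installed (the result dicts are invented, mimicking the `title`/`href`/`body` keys that ddgs returns):
    sample_results = [
        {"title": "RAG intro", "href": "https://example.com/a",
         "body": "An overview of retrieval augmented generation."},
        {"title": "Off topic", "href": "https://example.com/b",
         "body": "A recipe for sourdough bread."},
    ]
    scored = sort_by_score(add_score_to_dict("retrieval augmented generation", sample_results))
    print(scored[0]["href"])  # the on-topic result should rank first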
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description discloses the process: multiple searches, scoring, and deduplication. The tool is a read-only operation with no destructive side effects, though the description does not explicitly state that it is read-only or whether authentication is required. Overall it adds sufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with Args and Returns sections and front-loads the key purpose. It is slightly verbose, but each line adds value and no sentence is wasted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and the need to differentiate this tool from its siblings, the description adequately covers every aspect: purpose, parameters, process, and output. The return format is described, which is sufficient for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description compensates by explaining each parameter: search_terms should be multiple related queries, while num_results_per_term, top_k_per_term, and include_urls are each given a purpose. This adds meaningful guidance beyond the bare schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool performs deep research using only DuckDuckGo, aggregates results, scores them by relevance, and returns the most relevant content with duplicates removed. It is distinctly separated from siblings like deep_research_google by naming its search engine.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'using ONLY DuckDuckGo', implying that alternatives exist, and it advises the LLM to provide multiple related search queries, which is useful usage guidance. It does not, however, explicitly state when to use this tool versus the other search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
