deep_research_ddgs
Solve fragmented search results by aggregating multiple DuckDuckGo queries, scoring content by relevance, and eliminating duplicates.
Instructions
Performs deep research across multiple search terms using ONLY DuckDuckGo: aggregates results from multiple DuckDuckGo searches, scores them by relevance, and returns the most relevant content with duplicates removed.
Args:
- `search_terms` (List[str]): List of search terms to research. The LLM should provide multiple related search queries for comprehensive coverage.
- `num_results_per_term` (int): Number of results to fetch per search term.
- `top_k_per_term` (int): Number of top-scored results to keep per search term.
- `include_urls` (bool): Whether to include URLs in the results.
Returns: Dict containing aggregated research results from all search terms (DuckDuckGo only), with duplicates removed.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| search_terms | Yes | List of search terms to research; multiple related queries give broader coverage. | |
| num_results_per_term | No | Number of results to fetch per search term. | 10 |
| top_k_per_term | No | Number of top-scored results to keep per search term. | 3 |
| include_urls | No | Whether to include URLs in the results. | True |
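For illustration, a call payload matching this schema might look like the following sketch (the query strings are made up; the default values mirror the handler signature in the source):

```python
# Hypothetical argument payload for the deep_research_ddgs tool.
# The query strings are illustrative; defaults come from the handler signature.
args = {
    "search_terms": [
        "retrieval augmented generation overview",
        "rag chunking strategies",
    ],
    "num_results_per_term": 10,  # default
    "top_k_per_term": 3,         # default
    "include_urls": True,        # default
}
print(len(args))  # -> 4
```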
Output Schema
No structured output schema is declared. The tool returns a single Dict with the keys `search_terms`, `backends`, `search_summary`, `total_unique_results`, and `content` (see the Returns description above).
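The output is a single dictionary rather than named output fields. Based on the handler's return statement, its shape is roughly as follows (all values below are illustrative placeholders, not real results):

```python
# Illustrative result shape, mirroring the return statement of
# _deep_research_internal; every value here is a placeholder.
example_result = {
    "search_terms": ["retrieval augmented generation overview"],
    "backends": ["duckduckgo"],
    "search_summary": {"retrieval augmented generation overview": {"duckduckgo": 3}},
    "total_unique_results": 3,
    "content": "...markdown content fetched from the unique result URLs...",
}
print(sorted(example_result))
```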
Implementation Reference
- `src/mcp_local_rag/main.py:266-289` (handler) — The `deep_research_ddgs` MCP tool function. This is the public-facing handler registered via the `@mcp.tool()` decorator. It delegates to `_deep_research_internal` with `backends=["duckduckgo"]`, performing deep research using only DuckDuckGo.

```python
def deep_research_ddgs(search_terms: List[str],
                       num_results_per_term: int = 10,
                       top_k_per_term: int = 3,
                       include_urls: bool = True) -> Dict:
    """
    Perform deep research across multiple search terms using ONLY DuckDuckGo.

    Aggregates results from multiple DuckDuckGo searches, scores them by
    relevance, and returns the most relevant content with duplicates removed.

    Args:
        search_terms (List[str]): List of search terms to research. The LLM
            should provide multiple related search queries for comprehensive
            coverage.
        num_results_per_term (int): Number of results to fetch per search term.
        top_k_per_term (int): Number of top scored results to keep per search term.
        include_urls (bool): Whether to include URLs in the results.

    Returns:
        Dict containing aggregated research results from all search terms
        (DuckDuckGo only), with duplicates removed.
    """
    return _deep_research_internal(
        search_terms=search_terms,
        backends=["duckduckgo"],
        num_results_per_term=num_results_per_term,
        top_k_per_term=top_k_per_term,
        include_urls=include_urls,
    )
```

- `src/mcp_local_rag/main.py:112-177` (handler) — The internal handler `_deep_research_internal` that contains the actual logic: iterating over search terms and backends, fetching results via ddgs, scoring with embeddings, deduplicating by URL, and fetching content via a thread pool.
```python
def _deep_research_internal(search_terms: List[str],
                            backends: List[str],
                            num_results_per_term: int = 5,
                            top_k_per_term: int = 3,
                            include_urls: bool = True) -> Dict:
    """
    Internal function to perform deep research across multiple search terms
    with the given backend engines in ddgs.

    Args:
        search_terms (List[str]): List of search terms to perform deep research on.
        backends (List[str]): List of search backends to use.
        num_results_per_term (int): Number of results to fetch per search term per engine.
        top_k_per_term (int): Number of top-scored results to keep per search term per engine.
        include_urls (bool): Whether to include URLs in the results.

    Returns:
        Dict containing aggregated research results from all search terms and engines.
    """
    # lazy load
    from ddgs import DDGS
    from .utils.fetch import fetch_all_content
    from .utils.tools import sort_by_score

    ddgs = DDGS()
    all_results = []
    search_summary = {}

    # search each term on all specified backends
    for term in search_terms:
        search_summary[term] = {backend: 0 for backend in backends}
        for backend in backends:
            try:
                if backend == "duckduckgo":
                    results = ddgs.text(term, max_results=num_results_per_term)
                else:
                    results = ddgs.text(term, max_results=num_results_per_term, backend=backend)
                if results:
                    scored_results = sort_by_score(add_score_to_dict(term, results))
                    top_results = scored_results[0:top_k_per_term]
                    all_results.extend(top_results)
                    search_summary[term][backend] = len(top_results)
            except Exception as e:
                print(f"Error searching {backend} for '{term}': {e}")

    # remove duplicates and keep high scores
    seen_urls = {}
    unique_results = []
    for result in all_results:
        url = result.get('href', '')
        if url:
            # Keep the result with the highest score for duplicate URLs
            if url not in seen_urls or result.get('score', 0) > seen_urls[url].get('score', 0):
                if url in seen_urls:
                    # Replace lower scored duplicate
                    unique_results.remove(seen_urls[url])
                seen_urls[url] = result
                unique_results.append(result)

    # fetch content from final list of results
    md_content = fetch_all_content(unique_results, include_urls)

    return {
        "search_terms": search_terms,
        "backends": backends,
        "search_summary": search_summary,
        "total_unique_results": len(unique_results),
        "content": md_content
    }
```

- `src/mcp_local_rag/main.py:265-265` (registration) — The `@mcp.tool()` decorator on line 265 registers `deep_research_ddgs` as an MCP tool. The FastMCP instance is created on line 5.
```python
@mcp.tool()
```

- `src/mcp_local_rag/main.py:7-23` (helper) — Helper function `add_score_to_dict` that computes cosine similarity scores between the query and each result's body using a text embedder.
```python
def add_score_to_dict(query: str, results: List[Dict]) -> List[Dict]:
    """Add similarity scores to search results."""
    # Import heavy dependencies only when needed (slow import!)
    from importlib.resources import files
    from mediapipe.tasks.python import text
    from .utils.fetch import fetch_embedder, get_path_str

    path = get_path_str(files('mcp_local_rag.embedder').joinpath('embedder.tflite'))
    embedder = fetch_embedder(path)
    query_embedding = embedder.embed(query)
    for i in results:
        i['score'] = text.TextEmbedder.cosine_similarity(
            embedder.embed(i['body']).embeddings[0],
            query_embedding.embeddings[0])
    return results
```

- Helper function `sort_by_score`, used to sort results by their similarity score in descending order.
```python
def sort_by_score(results: List[Dict]) -> List[Dict]:
    """Sort results by similarity score."""
    return sorted(results, key=lambda x: x['score'], reverse=True)
```
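The rank-and-deduplicate logic can be exercised in isolation. The sketch below substitutes precomputed scores for the embedder (the `hits` data is made up) and keeps only the highest-scoring entry per URL, mirroring the `seen_urls` loop in `_deep_research_internal`:

```python
from typing import Dict, List

def sort_by_score(results: List[Dict]) -> List[Dict]:
    """Sort results by similarity score, highest first (as in the source)."""
    return sorted(results, key=lambda x: x["score"], reverse=True)

def dedup_by_url(results: List[Dict]) -> List[Dict]:
    """Keep only the highest-scoring result per URL, mirroring the
    seen_urls loop in _deep_research_internal."""
    seen: Dict[str, Dict] = {}
    unique: List[Dict] = []
    for result in results:
        url = result.get("href", "")
        if not url:
            continue
        if url not in seen or result.get("score", 0) > seen[url].get("score", 0):
            if url in seen:
                # Replace the lower-scored duplicate already collected
                unique.remove(seen[url])
            seen[url] = result
            unique.append(result)
    return unique

# Scores here are made up; the real pipeline gets them from the embedder.
hits = [
    {"href": "https://a.example", "score": 0.9},
    {"href": "https://b.example", "score": 0.4},
    {"href": "https://a.example", "score": 0.7},  # duplicate, lower score -> dropped
]
ranked = sort_by_score(dedup_by_url(hits))
print([h["href"] for h in ranked])  # -> ['https://a.example', 'https://b.example']
```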