web_search
Search the web using DuckDuckGo for relevant information, websites, and resources; results include titles, URLs, descriptions, and domains.
Instructions
Search the web using DuckDuckGo.
Returns a list of search results with titles, URLs, descriptions, and domains.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (max 400 characters) | |
| max_results | No | Maximum number of results to return (1-20) | 10 |
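
As an illustration, a call such as `{"query": "model context protocol", "max_results": 3}` might return a response shaped like the following (the field layout mirrors the handler's return value below; the result entries shown are hypothetical):

```json
{
  "query": "model context protocol",
  "results": [
    {
      "title": "Model Context Protocol",
      "url": "https://modelcontextprotocol.io/",
      "description": "An open protocol for connecting AI assistants to tools and data.",
      "domain": "modelcontextprotocol.io"
    }
  ],
  "total_results": 1,
  "status": "success"
}
```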
Implementation Reference
- `mcp_duckduckgo/tools.py:45-113` (handler): Main handler function for the `web_search` tool. It declares the input schema via Pydantic `Field` annotations and runs the DuckDuckGo search through the `search_web` helper.

  ```python
  async def web_search(
      query: str = Field(..., description="Search query", max_length=400),
      max_results: int = Field(
          10, description="Maximum number of results to return (1-20)", ge=1, le=20
      ),
      ctx: Context = Field(default_factory=Context),
  ) -> Dict[str, Any]:
      """
      Search the web using DuckDuckGo.

      Returns a list of search results with titles, URLs, descriptions, and domains.
      """
      logger.info("Searching for: '%s' (max %d results)", query, max_results)

      try:
          # Get HTTP client from context
          http_client = None
          close_client = False

          # Try to get HTTP client from lifespan context
          if (
              hasattr(ctx, "lifespan_context")
              and ctx.lifespan_context
              and "http_client" in ctx.lifespan_context
          ):
              logger.info("Using HTTP client from lifespan context")
              http_client = ctx.lifespan_context["http_client"]
          else:
              # Create a new HTTP client
              logger.info("Creating new HTTP client")
              http_client = httpx.AsyncClient(timeout=10.0)
              close_client = True

          try:
              # Perform the search
              results = await search_web(query, http_client, max_results)

              # Convert to dict format
              search_results = [
                  {
                      "title": result.title,
                      "url": result.url,
                      "description": result.description,
                      "domain": result.domain,
                  }
                  for result in results
              ]

              return {
                  "query": query,
                  "results": search_results,
                  "total_results": len(search_results),
                  "status": "success",
              }
          finally:
              if close_client:
                  await http_client.aclose()

      except (httpx.RequestError, httpx.HTTPError, ValueError) as e:
          logger.error("Search failed: %s", e)
          return {
              "query": query,
              "results": [],
              "total_results": 0,
              "status": "error",
              "error": str(e),
          }
  ```
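
  A minimal sketch of exercising this handler outside the running server, assuming `web_search` is importable as a plain coroutine from `mcp_duckduckgo.tools` (if it is only defined as a closure inside the registration helper, this direct call will not work). The stub context only needs the `lifespan_context` mapping that the handler inspects:

  ```python
  import asyncio
  from types import SimpleNamespace

  import httpx

  from mcp_duckduckgo.tools import web_search  # assumed import path


  async def main() -> None:
      async with httpx.AsyncClient(timeout=10.0) as client:
          # Stand-in for the FastMCP Context: the handler only reads lifespan_context.
          ctx = SimpleNamespace(lifespan_context={"http_client": client})
          response = await web_search(query="duckduckgo search api", max_results=3, ctx=ctx)
          print(response["status"], response["total_results"])


  asyncio.run(main())
  ```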
- `mcp_duckduckgo/server.py:42-49` (registration): Registers the search tools (including `web_search`) by calling `register_search_tools` on the FastMCP server instance during server creation.

  ```python
  def create_mcp_server() -> FastMCP:
      """Create and return a FastMCP server instance with proper tool registration."""
      server = FastMCP("DuckDuckGo Search", lifespan=app_lifespan)

      # Register tools directly with the server instance
      register_search_tools(server)

      return server
  ```
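
  The handler above reads `lifespan_context["http_client"]`, so `app_lifespan` presumably creates one shared HTTP client for the server's lifetime. A hedged sketch of that shape (the actual implementation lives in the package and may differ; the `FastMCP` import path assumes the official MCP Python SDK):

  ```python
  from contextlib import asynccontextmanager
  from typing import Any, AsyncIterator, Dict

  import httpx
  from mcp.server.fastmcp import FastMCP


  @asynccontextmanager
  async def app_lifespan(server: FastMCP) -> AsyncIterator[Dict[str, Any]]:
      # Create one shared HTTP client and expose it to tool handlers
      # through the lifespan context.
      async with httpx.AsyncClient(timeout=10.0) as http_client:
          yield {"http_client": http_client}
  ```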
- `mcp_duckduckgo/search.py:254-293` (helper): Core helper that implements the DuckDuckGo web search logic. It queries both the instant-answer API and the HTML search page, parses the results, and deduplicates them; it is called by the `web_search` handler.

  ```python
  async def search_web(
      query: str, http_client: httpx.AsyncClient, count: int = 10
  ) -> List[SearchResult]:
      """
      Main search function that tries multiple methods.

      Args:
          query: Search query string
          http_client: HTTP client to use for requests
          count: Maximum number of results to return

      Returns:
          List of unique SearchResult objects from both instant answers and HTML search
      """
      logger.info("Searching for: '%s' (max %d results)", query, count)

      # Try instant answers first
      instant_results = await search_duckduckgo_instant(query, http_client)
      logger.info("Instant answers found %d results", len(instant_results))

      # Always try HTML search for more comprehensive results
      html_results = await search_duckduckgo_html(query, http_client, count)
      logger.info("HTML search found %d results", len(html_results))

      # Combine and deduplicate
      all_results = instant_results + html_results

      # Remove duplicates based on URL
      seen_urls = set()
      unique_results = []
      for result in all_results:
          if result.url and result.url not in seen_urls and result.url.startswith("http"):
              seen_urls.add(result.url)
              unique_results.append(result)
              if len(unique_results) >= count:
                  break

      logger.info("Returning %d unique valid results", len(unique_results))
      return unique_results
  ```
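
  Because `search_web` only needs a query string and an `httpx.AsyncClient`, it can be tried in isolation, for example (import path taken from the file reference above):

  ```python
  import asyncio

  import httpx

  from mcp_duckduckgo.search import search_web


  async def demo() -> None:
      async with httpx.AsyncClient(timeout=10.0) as client:
          results = await search_web("duckduckgo instant answer api", client, count=5)
          for result in results:
              print(f"{result.domain}: {result.title} -> {result.url}")


  asyncio.run(demo())
  ```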
- `mcp_duckduckgo/search.py:53-59` (schema): Dataclass defining the structure of individual search results used in the output of `web_search`.

  ```python
  @dataclass
  class SearchResult:
      title: str
      url: str
      description: str
      domain: str = ""
  ```
- `mcp_duckduckgo/search.py:61-77` (helper): Utility function that extracts the domain from a URL, used in search result processing.

  ```python
  def extract_domain(url: str) -> str:
      """
      Extract domain from URL.

      Args:
          url: URL string to extract domain from

      Returns:
          Lowercase domain name or empty string if parsing fails
      """
      try:
          parsed = urllib.parse.urlparse(url)
          return parsed.netloc.lower()
      except Exception as e:
          logger.debug("Failed to extract domain from URL %s: %s", url, e)
          return ""
  ```
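
  A small usage sketch combining the dataclass and the domain helper (the values are illustrative):

  ```python
  from mcp_duckduckgo.search import SearchResult, extract_domain

  url = "https://Docs.Python.org/3/library/dataclasses.html"
  result = SearchResult(
      title="dataclasses - Data Classes",
      url=url,
      description="Python standard library documentation.",
      domain=extract_domain(url),  # "docs.python.org"
  )
  print(result)
  ```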