
DuckDuckGo Web Search MCP Server

by kouui

search_and_fetch

Search the web using DuckDuckGo to find and retrieve relevant information, URLs, and summaries for any query.

Instructions

Search the web using DuckDuckGo and return results.

Args:
- query: The search query string
- limit: Maximum number of results to return (default: 3, maximum 10)

Returns: A list of dictionaries, each containing:
- title
- url
- snippet
- summary markdown (empty if not available)
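For illustration, a successful call might return a list shaped like the following (the values are made up, not actual tool output):

```python
[
    {
        "title": "Example Domain",
        "url": "example.com",
        "snippet": "This domain is for use in illustrative examples in documents.",
        "summary": "# Example Domain\n\nThis domain is for use in illustrative examples...",
    }
]
```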

Input Schema

| Name  | Required | Description                                        | Default |
|-------|----------|----------------------------------------------------|---------|
| query | Yes      | The search query string                            | (none)  |
| limit | No       | Maximum number of results to return (capped at 10) | 3       |
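As a usage sketch, a client built on the MCP Python SDK could invoke the tool as follows (the session setup is omitted, and the argument values are illustrative):

```python
result = await session.call_tool(
    "search_and_fetch",
    arguments={"query": "model context protocol", "limit": 5},
)
```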

Implementation Reference

  • main.py:82-121 (handler)
The handler function that implements the 'search_and_fetch' tool. It validates inputs, searches DuckDuckGo via a helper, fetches summaries for the results in parallel using asyncio.gather, and returns enriched results with title, url, snippet, and summary.
```python
async def search_and_fetch(query: str, limit: int = 3):
    """
    Search the web using DuckDuckGo and return results.

    Args:
        query: The search query string
        limit: Maximum number of results to return (default: 3, maximum 10)

    Returns:
        List of dictionaries containing
        - title
        - url
        - snippet
        - summary markdown (empty if not available)
    """
    if not isinstance(query, str) or not query.strip():
        raise ValueError("Query must be a non-empty string")
    if not isinstance(limit, int) or limit < 1:
        raise ValueError("Limit must be a positive integer")

    # Cap limit at reasonable maximum
    limit = min(limit, 10)

    results = await search_duckduckgo(query, limit)
    if not results:
        return [{"message": f"No results found for '{query}'"}]

    # Create a list of fetch_url coroutines
    fetch_tasks = [fetch_url(item["url"]) for item in results]
    # Execute all fetch requests in parallel and wait for results
    summaries = await asyncio.gather(*fetch_tasks)

    # Assign summaries to their respective result items
    for item, summary in zip(results, summaries):
        item["summary"] = summary

    return results
```
  • main.py:15-57 (helper)
Helper function to fetch raw search results from DuckDuckGo's HTML interface, parsing title, url, and snippet. It relies on the module-level DUCKDUCKGO_URL and USER_AGENT constants, which are sketched after this list.
```python
async def search_duckduckgo(query: str, limit: int) -> list:
    """Fetch search results from DuckDuckGo"""
    try:
        # Format query for URL
        formatted_query = query.replace(" ", "+")
        url = f"{DUCKDUCKGO_URL}?q={formatted_query}"

        # Set headers to avoid blocking
        headers = {
            "User-Agent": USER_AGENT,
            "Content-Type": "application/json",
        }

        async with httpx.AsyncClient() as client:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()

        # Parse HTML response
        soup = BeautifulSoup(response.text, "html.parser")
        result_elements = soup.select('.result__body')

        # Extract results up to limit
        results = []
        for result in result_elements[:limit]:
            title_elem = result.select_one('.result__a')
            url_elem = result.select_one('.result__url')
            snippet_elem = result.select_one('.result__snippet')

            if title_elem and url_elem:
                result_dict = {
                    "title": title_elem.get_text().strip(),
                    "url": url_elem.get_text().strip(),
                    "snippet": snippet_elem.get_text().strip() if snippet_elem else ""
                }
                results.append(result_dict)

        return results

    except httpx.TimeoutException:
        return [{"error": "Request timed out"}]
    except Exception as e:
        return [{"error": f"Search failed: {str(e)}"}]
```
  • main.py:59-80 (helper)
Helper function to fetch a URL and convert its content to markdown using the Jina AI reader API, falling back to raw HTML text extraction on timeout.
```python
async def fetch_url(url: str):
    jina_timeout = 15.0
    raw_html_timeout = 5.0
    raw_url = url
    url = f"https://r.jina.ai/{url}"
    async with httpx.AsyncClient() as client:
        try:
            print(f"fetching result from\n{url}")
            # Use the Jina AI reader API, which returns the page as markdown
            response = await client.get(url, timeout=jina_timeout)
            return response.text
        except httpx.TimeoutException:
            try:
                print("Jina API timed out, fetching raw HTML...")
                # Fall back to fetching the original URL and extracting plain text
                response = await client.get(raw_url, timeout=raw_html_timeout)
                soup = BeautifulSoup(response.text, "html.parser")
                return soup.get_text()
            except httpx.TimeoutException:
                return "Timeout error"
```
  • main.py:82-82 (registration)
The @mcp.tool() decorator registers the search_and_fetch function as an MCP tool; the mcp server instance it attaches to is sketched after this list.
```python
@mcp.tool()
async def search_and_fetch(query: str, limit: int = 3):
```
  • main.py:82-96 (schema)
The function signature with type annotations, together with the docstring, serves as the tool schema, defining the input parameters and output format.
```python
async def search_and_fetch(query: str, limit: int = 3):
    """
    Search the web using DuckDuckGo and return results.

    Args:
        query: The search query string
        limit: Maximum number of results to return (default: 3, maximum 10)

    Returns:
        List of dictionaries containing
        - title
        - url
        - snippet
        - summary markdown (empty if not available)
    """
```
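The snippets above depend on module-level scaffolding that the reference does not show: the DUCKDUCKGO_URL and USER_AGENT constants, the mcp instance behind @mcp.tool(), and an entry point. A minimal sketch of that scaffolding follows; the constant values and server name are assumptions for illustration, not copied from main.py:

```python
import asyncio

import httpx
from bs4 import BeautifulSoup
from mcp.server.fastmcp import FastMCP

# Assumed values -- main.py defines these, but the listing above omits them.
DUCKDUCKGO_URL = "https://html.duckduckgo.com/html/"  # DuckDuckGo's no-JS HTML endpoint (assumed)
USER_AGENT = "Mozilla/5.0 (compatible; web-search-mcp/0.1)"  # placeholder UA string

mcp = FastMCP("web-search-duckduckgo")  # server name is a guess

# ... search_duckduckgo, fetch_url, and the decorated search_and_fetch
# from the snippets above go here ...

if __name__ == "__main__":
    mcp.run()  # FastMCP defaults to the stdio transport
```

With that in place, the handler can also be exercised directly for a quick smoke test, e.g. asyncio.run(search_and_fetch("duckduckgo html search", limit=2)), without going through an MCP client.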

MCP directory API

We provide all the information about MCP servers via our MCP API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/kouui/web-search-duckduckgo'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.