web_search

by elad12390

Gather fresh web search results through SearXNG to support research tasks with structured queries and reasoning.

Instructions

Use this first to gather fresh web search results via the local SearXNG instance.
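
For example, an MCP client can invoke the tool through the official Python SDK. A minimal sketch, assuming the server is launched over stdio and that server.py is its entry point (both the entry point and the argument values are illustrative, not taken from this page):

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Launch the MCP server as a stdio subprocess
        # ("server.py" is a placeholder for the real entry point).
        server = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "web_search",
                    {
                        "query": "latest SearXNG release",
                        "reasoning": "Checking the current version for a research task",
                        "category": "it",
                        "max_results": 5,
                    },
                )
                print(result.content)

    asyncio.run(main())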

Input Schema

Name         Required  Description                                                             Default
query        Yes       Natural-language web query
reasoning    Yes       Why you're using this tool (required for analytics)
category     No        Optional SearXNG category (general, images, news, it, science, etc.)   science
max_results  No        How many ranked hits to return (1-10)
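
The JSON Schema view of these inputs did not survive extraction. A sketch of the schema FastMCP would plausibly derive from the Annotated signature shown under Implementation Reference (the max_results default is omitted because the page does not show the value of DEFAULT_MAX_RESULTS):

    {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Natural-language web query"
        },
        "reasoning": {
          "type": "string",
          "description": "Why you're using this tool (required for analytics)"
        },
        "category": {
          "type": "string",
          "description": "Optional SearXNG category (general, images, news, it, science, etc.)",
          "default": "science"
        },
        "max_results": {
          "type": "integer",
          "description": "How many ranked hits to return (1-10)"
        }
      },
      "required": ["query", "reasoning"]
    }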

Implementation Reference

  • Handler function for the 'web_search' MCP tool. Performs the search against the SearXNG instance, formats results, handles errors, and tracks usage.

    @mcp.tool()
    async def web_search(
        query: Annotated[str, "Natural-language web query"],
        reasoning: Annotated[str, "Why you're using this tool (required for analytics)"],
        category: Annotated[
            str, "Optional SearXNG category (general, images, news, it, science, etc.)"
        ] = DEFAULT_CATEGORY,
        max_results: Annotated[int, "How many ranked hits to return (1-10)"] = DEFAULT_MAX_RESULTS,
    ) -> str:
        """Use this first to gather fresh web search results via the local SearXNG instance."""
        start_time = time.time()
        success = False
        error_msg = None
        result = ""
        try:
            hits = await searcher.search(query, category=category, max_results=max_results)
            if not hits:
                result = f"No results for '{query}' in category '{category}'."
            else:
                result = _format_search_hits(hits)
            success = True
        except Exception as exc:  # noqa: BLE001
            error_msg = str(exc)
            result = f"Search failed: {exc}"
        finally:
            # Track usage
            response_time = (time.time() - start_time) * 1000  # Convert to ms
            tracker.track_usage(
                tool_name="web_search",
                reasoning=reasoning,
                parameters={
                    "query": query,
                    "category": category,
                    "max_results": max_results,
                },
                response_time_ms=response_time,
                success=success,
                error_message=error_msg,
                response_size=len(result.encode("utf-8")),
            )
        return result

  • MCP tool registration decorator for web_search.
    @mcp.tool()
  • Input schema defined by Annotated type hints and docstring for the web_search tool.

    async def web_search(
        query: Annotated[str, "Natural-language web query"],
        reasoning: Annotated[str, "Why you're using this tool (required for analytics)"],
        category: Annotated[
            str, "Optional SearXNG category (general, images, news, it, science, etc.)"
        ] = DEFAULT_CATEGORY,
        max_results: Annotated[int, "How many ranked hits to return (1-10)"] = DEFAULT_MAX_RESULTS,
    ) -> str:

  • SearxSearcher class providing the core search functionality used by the web_search tool, including HTTP requests to SearXNG and result parsing. (The SearchHit and clamp_text helpers it references are sketched after this list.)

    class SearxSearcher:
        """Minimal async client for the local SearXNG instance."""

        def __init__(self, base_url: str = SEARX_BASE_URL, timeout: float = HTTP_TIMEOUT) -> None:
            self.base_url = base_url
            self.timeout = timeout
            self._headers = {"User-Agent": USER_AGENT, "Accept": "application/json"}

        async def search(
            self,
            query: str,
            *,
            category: str = DEFAULT_CATEGORY,
            max_results: int = DEFAULT_MAX_RESULTS,
            time_range: str | None = None,
        ) -> list[SearchHit]:
            """Return up to *max_results* hits for *query* within *category*.

            Args:
                query: Search query string
                category: SearXNG category (general, it, etc.)
                max_results: Maximum number of results to return
                time_range: Optional time filter (day, week, month, year)
            """
            limit = max(1, min(max_results, MAX_SEARCH_RESULTS))
            params = {
                "q": query,
                "categories": category,
                "format": "json",
                "pageno": 1,
            }
            # Add time range filter if specified
            if time_range:
                params["time_range"] = time_range

            async with httpx.AsyncClient(timeout=self.timeout, headers=self._headers) as client:
                response = await client.get(self.base_url, params=params)
                response.raise_for_status()
                payload = response.json()

            hits: list[SearchHit] = []
            for item in payload.get("results", [])[:limit]:
                title = (
                    item.get("title") or item.get("pretty_url") or item.get("url") or "Untitled"
                ).strip()
                url = item.get("url") or ""
                snippet = (item.get("content") or item.get("snippet") or "").strip()
                snippet = clamp_text(snippet, MAX_SNIPPET_CHARS, suffix="…") if snippet else ""
                hits.append(SearchHit(title=title, url=url, snippet=snippet))
            return hits

  • Helper function to format search results into a readable numbered list with titles, URLs, and snippets.

    def _format_search_hits(hits):
        lines = []
        for idx, hit in enumerate(hits, 1):
            snippet = f"\n{hit.snippet}" if hit.snippet else ""
            lines.append(f"{idx}. {hit.title} — {hit.url}{snippet}")
        body = "\n\n".join(lines)
        return clamp_text(body, MAX_RESPONSE_CHARS)
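
The snippets above reference two helpers that are not shown on this page: SearchHit and clamp_text. A minimal sketch of plausible definitions, inferred from the call sites (the field names and the suffix keyword come from the code above; the truncation logic and defaults are assumptions):

    from dataclasses import dataclass

    @dataclass
    class SearchHit:
        """One ranked result returned by SearxSearcher.search() (assumed shape)."""
        title: str
        url: str
        snippet: str

    def clamp_text(text: str, limit: int, suffix: str = "") -> str:
        """Truncate text to at most `limit` characters, appending `suffix` when cut.

        The empty default for `suffix` is an assumption; the search code passes
        suffix="…" explicitly when clamping snippets.
        """
        if len(text) <= limit:
            return text
        return text[: max(0, limit - len(suffix))] + suffix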

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/elad12390/web-research-assistant'
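
The same request in Python, using httpx as the server code above does (this assumes the endpoint returns a JSON body):

    import httpx

    # Fetch this server's entry from the Glama MCP directory API
    # (same URL as the curl example above).
    response = httpx.get(
        "https://glama.ai/api/mcp/v1/servers/elad12390/web-research-assistant"
    )
    response.raise_for_status()
    print(response.json())  # assumes a JSON response body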

If you have feedback or need assistance with the MCP directory API, please join our Discord server.