search

Query multiple search engines at once through SearXNG, aggregating results from Google, Bing, DuckDuckGo, and others into a single response. Gives agents access to web information even when they lack a direct internet connection, streamlining research.

Instructions

search the web using searXNG. This will aggregate the results from google, bing, brave, duckduckgo and many others. Use this to find information on the web. Even if you do not have access to the internet, you can still use this tool to search the web.

Input Schema

Name   Required  Description  Default
query  Yes       -            -
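The schema requires a single string property, `query`. As an illustration of what that contract means in practice, here is a small hand-rolled check that mirrors the schema (a sketch only, not the server's code and not a full JSON Schema validator):

```python
# The tool's declared input schema, as registered with the MCP server.
schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}


def validate(arguments: dict) -> list[str]:
    """Return a list of violations; an empty list means the input is valid."""
    errors = []
    # Every required property must be present.
    for key in schema["required"]:
        if key not in arguments:
            errors.append(f"missing required property: {key}")
    # Present properties must match their declared type.
    for key, spec in schema["properties"].items():
        if key in arguments and spec["type"] == "string" and not isinstance(arguments[key], str):
            errors.append(f"{key} must be a string")
    return errors


ok = validate({"query": "python asyncio tutorial"})
bad = validate({})
print(ok)   # -> []
print(bad)  # -> ['missing required property: query']
```

A real MCP client would enforce this via the declared inputSchema rather than code like this.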

Implementation Reference

  • The specific handler function that executes the 'search' tool logic by parsing arguments and calling the core search function.
    async def search_tool(
        arguments: dict[str, str],
    ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
        query: str = arguments["query"]
        result = await search(query)
    
        return [types.TextContent(type="text", text=result)]
  • Registers the 'search' tool with the MCP server, providing name, description, and input schema.
    @server.list_tools()
    async def list_tools() -> list[types.Tool]:
        return [
            types.Tool(
                name="search",
                description="search the web using searXNG. This will aggregate the results from google, bing, brave, duckduckgo and many others. Use this to find information on the web. Even if you do not have access to the internet, you can still use this tool to search the web.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"},
                    },
                    "required": ["query"],
                },
            )
        ]
  • The MCP @server.call_tool() handler that dispatches calls to the specific search_tool when name=='search'.
    @server.call_tool()
    async def get_tool(
        name: str, arguments: dict[str, str] | None
    ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
        if arguments is None:
            arguments = {}
    
        try:
            if name == "search":
                return await search_tool(arguments)
    
        except Exception as e:
            text = f"Tool {name} failed with error: {e}"
            return [types.TextContent(type="text", text=text)]
    
        raise ValueError(f"Unknown tool: {name}")
  • Pydantic model for parsing the searXNG search API response.
    class Response(BaseModel):
        query: str
        number_of_results: int
        results: list[SearchResult]
        # answers: list[str]
        # corrections: list[str]
        infoboxes: list[Infobox]
        # suggestions: list[str]
        # unresponsive_engines: list[str]
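The SearchResult and Infobox models referenced here are not shown on this page. The sketch below guesses minimal definitions from the fields used by the formatting code further down (assuming pydantic v2, since the server calls model_validate_json) and validates a hand-written sample payload:

```python
from pydantic import BaseModel


class SearchResult(BaseModel):
    # Field names assumed from the formatting code: title, url, content.
    title: str
    url: str
    content: str


class Infobox(BaseModel):
    # Field names assumed from the formatting code: infobox, id, content.
    infobox: str
    id: str
    content: str


class Response(BaseModel):
    query: str
    number_of_results: int
    results: list[SearchResult]
    infoboxes: list[Infobox]


sample = """
{
  "query": "python",
  "number_of_results": 1,
  "results": [
    {"title": "Python", "url": "https://python.org", "content": "Official site"}
  ],
  "infoboxes": []
}
"""

data = Response.model_validate_json(sample)
print(data.results[0].title)  # -> Python
```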
  • Core helper function that queries searXNG API, parses response, and formats results as text.
    async def search(query: str, limit: int = 3) -> str:
        # Use a context manager so the HTTP client is closed after the request.
        async with AsyncClient(
            base_url=str(getenv("SEARXNG_URL", "http://localhost:8080"))
        ) as client:
            params: dict[str, str] = {"q": query, "format": "json"}

            response = await client.get("/search", params=params)
            response.raise_for_status()

        data = Response.model_validate_json(response.text)

        text = ""

        for infobox in data.infoboxes:
            text += f"Infobox: {infobox.infobox}\n"
            text += f"ID: {infobox.id}\n"
            text += f"Content: {infobox.content}\n"
            text += "\n"

        if len(data.results) == 0:
            text += "No results found\n"

        for index, result in enumerate(data.results):
            text += f"Title: {result.title}\n"
            text += f"URL: {result.url}\n"
            text += f"Content: {result.content}\n"
            text += "\n"

            if index == limit - 1:
                break

        return text
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions aggregation from multiple engines and offline capability, which adds some context, but fails to disclose critical traits like rate limits, authentication needs, result format, or pagination behavior. For a web search tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences that are front-loaded: the first states the core purpose, the second explains aggregation, and the third adds usage context. Each sentence earns its place without redundancy, though minor trimming could improve flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (web search with aggregation), no annotations, no output schema, and low schema coverage, the description is partially complete. It covers the purpose and some usage context but misses behavioral details and output expectations. This is adequate as a minimum viable description but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the description must compensate. It implies the 'query' parameter is for web searches but doesn't add specific meaning beyond that (e.g., syntax examples or constraints). Since schema coverage is low, the description provides minimal value, meeting the baseline for adequate but incomplete parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'search the web using searXNG' with the specific verb 'search' and resource 'the web'. It explains that it aggregates results from multiple search engines (Google, Bing, etc.), which distinguishes it from a basic search tool. However, since there are no sibling tools mentioned, the differentiation aspect is not applicable, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Use this to find information on the web' and notes it works even without internet access, which is helpful guidance. However, it lacks explicit when-not-to-use scenarios or alternatives, as there are no sibling tools to compare against, so it doesn't reach the highest score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SecretiveShell/MCP-searxng'
