Glama

tool_search_web

Search the web using DuckDuckGo to find relevant information, articles, and resources for development tasks. Returns structured results with titles, URLs, and snippets for efficient research.

Instructions

Search the web using DuckDuckGo.

Args:
    query: Search query string.
    limit: Maximum results (1-20, default 5).

Returns: List of results with title, url, snippet.

Input Schema

Name      Required    Description    Default
query     Yes         —              —
limit     No          —              —

Output Schema

Name      Required    Description    Default
result    Yes         —              —
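The output schema above carries no per-field descriptions. For concreteness, a result list of the documented shape (title, url, snippet) might look like this — illustrative values only, not real tool output:

```python
# Illustrative return value for tool_search_web; the shape matches the
# documented "Returns" section, the values are made up.
EXAMPLE_RESULT = [
    {
        "title": "asyncio — Asynchronous I/O",
        "url": "https://docs.python.org/3/library/asyncio.html",
        "snippet": "asyncio is a library to write concurrent code using the async/await syntax.",
    }
]
```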

Implementation Reference

  • The tool `tool_search_web` is defined and registered here using the `@mcp.tool()` decorator. It delegates to the `search_web` helper function.
    @mcp.tool()
    async def tool_search_web(query: str, limit: int = 5) -> list[dict]:
        """Search the web using DuckDuckGo.
    
        Args:
            query: Search query string.
            limit: Maximum results (1-20, default 5).
    
        Returns:
            List of results with title, url, snippet.
        """
        return await search_web(query, limit)
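`SearchError` is raised throughout the implementation but is not defined on this page. Judging from its call sites, which pass `(query, reason)`, a minimal sketch might look like the following (an assumption, not the actual class):

```python
class SearchError(Exception):
    """Hypothetical shape inferred from call sites like
    SearchError(query, "Query cannot be empty")."""

    def __init__(self, query: str, reason: str):
        super().__init__(f"Search failed for {query!r}: {reason}")
        self.query = query
        self.reason = reason
```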
  • The implementation of the web search logic, using `DDGAdapter` to perform the actual search.
    async def search_web(
        query: str,
        limit: int = 5,
        *,
        region: str | None = None,
        safe_search: bool = True,
    ) -> list[dict]:
        """Search the web using DuckDuckGo.
    
        Args:
            query: Search query string. Supports operators like:
                - site:domain.com to search specific domain
                - "exact phrase" for exact matches
                - -word to exclude terms
            limit: Maximum number of results (1-20).
            region: Optional region code (e.g., 'us-en', 'uk-en').
            safe_search: Enable safe search filtering.
    
        Returns:
            List of search results with title, url, and snippet.
    
        Raises:
            SearchError: If search fails or query is invalid.
    
        Example:
            >>> results = await search_web("Python asyncio tutorial", limit=5)
            >>> results = await search_web('site:docs.python.org async', limit=3)
        """
        if not query or not query.strip():
            raise SearchError(query, "Query cannot be empty")
    
        # Normalize query
        normalized = _normalize_query(query)
    
        # Clamp limit
        limit = min(max(limit, 1), 20)
    
        try:
            results = await _adapter.search(
                normalized,
                limit=limit,
                region=region,
                safe_search=safe_search,
            )
    
            # Filter out low-quality results
            filtered = [r for r in results if r.title and r.url and len(r.snippet) > 20]
    
            return [r.model_dump() for r in filtered]
    
        except SearchError:
            raise
        except Exception as e:
            raise SearchError(query, f"Unexpected error: {e}") from e
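The clamping and filtering steps in `search_web` can be exercised in isolation. A minimal sketch, using a stand-in result type since the real result model (a Pydantic model, judging by the `model_dump()` call) is not shown here:

```python
from dataclasses import dataclass


@dataclass
class Result:
    # Stand-in for the real result model; assumed to expose these fields.
    title: str
    url: str
    snippet: str


def clamp_limit(limit: int) -> int:
    # Mirrors `limit = min(max(limit, 1), 20)` in search_web.
    return min(max(limit, 1), 20)


def filter_results(results: list[Result]) -> list[Result]:
    # Mirrors the low-quality filter: keep results that have a title,
    # a URL, and a snippet longer than 20 characters.
    return [r for r in results if r.title and r.url and len(r.snippet) > 20]


results = [
    Result("Python asyncio", "https://docs.python.org/3/library/asyncio.html",
           "Coroutines and tasks for concurrent code using async/await syntax."),
    Result("", "https://example.com", "A snippet that is long enough to pass."),
    Result("Short", "https://example.com", "too short"),
]
```

Only the first entry survives the filter: the second lacks a title and the third's snippet is too short.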
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the search engine (DuckDuckGo) and return format, but lacks details on behavioral traits such as rate limits, authentication needs, error handling, or whether it's a read-only operation. For a web search tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
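As a sketch of how the description itself could close this gap, wording like the following (suggested here, not taken from the source) would disclose read-only behavior, auth requirements, and failure modes up front:

```python
# Suggested richer tool description addressing the behavioral gaps noted
# above; this text is a proposal, not the tool's actual description.
IMPROVED_DESCRIPTION = (
    "Search the web using DuckDuckGo. Read-only: makes one outbound network "
    "request per call and has no other side effects. No authentication or "
    "API key is required, but calls are subject to DuckDuckGo rate limits. "
    "Empty queries and upstream failures raise SearchError."
)
```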

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by structured sections for args and returns. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and an output schema (implied by 'Returns'), the description is mostly complete. It covers purpose, parameters, and return values, but lacks behavioral context and usage guidelines. The output schema reduces the need to explain returns in detail, but gaps in other areas prevent a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining 'query' as a 'Search query string' and 'limit' with its range (1-20) and default (5), which aren't in the schema. However, it doesn't cover all potential semantics like query formatting or result ordering, keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
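One way to raise schema coverage is to move the constraints and descriptions into the input schema itself, so agents see them without reading the prose. A hand-written fragment of what such a schema could look like (a sketch; how the server actually generates its schema is not shown on this page):

```python
# Proposed input schema with per-parameter descriptions, closing the
# 0%-coverage gap noted above. Constraint values mirror the docstring.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": 'Search query string. Supports site:domain.com, '
                           '"exact phrase", and -word exclusion operators.',
        },
        "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 20,
            "default": 5,
            "description": "Maximum number of results to return.",
        },
    },
    "required": ["query"],
}
```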

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the web using DuckDuckGo.' It specifies the verb ('Search') and resource ('the web'), and mentions the search engine. However, it doesn't explicitly differentiate from sibling tools like 'tool_find_related' or 'tool_monitor_changes,' which might also involve web searching, so it doesn't reach a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'tool_find_related,' 'tool_deep_dive,' and 'tool_scrape_url,' there's no indication of context, prerequisites, or exclusions. It only describes what the tool does, not when it's appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
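A docstring revision that adds the missing when-to-use guidance might look like this (a sketch; the sibling tool names are taken from the review above, and their exact behavior is assumed, not verified):

```python
# Sketch of a docstring with explicit usage guidance. The body is stubbed
# out; only the docstring is the point of this example.
async def tool_search_web(query: str, limit: int = 5) -> list[dict]:
    """Search the web using DuckDuckGo.

    Use this tool for open-ended discovery when no URL is known yet.
    Prefer tool_scrape_url when you already have a specific page URL,
    and tool_deep_dive when a known topic needs multi-source analysis.

    Args:
        query: Search query string.
        limit: Maximum results (1-20, default 5).
    """
    raise NotImplementedError  # body unchanged; only the docstring matters
```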

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Y4NN777/devlens-mcp'
