Glama

Tavily Web Search MCP Server

by iKwesi

web_search

Search the web for information using natural language queries through the Tavily API. This tool retrieves search results to answer questions and provide context.

Instructions

Search the web for information using Tavily API.

Args:
    query: Search query string

Returns:
    Search results context from Tavily

Input Schema

Name    Required   Description   Default
query   Yes        -             -

Output Schema

Name     Required   Description   Default
result   Yes        -             -

Implementation Reference

  • server.py:43-59 (handler)
    The main handler function for the 'web_search' tool, decorated with @mcp.tool() for registration. It uses TavilyClient to perform the web search and returns the results or an error message.
    @mcp.tool()
    def web_search(query: str) -> str:
        """
        Search the web for information using Tavily API.
        
        Args:
            query: Search query string
            
        Returns:
            Search results context from Tavily
        """
        try:
            search_results = tavily_client.get_search_context(query=query)
            return search_results
        except Exception as e:
            return f"Error performing web search: {str(e)}"
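Note that the handler returns failures in-band as strings rather than raising, so a downstream caller can only distinguish errors from results by inspecting the returned text. A minimal sketch of that check (the helper name is hypothetical, not from the repository):

```python
# The handler formats failures as "Error performing web search: <detail>",
# so callers can only detect them by checking the returned string's prefix.
ERROR_PREFIX = "Error performing web search: "

def is_search_error(result: str) -> bool:
    """Return True if a web_search result is the in-band error string."""
    return result.startswith(ERROR_PREFIX)
```

A structured alternative would be to let the exception propagate so the MCP framework can surface a proper tool error, but that would change the handler's string-only contract.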
  • server.py:43-43 (registration)
    The @mcp.tool() decorator registers the web_search function as an MCP tool.
    @mcp.tool()
  • Initialization of the TavilyClient used by the web_search tool.
    tavily_client = TavilyClient(os.getenv("TAVILY_API_KEY"))
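Because the client is built from os.getenv("TAVILY_API_KEY"), a missing key would surface only at call time inside the tool. A fail-fast wrapper (a hypothetical helper, not in the repository) could make the authentication requirement explicit at startup:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, failing fast if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; web_search cannot authenticate")
    return value

# e.g. tavily_client = TavilyClient(require_env("TAVILY_API_KEY"))
```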
  • Helper logic in the LangGraph tool executor node that invokes the web_search MCP tool.
    elif tool_name == "web_search":
        log_tool_call("web_search", {
            "query": context["query"],
            "action": action
        })
        
        result = await tool.ainvoke({"query": context["query"]})
        
        log_tool_result("web_search", result)
        return result
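The excerpt above dispatches on tool_name via an elif chain. The same routing can be sketched generically with a name-to-tool mapping, assuming each tool exposes an ainvoke coroutine as in the excerpt (the FakeTool stand-in below is illustrative only):

```python
import asyncio

async def dispatch(tool_name: str, tools: dict, context: dict) -> str:
    """Route a tool call by name, mirroring the executor's elif chain."""
    if tool_name not in tools:
        return f"Unknown tool: {tool_name}"
    return await tools[tool_name].ainvoke({"query": context["query"]})

class FakeTool:
    """Stand-in with the same ainvoke interface as the real MCP tool."""
    async def ainvoke(self, args: dict) -> str:
        return f"results for {args['query']}"

result = asyncio.run(
    dispatch("web_search", {"web_search": FakeTool()}, {"query": "python"})
)
```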
  • The input schema for web_search is not declared by hand; @mcp.tool() infers it from the function signature def web_search(query: str) -> str:, yielding a single required string parameter 'query'.
  • Server startup block in server.py that prints the active tools at launch, including web_search.
    # ============================================================================
    # SERVER STARTUP
    # ============================================================================
    
    if __name__ == "__main__":
        print("\n" + "=" * 60)
        print("🚀 Starting MCP Server")
        print("=" * 60)
        print("\nActive Tools:")
        print("  ✅ ask_specialized_claude (Meta-AI)")
        print("  ✅ web_search (Tavily)")
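Since the input schema is inferred from the handler's signature rather than declared by hand, the information available to @mcp.tool() at registration time can be seen with inspect (the stub below mirrors only the signature):

```python
import inspect

def web_search(query: str) -> str:
    """Stub sharing the real handler's signature."""
    return query

sig = inspect.signature(web_search)
annotations = {name: p.annotation for name, p in sig.parameters.items()}
# A parameter with no default is required, which is why the generated
# schema marks 'query' as a required string.
```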
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions using Tavily API but doesn't describe key behavioral traits like rate limits, authentication needs, response format details beyond 'Search results context', or potential limitations (e.g., result freshness, source reliability). This leaves significant gaps for a tool performing external queries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first. The 'Args' and 'Returns' sections add structure, though they could be integrated more seamlessly. Every sentence earns its place, but minor improvements in flow could enhance clarity without adding bulk.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (a single external API call), its lack of annotations, and the presence of an output schema, the description is partially complete. It covers the basic purpose and parameters but omits behavioral details such as error handling and usage constraints. The output schema reduces the need to explain return values, but more context on operational behavior would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal semantics beyond the input schema. It defines 'query' as 'Search query string', which aligns with the schema's title 'Query' and type 'string'. With 0% schema description coverage, the description compensates slightly but doesn't elaborate on query formatting, length limits, or examples. The baseline is 3 since it provides basic meaning but lacks depth.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
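One way to add those parameter semantics without bloating the main description is parameter-level metadata. FastMCP can surface pydantic Field descriptions from Annotated type hints (worth verifying against the installed version); the dependency-free sketch below uses plain Annotated metadata only to show the shape:

```python
from typing import Annotated, get_type_hints

# Hypothetical enriched signature; the metadata string would become the
# parameter description in the generated schema.
def web_search(
    query: Annotated[
        str, "Natural-language query, e.g. 'latest FastMCP release notes'"
    ],
) -> str:
    return query

hints = get_type_hints(web_search, include_extras=True)
query_doc = hints["query"].__metadata__[0]
```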

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the web for information using Tavily API.' It specifies the verb ('Search') and resource ('the web'), and mentions the underlying API. However, it doesn't explicitly differentiate from sibling tools like 'ask_specialized_claude' or 'roll_dice', which serve different purposes but aren't directly comparable search alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or other search methods, nor does it specify contexts where web search is appropriate versus when it might not be (e.g., for internal data). Usage is implied by the purpose but lacks explicit guidelines.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
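Pulling the Behavior, Parameters, and Usage Guidelines findings together, a revised docstring might look like the following. The wording is illustrative only, not from the repository, and the handler body would stay unchanged:

```python
def web_search(query: str) -> str:
    """Search the public web for current information via the Tavily API.

    Use this when the answer depends on facts outside the conversation or
    likely to have changed recently; do not use it for internal project
    data, and prefer ask_specialized_claude for reasoning over known text.

    Requires the TAVILY_API_KEY environment variable; each call issues an
    external HTTP request subject to Tavily rate limits. Failures are
    returned in-band as a string starting with
    'Error performing web search: '.

    Args:
        query: Natural-language search query, e.g. "MCP spec changes".

    Returns:
        Aggregated search-result context from Tavily.
    """
    raise NotImplementedError  # body identical to the original handler
```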

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/iKwesi/AIE8-MCP-Session'
