
Tavily Web Search MCP Server

by julie-berlin

web_search

Search the web for information using natural language queries through the Tavily API to find answers and data from internet sources.

Instructions

Search the web for information about the given query

Input Schema

| Name  | Required | Description | Default |
|-------|----------|-------------|---------|
| query | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • server.py:14-18 (handler)
    The handler function for the 'web_search' tool. Decorated with @mcp.tool() for registration, it defines the input schema implicitly via type hints (query: str) and its docstring, executes the search through the TavilyClient, and returns the results as a string.

    ```python
    @mcp.tool()
    def web_search(query: str) -> str:
        """Search the web for information about the given query"""
        search_results = web_search_client.get_search_context(query=query)
        return search_results
    ```

  • Initialization of the TavilyClient instance used by the web_search tool handler.

    ```python
    web_search_client = TavilyClient(os.getenv("TAVILY_API_KEY"))
    ```

  • server.py:3-3 (helper)
    Import of the TavilyClient library required for web_search functionality.

    ```python
    from tavily import TavilyClient
    ```
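The snippets above suggest the input schema is derived from the handler's type hints and docstring rather than declared explicitly. As a stdlib-only sketch of how such a derivation can work (no MCP or Tavily dependency; `derive_schema` is a hypothetical helper, and `web_search` here is a stub standing in for the real handler):

```python
import inspect
from typing import get_type_hints

def web_search(query: str) -> str:
    """Search the web for information about the given query"""
    ...  # stub: the real handler calls the Tavily client

def derive_schema(fn):
    """Build a JSON-Schema-like dict from a function's type hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters belong in the input schema
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    return {
        "description": inspect.getdoc(fn),
        "properties": {name: {"type": type_map.get(tp, "object")}
                       for name, tp in hints.items()},
        "required": list(hints),
    }

schema = derive_schema(web_search)
# schema["required"] is ["query"], matching the input schema table above
```

Note that this derivation explains the 0% description coverage discussed below: type hints carry types and names, but no per-parameter prose.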
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Search the web' but doesn't describe traits like rate limits, authentication needs, result format, or potential side effects (e.g., network usage). This leaves significant gaps in understanding how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
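As a hypothetical illustration of the point (this is suggested wording, not the maintainer's), the docstring that becomes the tool description could disclose the network call, the auth requirement, and the result format directly:

```python
def web_search(query: str) -> str:
    """Search the web for information about the given query.

    Read-only network call to the Tavily search API; requires the
    TAVILY_API_KEY environment variable and is subject to Tavily's
    rate limits. Returns the results as a single string of search
    context. No data is modified.
    """
    ...  # stub: body omitted, the disclosure lives in the docstring
```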

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without any unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly, which is ideal for a simple tool like this.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter) and the presence of an output schema, the description is somewhat complete but lacks depth. It covers the basic purpose but misses behavioral details and usage guidelines, which are important for an agent to use it effectively in varied contexts.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal meaning beyond the input schema, which has 0% description coverage. It implies the 'query' parameter is for searching, but doesn't elaborate on syntax, constraints, or examples. With one parameter and low schema coverage, the description provides some context but doesn't fully compensate for the lack of detailed parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
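One common remedy (a sketch, not part of this server) is to attach a description to the parameter itself with `typing.Annotated`, which schema generators that read extras can surface. The example query string below is illustrative:

```python
from typing import Annotated, get_type_hints

def web_search(
    query: Annotated[str, "Natural-language search query, e.g. 'latest Python release date'"],
) -> str:
    """Search the web for information about the given query"""
    ...  # stub handler

# Retrieve the annotation metadata the way a schema generator might:
hints = get_type_hints(web_search, include_extras=True)
param_doc = hints["query"].__metadata__[0]
```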

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search the web') and the resource ('information about the given query'), making the purpose immediately understandable. Its sibling tools ('get_exchange_rate', 'roll_dice') are unrelated, so differentiation matters less here, though a note on what makes this tool distinct would help if similar search tools were ever added.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives or in what contexts it's appropriate. It simply states what it does without any usage context, prerequisites, or exclusions, leaving the agent to infer based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/julie-berlin/pub-aie7-mcp-session'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.