
search

Search the web using Exa's index with filters for domain, date, category, and content. Retrieve results and optionally page contents.

Instructions

Perform a web search using Exa.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | The search query string. | — |
| num_results | No | Number of search results to return. | 10 |
| contents | No | Options for retrieving page contents. Use False to disable. | — |
| include_domains | No | Domains to include in the search. | — |
| exclude_domains | No | Domains to exclude from the search. | — |
| start_crawl_date | No | Only links crawled after this date (YYYY-MM-DD). | — |
| end_crawl_date | No | Only links crawled before this date (YYYY-MM-DD). | — |
| start_published_date | No | Only links published after this date (YYYY-MM-DD). | — |
| end_published_date | No | Only links published before this date (YYYY-MM-DD). | — |
| include_text | No | Strings that must appear in the page text. | — |
| exclude_text | No | Strings that must not appear in the page text. | — |
| type | No | Search type: 'auto', 'fast', 'deep', 'deep-reasoning', or 'instant'. | — |
| category | No | Data category to focus on (e.g., 'company', 'news', 'research_paper'). | — |
| flags | No | Experimental flags for Exa usage. | — |
| moderation | No | If True, moderate search results for safety. | — |
| user_location | No | Two-letter ISO country code for the user's location. | — |
| additional_queries | No | Alternative query formulations for deep search. | — |
| output_schema | No | JSON schema for deep search structured output. | — |

Output Schema

No output fields are defined in the schema. The handler returns a dict containing search results, with page contents included when requested.
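To make the parameters concrete, here is an illustrative JSON-RPC `tools/call` payload of the kind the handler ultimately sends to the Exa MCP endpoint. The query and filter values are made up; note that only `num_results` is renamed to camelCase (`numResults`), mirroring the handler's argument-building code.

```python
import json

# Hypothetical tools/call payload; values are illustrative only.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search_exa",
        "arguments": {
            "query": "open-source vector databases",
            "numResults": 5,                      # num_results -> numResults
            "include_domains": ["github.com"],    # other filters keep snake_case
            "start_published_date": "2024-01-01",
        },
    },
}

print(json.dumps(payload, indent=2))
```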

Implementation Reference

  • The main handler for the 'search' tool, registered as an MCP tool via the @mcp.tool() decorator. It accepts the required query plus many optional parameters (num_results, contents, include_domains, etc.), builds an arguments dict containing only the parameters that were set, and delegates to the _call_mcp_tool helper to invoke 'web_search_exa' on the public Exa MCP endpoint.
    @mcp.tool()
    def search(
        query: str,
        num_results: int | None = None,
        contents: ContentsOptions | None = None,
        include_domains: list[str] | None = None,
        exclude_domains: list[str] | None = None,
        start_crawl_date: str | None = None,
        end_crawl_date: str | None = None,
        start_published_date: str | None = None,
        end_published_date: str | None = None,
        include_text: list[str] | None = None,
        exclude_text: list[str] | None = None,
        type: SearchType | None = None,
        category: Category | None = None,
        flags: list[str] | None = None,
        moderation: bool | None = None,
        user_location: str | None = None,
        additional_queries: list[str] | None = None,
        output_schema: JSONSchemaInput | None = None,
    ) -> dict[str, Any]:
        """Perform a web search using Exa.
    
        Args:
            query: The search query string.
            num_results: Number of search results to return (default: 10).
            contents: Options for retrieving page contents. Use False to disable.
            include_domains: Domains to include in the search.
            exclude_domains: Domains to exclude from the search.
            start_crawl_date: Only links crawled after this date (YYYY-MM-DD).
            end_crawl_date: Only links crawled before this date (YYYY-MM-DD).
            start_published_date: Only links published after this date (YYYY-MM-DD).
            end_published_date: Only links published before this date (YYYY-MM-DD).
            include_text: Strings that must appear in the page text.
            exclude_text: Strings that must not appear in the page text.
            type: Search type - 'auto', 'fast', 'deep', 'deep-reasoning', or 'instant'.
            category: Data category to focus on (e.g., 'company', 'news', 'research_paper').
            flags: Experimental flags for Exa usage.
            moderation: If True, moderate search results for safety.
            user_location: Two-letter ISO country code for user location.
            additional_queries: Alternative query formulations for deep search.
            output_schema: JSON schema for deep search structured output.
    
        Returns:
            Dict containing search results with optional contents.
    
        Example:
            >>> search("hottest AI startups", num_results=5)
            {"results": [{"title": "...", "url": "..."}]}
        """
        import asyncio
    
        if not query:
            raise ValueError("Query cannot be empty")
    
        arguments: dict[str, Any] = {"query": query}
        if num_results is not None:
            arguments["numResults"] = num_results
        if contents is not None:
            arguments["contents"] = contents
        if include_domains is not None:
            arguments["include_domains"] = include_domains
        if exclude_domains is not None:
            arguments["exclude_domains"] = exclude_domains
        if start_crawl_date is not None:
            arguments["start_crawl_date"] = start_crawl_date
        if end_crawl_date is not None:
            arguments["end_crawl_date"] = end_crawl_date
        if start_published_date is not None:
            arguments["start_published_date"] = start_published_date
        if end_published_date is not None:
            arguments["end_published_date"] = end_published_date
        if include_text is not None:
            arguments["include_text"] = include_text
        if exclude_text is not None:
            arguments["exclude_text"] = exclude_text
        if type is not None:
            arguments["type"] = type
        if category is not None:
            arguments["category"] = category
        if flags is not None:
            arguments["flags"] = flags
        if moderation is not None:
            arguments["moderation"] = moderation
        if user_location is not None:
            arguments["user_location"] = user_location
        if additional_queries is not None:
            arguments["additional_queries"] = additional_queries
        if output_schema is not None:
            arguments["output_schema"] = output_schema
    
        try:
            # asyncio.get_event_loop() is deprecated outside a running loop;
            # asyncio.run() is the supported entry point for sync callers.
            result = asyncio.run(_call_mcp_tool("web_search_exa", arguments))
            return result
        except Exception as e:
            return {"error": str(e)}
  • Type aliases used as schema for the search tool: ContentsOptions (dict or False), SearchType (auto/fast/deep/deep-reasoning/instant), Category (company, news, research_paper, etc.), and JSONSchemaInput.
    from typing import Any, Literal, Union

    ContentsOptions = Union[dict[str, Any], Literal[False]]
    SearchType = Literal["auto", "fast", "deep", "deep-reasoning", "instant"]
    Category = Literal[
        "company",
        "news",
        "research_paper",
        "pdf",
        "github",
        "hackernews",
        "video",
        "image",
    ]
    ResearchModel = Literal["exa-research-fast", "exa-research", "exa-research-pro"]
    JSONSchemaInput = dict[str, Any]
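Because SearchType and Category are `typing.Literal` aliases, their permitted values can be recovered at runtime with `typing.get_args`, which is handy for validating inputs before sending a request. A small sketch (the `is_valid` helper is illustrative, not part of the server):

```python
from typing import Literal, get_args

SearchType = Literal["auto", "fast", "deep", "deep-reasoning", "instant"]
Category = Literal[
    "company", "news", "research_paper", "pdf",
    "github", "hackernews", "video", "image",
]

def is_valid(value: str, alias: object) -> bool:
    """Return True if value is one of the alias's Literal members."""
    return value in get_args(alias)
```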
  • The MCP server instance created with fastmcp.FastMCP('mcp-exa'). The 'search' function is registered as a tool via the @mcp.tool() decorator.
    mcp = fastmcp.FastMCP("mcp-exa")
  • Helper function _call_mcp_tool that sends a JSON-RPC request to the public Exa MCP endpoint (https://mcp.exa.ai/mcp) to invoke the 'web_search_exa' tool, then parses the SSE stream response.
    import json

    import httpx

    BASE_URL = "https://mcp.exa.ai"

    async def _call_mcp_tool(tool_name: str, arguments: dict[str, Any]) -> dict[str, Any]:
        """Call a tool on the public Exa MCP server."""
        request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {
                "name": tool_name,
                "arguments": arguments,
            },
        }

        async with httpx.AsyncClient(timeout=60.0) as client:
            response = await client.post(
                f"{BASE_URL}/mcp",
                json=request,
                headers={
                    "accept": "application/json, text/event-stream",
                    "content-type": "application/json",
                },
            )
            response.raise_for_status()

            # Responses arrive as an SSE stream; JSON payloads follow "data: ".
            # Never eval() network data — parse it as JSON.
            for line in response.text.split("\n"):
                if line.startswith("data: "):
                    try:
                        parsed = json.loads(line[len("data: "):])
                    except json.JSONDecodeError:
                        continue
                    if "result" in parsed and parsed["result"].get("content"):
                        return {
                            "results": parsed["result"]["content"][0].get("text", "")
                        }

            return {"results": ""}
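The SSE-parsing loop at the end of the helper can be exercised offline. The sketch below extracts the same logic into a standalone function and feeds it a synthetic response body; the payload shape mirrors what the helper expects, not a captured Exa response.

```python
import json

def parse_sse_result(response_text: str) -> dict:
    """Extract the first tool result from an SSE-framed JSON-RPC response."""
    for line in response_text.split("\n"):
        if not line.startswith("data: "):
            continue
        try:
            parsed = json.loads(line[len("data: "):])
        except json.JSONDecodeError:
            continue
        content = parsed.get("result", {}).get("content")
        if content:
            return {"results": content[0].get("text", "")}
    return {"results": ""}

# Synthetic SSE body, for illustration only.
body = "event: message\ndata: " + json.dumps(
    {"jsonrpc": "2.0", "id": 1,
     "result": {"content": [{"type": "text", "text": "hello"}]}}
)
```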
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only says 'using Exa,' revealing the service provider but omitting essential details like rate limits, authentication, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but too minimal for a tool with 18 parameters. It front-loads the purpose but lacks structure and detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (18 parameters, output schema exists), the description is incomplete. It fails to explain the return format, output schema usage, or how to combine filters, relying entirely on the input schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add any parameter meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Perform a web search using Exa,' which identifies the verb and resource. However, it does not differentiate from sibling tools like 'answer' or 'find_similar' that may also involve searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description only states what it does without any context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/daedalus/mcp-exa'
