search
Search the web using Exa's index, with filters for domain, date, category, and content. Retrieves results and, optionally, page contents.
Instructions
Perform a web search using Exa.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query string. | |
| num_results | No | Number of search results to return (default: 10). | |
| contents | No | Options for retrieving page contents. Use False to disable. | |
| include_domains | No | Domains to include in the search. | |
| exclude_domains | No | Domains to exclude from the search. | |
| start_crawl_date | No | Only links crawled after this date (YYYY-MM-DD). | |
| end_crawl_date | No | Only links crawled before this date (YYYY-MM-DD). | |
| start_published_date | No | Only links published after this date (YYYY-MM-DD). | |
| end_published_date | No | Only links published before this date (YYYY-MM-DD). | |
| include_text | No | Strings that must appear in the page text. | |
| exclude_text | No | Strings that must not appear in the page text. | |
| type | No | Search type - 'auto', 'fast', 'deep', 'deep-reasoning', or 'instant'. | |
| category | No | Data category to focus on (e.g., 'company', 'news', 'research_paper'). | |
| flags | No | Experimental flags for Exa usage. | |
| moderation | No | If True, moderate search results for safety. | |
| user_location | No | Two-letter ISO country code for user location. | |
| additional_queries | No | Alternative query formulations for deep search. | |
| output_schema | No | JSON schema for deep search structured output. |
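The handler forwards these parameters to the remote 'web_search_exa' tool largely as-is, dropping any that are None; note that num_results is renamed to the camelCase numResults on the wire, while the other filters keep their snake_case names. A minimal sketch of that mapping (the helper name build_arguments is hypothetical, for illustration only):

```python
from typing import Any


def build_arguments(query: str, **params: Any) -> dict[str, Any]:
    """Mirror the argument-building logic of the 'search' handler.

    Only non-None parameters are forwarded, and num_results is
    renamed to camelCase "numResults".
    """
    if not query:
        raise ValueError("Query cannot be empty")
    renames = {"num_results": "numResults"}
    arguments: dict[str, Any] = {"query": query}
    for name, value in params.items():
        if value is not None:
            arguments[renames.get(name, name)] = value
    return arguments


args = build_arguments(
    "hottest AI startups",
    num_results=5,
    include_domains=["techcrunch.com"],
    start_published_date="2024-01-01",
    category=None,  # None parameters are dropped, not sent as null
)
# args == {"query": "hottest AI startups", "numResults": 5,
#          "include_domains": ["techcrunch.com"],
#          "start_published_date": "2024-01-01"}
```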
Output Schema
No structured output schema is defined. The tool returns a dict: on success, a `results` key holding the search results (with optional page contents); on failure, an `error` key with the error message.
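Because the handler catches exceptions and returns them as a dict rather than raising, callers should branch on the result shape. A caller-side sketch (handle_search_result is a hypothetical helper, not part of the tool):

```python
def handle_search_result(result: dict) -> str:
    """Unpack a result dict from the search tool.

    The tool returns {"error": "..."} on failure and
    {"results": "..."} on success.
    """
    if "error" in result:
        raise RuntimeError(f"search failed: {result['error']}")
    return result.get("results", "")


print(handle_search_result({"results": "Example result"}))  # Example result
```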
Implementation Reference
- src/mcp_exa/_server.py:72-169 (handler): The main handler for the 'search' tool, registered as an MCP tool via the @mcp.tool() decorator. It accepts a query and many optional parameters (num_results, contents, include_domains, etc.), builds an arguments dict, and delegates to the _call_mcp_tool helper to invoke 'web_search_exa' on the public Exa MCP endpoint.

```python
@mcp.tool()
def search(
    query: str,
    num_results: int | None = None,
    contents: ContentsOptions | None = None,
    include_domains: list[str] | None = None,
    exclude_domains: list[str] | None = None,
    start_crawl_date: str | None = None,
    end_crawl_date: str | None = None,
    start_published_date: str | None = None,
    end_published_date: str | None = None,
    include_text: list[str] | None = None,
    exclude_text: list[str] | None = None,
    type: SearchType | None = None,
    category: Category | None = None,
    flags: list[str] | None = None,
    moderation: bool | None = None,
    user_location: str | None = None,
    additional_queries: list[str] | None = None,
    output_schema: JSONSchemaInput | None = None,
) -> dict[str, Any]:
    """Perform a web search using Exa.

    Args:
        query: The search query string.
        num_results: Number of search results to return (default: 10).
        contents: Options for retrieving page contents. Use False to disable.
        include_domains: Domains to include in the search.
        exclude_domains: Domains to exclude from the search.
        start_crawl_date: Only links crawled after this date (YYYY-MM-DD).
        end_crawl_date: Only links crawled before this date (YYYY-MM-DD).
        start_published_date: Only links published after this date (YYYY-MM-DD).
        end_published_date: Only links published before this date (YYYY-MM-DD).
        include_text: Strings that must appear in the page text.
        exclude_text: Strings that must not appear in the page text.
        type: Search type - 'auto', 'fast', 'deep', 'deep-reasoning', or 'instant'.
        category: Data category to focus on (e.g., 'company', 'news', 'research_paper').
        flags: Experimental flags for Exa usage.
        moderation: If True, moderate search results for safety.
        user_location: Two-letter ISO country code for user location.
        additional_queries: Alternative query formulations for deep search.
        output_schema: JSON schema for deep search structured output.

    Returns:
        Dict containing search results with optional contents.

    Example:
        >>> search("hottest AI startups", num_results=5)
        {"results": [{"title": "...", "url": "..."}]}
    """
    import asyncio

    if not query:
        raise ValueError("Query cannot be empty")

    # Forward only the parameters that were actually supplied.
    arguments: dict[str, Any] = {"query": query}
    if num_results is not None:
        arguments["numResults"] = num_results
    if contents is not None:
        arguments["contents"] = contents
    if include_domains is not None:
        arguments["include_domains"] = include_domains
    if exclude_domains is not None:
        arguments["exclude_domains"] = exclude_domains
    if start_crawl_date is not None:
        arguments["start_crawl_date"] = start_crawl_date
    if end_crawl_date is not None:
        arguments["end_crawl_date"] = end_crawl_date
    if start_published_date is not None:
        arguments["start_published_date"] = start_published_date
    if end_published_date is not None:
        arguments["end_published_date"] = end_published_date
    if include_text is not None:
        arguments["include_text"] = include_text
    if exclude_text is not None:
        arguments["exclude_text"] = exclude_text
    if type is not None:
        arguments["type"] = type
    if category is not None:
        arguments["category"] = category
    if flags is not None:
        arguments["flags"] = flags
    if moderation is not None:
        arguments["moderation"] = moderation
    if user_location is not None:
        arguments["user_location"] = user_location
    if additional_queries is not None:
        arguments["additional_queries"] = additional_queries
    if output_schema is not None:
        arguments["output_schema"] = output_schema

    try:
        # asyncio.run replaces the deprecated
        # asyncio.get_event_loop().run_until_complete(...) pattern.
        return asyncio.run(_call_mcp_tool("web_search_exa", arguments))
    except Exception as e:
        return {"error": str(e)}
```

- src/mcp_exa/_server.py:14-27 (schema): Type aliases used as the schema for the search tool: ContentsOptions (a dict of options or False), SearchType ('auto', 'fast', 'deep', 'deep-reasoning', or 'instant'), Category ('company', 'news', 'research_paper', etc.), and JSONSchemaInput.
```python
ContentsOptions = Union[dict[str, Any], Literal[False]]
SearchType = Literal["auto", "fast", "deep", "deep-reasoning", "instant"]
Category = Literal[
    "company",
    "news",
    "research_paper",
    "pdf",
    "github",
    "hackernews",
    "video",
    "image",
]
ResearchModel = Literal["exa-research-fast", "exa-research", "exa-research-pro"]
JSONSchemaInput = dict[str, Any]
```

- src/mcp_exa/_server.py:10 (registration): The MCP server instance, created with fastmcp.FastMCP('mcp-exa'). The 'search' function is registered as a tool via the @mcp.tool() decorator on line 72.
```python
mcp = fastmcp.FastMCP("mcp-exa")
```

- src/mcp_exa/_server.py:30-69 (helper): The _call_mcp_tool helper sends a JSON-RPC 2.0 tools/call request to the public Exa MCP endpoint (https://mcp.exa.ai/mcp) to invoke the named tool, then parses the SSE (text/event-stream) response to extract the result text.
```python
async def _call_mcp_tool(tool_name: str, arguments: dict[str, Any]) -> dict[str, Any]:
    """Call a tool on the public Exa MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": tool_name,
            "arguments": arguments,
        },
    }
    async with httpx.AsyncClient(timeout=60.0) as client:
        response = await client.post(
            f"{BASE_URL}/mcp",
            json=request,
            headers={
                "accept": "application/json, text/event-stream",
                "content-type": "application/json",
            },
        )
        response.raise_for_status()
        response_text = response.text

    for line in response_text.split("\n"):
        if line.startswith("data: "):
            data = line[6:]  # strip the "data: " prefix
            try:
                # Parse the JSON payload of this SSE event.
                # (json.loads, not eval: the payload is untrusted input.)
                parsed = json.loads(data)
            except json.JSONDecodeError:
                continue
            if "result" in parsed and parsed["result"].get("content"):
                return {"results": parsed["result"]["content"][0].get("text", "")}
    return {"results": ""}
```
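The SSE handling above can be exercised without the network: given a raw text/event-stream body, the helper scans for "data: " lines, parses each as JSON, and pulls the text out of the first content block. A standalone sketch of that extraction, using json.loads to parse the payload (the sample body is illustrative, shaped like a JSON-RPC tools/call response):

```python
import json


def extract_results(sse_body: str) -> dict[str, str]:
    """Extract result text from an SSE response body.

    Mirrors the helper's parsing: find "data: " lines, parse each
    as JSON, and return the first content block's text.
    """
    for line in sse_body.split("\n"):
        if line.startswith("data: "):
            try:
                parsed = json.loads(line[6:])
            except json.JSONDecodeError:
                continue
            if "result" in parsed and parsed["result"].get("content"):
                return {"results": parsed["result"]["content"][0].get("text", "")}
    return {"results": ""}


# Illustrative SSE body of the shape an MCP endpoint streams back.
body = (
    'event: message\n'
    'data: {"jsonrpc": "2.0", "id": 1, "result": '
    '{"content": [{"type": "text", "text": "Example result"}]}}\n'
)
print(extract_results(body))  # {'results': 'Example result'}
```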