tavily-crawl
Initiate a structured web crawl from a specified URL, controlling depth, breadth, and focus on specific sections or domains using regex and predefined categories. Extract content in markdown or text format for targeted data retrieval.
Instructions
A powerful web crawler that initiates a structured web crawl starting from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site.
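The depth, breadth, and limit controls are easiest to picture as bounds on a breadth-first traversal. The sketch below is purely illustrative (it is not Tavily's implementation) and uses a stubbed get_links helper; it exists only to show what each bound cuts off.

from collections import deque

def get_links(url: str) -> list[str]:
    """Hypothetical stand-in for fetching a page and extracting its internal links."""
    return []  # a real crawler would fetch and parse the page here

def bounded_crawl(base_url: str, max_depth: int = 1, max_breadth: int = 20, limit: int = 50) -> list[str]:
    """Illustrative breadth-first crawl bounded the way tavily-crawl's parameters describe."""
    visited: list[str] = []
    queue: deque[tuple[str, int]] = deque([(base_url, 0)])
    seen = {base_url}
    while queue and len(visited) < limit:            # limit caps total pages processed
        url, depth = queue.popleft()
        visited.append(url)
        if depth >= max_depth:                       # max_depth caps distance from the base URL
            continue
        for link in get_links(url)[:max_breadth]:    # max_breadth caps links followed per page
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited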
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| allow_external | No | Whether to allow following links that go to external domains | false |
| categories | No | Filter URLs using predefined categories like documentation, blog, api, etc | |
| extract_depth | No | Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency | basic |
| format | No | The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency. | markdown |
| instructions | Yes | Natural language instructions for the crawler | |
| limit | No | Total number of links the crawler will process before stopping | 50 |
| max_breadth | No | Max number of links to follow per level of the tree (i.e., per page) | 20 |
| max_depth | No | Max depth of the crawl. Defines how far from the base URL the crawler can explore. | 1 |
| select_domains | No | Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\.example\.com$) | |
| select_paths | No | Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*) | |
| url | Yes | Root URL to begin the crawl | |
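As a concrete illustration, a crawl scoped to a documentation subdomain might pass arguments like the following; the domain, paths, and instructions are placeholders, not required values.

# Example tavily-crawl arguments (illustrative values only)
crawl_args = {
    "url": "https://docs.example.com",
    "instructions": "Collect the REST API reference pages and their code samples",
    "max_depth": 2,
    "max_breadth": 10,
    "limit": 40,
    "select_domains": [r"^docs\.example\.com$"],
    "select_paths": [r"/api/.*"],
    "categories": ["Documentation", "Developers"],
    "extract_depth": "advanced",
    "format": "markdown",
}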
Input Schema (JSON Schema)
{
"properties": {
"allow_external": {
"default": false,
"description": "Whether to allow following links that go to external domains",
"title": "Allow External",
"type": "boolean"
},
"categories": {
"description": "Filter URLs using predefined categories like documentation, blog, api, etc",
"items": {
"enum": [
"Careers",
"Blog",
"Documentation",
"About",
"Pricing",
"Community",
"Developers",
"Contact",
"Media"
],
"type": "string"
},
"title": "Categories",
"type": "array"
},
"extract_depth": {
"default": "basic",
"description": "Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency",
"enum": [
"basic",
"advanced"
],
"title": "Extract Depth",
"type": "string"
},
"format": {
"default": "markdown",
"description": "The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency.",
"enum": [
"markdown",
"text"
],
"title": "Format",
"type": "string"
},
"instructions": {
"description": "Natural language instructions for the crawler",
"title": "Instructions",
"type": "string"
},
"limit": {
"default": 50,
"description": "Total number of links the crawler will process before stopping",
"minimum": 1,
"title": "Limit",
"type": "integer"
},
"max_breadth": {
"default": 20,
"description": "Max number of links to follow per level of the tree (i.e., per page)",
"minimum": 1,
"title": "Max Breadth",
"type": "integer"
},
"max_depth": {
"default": 1,
"description": "Max depth of the crawl. Defines how far from the base URL the crawler can explore.",
"minimum": 1,
"title": "Max Depth",
"type": "integer"
},
"select_domains": {
"description": "Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)",
"items": {
"type": "string"
},
"title": "Select Domains",
"type": "array"
},
"select_paths": {
"description": "Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)",
"items": {
"type": "string"
},
"title": "Select Paths",
"type": "array"
},
"url": {
"description": "Root URL to begin the crawl",
"title": "Url",
"type": "string"
}
},
"required": [
"url",
"instructions"
],
"type": "object"
}
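If you want to check arguments client-side before invoking the tool, the schema above can be fed to the third-party jsonschema package. This is optional caller-side tooling, not something the server requires, and the schema file name below is hypothetical.

import json
import jsonschema

# Hypothetical file containing the JSON Schema printed above.
with open("tavily_crawl_schema.json") as f:
    tool_schema = json.load(f)

args = {
    "url": "https://docs.example.com",
    "instructions": "Collect the REST API reference pages",
    "max_depth": 2,
    "format": "markdown",
}

# Raises jsonschema.exceptions.ValidationError if a required field is missing
# or a value violates an enum or minimum constraint.
jsonschema.validate(instance=args, schema=tool_schema)
print("arguments are valid")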
Implementation Reference
- src/tavily_mcp_sse/server.py:202-280 (handler): The handler function for the 'tavily-crawl' tool. It defines the input parameters with their descriptions and defaults, makes an HTTP POST request to Tavily's crawl API with those parameters, handles error responses, validates the reply with the TavilyCrawlResponse Pydantic model, and returns the dumped model. (A hedged end-to-end sketch of the same request-and-validation flow follows this list.)

  @mcp_server.tool(name='tavily-crawl')
  async def crawl(
      url: Annotated[str, Field(
          description="""Root URL to begin the crawl"""
      )],
      instructions: Annotated[str, Field(
          description="""Natural language instructions for the crawler"""
      )],
      max_depth: Annotated[int, Field(
          default=1,
          ge=1,
          description="""Max depth of the crawl. Defines how far from the base URL the crawler can explore."""
      )],
      max_breadth: Annotated[int, Field(
          default=20,
          ge=1,
          description="""Max number of links to follow per level of the tree (i.e., per page)"""
      )],
      limit: Annotated[int, Field(
          default=50,
          ge=1,
          description="""Total number of links the crawler will process before stopping"""
      )],
      select_paths: Annotated[list[str], Field(
          default_factory=list,
          description="""Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)"""
      )],
      select_domains: Annotated[list[str], Field(
          default_factory=list,
          description="""Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)"""
      )],
      allow_external: Annotated[bool, Field(
          default=False,
          description="""Whether to allow following links that go to external domains"""
      )],
      categories: Annotated[list[CrawlCategoriesLiteral], Field(
          default_factory=list,
          description="""Filter URLs using predefined categories like documentation, blog, api, etc"""
      )],
      extract_depth: Annotated[ExtractDepthLiteral, Field(
          default="basic",
          description="Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency"
      )],
      format: Annotated[FormatLiteral, Field(
          default="markdown",
          description="""The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency."""
      )]
  ) -> dict[str, Any]:
      """A powerful web crawler that initiates a structured web crawl starting from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site."""
      endpoint = base_urls['crawl']
      search_params = {
          "url": url,
          "instructions": instructions,
          "max_depth": max_depth,
          "max_breadth": max_breadth,
          "limit": limit,
          "select_paths": select_paths,
          "select_domains": select_domains,
          "allow_external": allow_external,
          "categories": categories,
          "extract_depth": extract_depth,
          "format": format,
          "api_key": TAVILY_API_KEY,
      }
      try:
          async with httpx.AsyncClient(headers=headers) as client:
              response = await client.post(endpoint, json=search_params)
              if not response.is_success:
                  if response.status_code == 401:
                      raise ValueError("Invalid API Key")
                  elif response.status_code == 429:
                      raise ValueError("Usage limit exceeded")
                  _ = response.raise_for_status()
      except BaseException as e:
          raise e
      response_dict: dict[str, Any] = response.json()
      return TavilyCrawlResponse.model_validate(response_dict).model_dump()
- src/tavily_mcp_sse/schemas.py:40-49 (schema): Pydantic schemas for the output of the tavily-crawl tool: the top-level TavilyCrawlResponse model and the nested CrawlResult model used for response validation.

  # Tavily Crawl Response Schema
  class CrawlResult(BaseModel):
      url: str
      raw_content: str

  class TavilyCrawlResponse(BaseModel):
      base_url: str
      results: list[CrawlResult]
      response_time: float
- src/tavily_mcp_sse/server.py:202-202 (registration): The decorator that registers the crawl function as the 'tavily-crawl' tool in the FastMCP server.

  @mcp_server.tool(name='tavily-crawl')
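The listing above is the server-side view. The sketch below re-creates the same request-and-validation flow outside the MCP server, for orientation only: it assumes the crawl endpoint is https://api.tavily.com/crawl (in server.py this comes from base_urls['crawl']), reads the API key from an environment variable for illustration, and copies the two Pydantic models so the snippet runs on its own.

import os
from typing import Any

import httpx
from pydantic import BaseModel

# Copies of the models from schemas.py so this sketch is self-contained.
class CrawlResult(BaseModel):
    url: str
    raw_content: str

class TavilyCrawlResponse(BaseModel):
    base_url: str
    results: list[CrawlResult]
    response_time: float

# Assumed endpoint; the handler resolves it via base_urls['crawl'].
CRAWL_ENDPOINT = "https://api.tavily.com/crawl"

payload: dict[str, Any] = {
    "url": "https://docs.example.com",          # placeholder site
    "instructions": "Collect the REST API reference pages",
    "max_depth": 1,
    "max_breadth": 20,
    "limit": 20,
    "extract_depth": "basic",
    "format": "markdown",
    "api_key": os.environ["TAVILY_API_KEY"],    # the handler uses a module-level TAVILY_API_KEY
}

response = httpx.post(CRAWL_ENDPOINT, json=payload, timeout=60.0)
if response.status_code == 401:
    raise ValueError("Invalid API Key")
elif response.status_code == 429:
    raise ValueError("Usage limit exceeded")
response.raise_for_status()

# Same validation step the handler performs before returning to the MCP client.
crawl = TavilyCrawlResponse.model_validate(response.json())
for result in crawl.results:
    print(result.url, len(result.raw_content), "chars")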