tavily-crawl

Crawl websites starting from a base URL to map site structure, follow internal links, and extract content with controlled depth and breadth parameters for comprehensive web analysis.

Instructions

A powerful crawler that initiates a structured web crawl from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site.
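
As a concrete illustration, here is what a call to tavily-crawl might look like, written as the JSON arguments an MCP client would pass to the tool. The parameter names come from the input schema below; the target URL, limits, and instruction text are hypothetical values chosen for the example.

    {
      "url": "https://docs.example.com",
      "max_depth": 2,
      "max_breadth": 10,
      "limit": 30,
      "instructions": "Focus on API reference pages"
    }

With these arguments the crawler starts at the base URL, follows internal links up to two levels away, takes at most 10 links from any single page, and stops after processing 30 links in total.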

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| url | Yes | The root URL to begin the crawl | |
| max_depth | No | Max depth of the crawl; defines how far from the base URL the crawler can explore | 1 |
| max_breadth | No | Max number of links to follow per level of the tree (i.e., per page) | 20 |
| limit | No | Total number of links the crawler will process before stopping | 50 |
| instructions | No | Natural language instructions for the crawler | |
| select_paths | No | Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*) | [] |
| select_domains | No | Regex patterns to restrict crawling to specific domains or subdomains (e.g., ^docs\.example\.com$) | [] |
| allow_external | No | Whether to allow following links that go to external domains | false |
| categories | No | Filter URLs using predefined categories (Careers, Blog, Documentation, About, Pricing, Community, Developers, Contact, Media) | [] |
| extract_depth | No | Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency | basic |
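
For example, to keep a crawl inside a documentation subdomain, the path and domain filters can be combined. This sketch reuses the example regex patterns from the table above; the base URL is hypothetical, and note that in JSON the backslash in a regex must itself be escaped:

    {
      "url": "https://example.com",
      "select_paths": ["/docs/.*", "/api/v1.*"],
      "select_domains": ["^docs\\.example\\.com$"],
      "allow_external": false,
      "extract_depth": "advanced"
    }

Here allow_external is stated explicitly for clarity, although false is already the default, and extract_depth is raised to advanced to pull tables and embedded content at the cost of some latency.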

Input Schema (JSON Schema)

{ "properties": { "allow_external": { "default": false, "description": "Whether to allow following links that go to external domains", "type": "boolean" }, "categories": { "default": [], "description": "Filter URLs using predefined categories like documentation, blog, api, etc", "items": { "enum": [ "Careers", "Blog", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", "Media" ], "type": "string" }, "type": "array" }, "extract_depth": { "default": "basic", "description": "Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency", "enum": [ "basic", "advanced" ], "type": "string" }, "instructions": { "description": "Natural language instructions for the crawler", "type": "string" }, "limit": { "default": 50, "description": "Total number of links the crawler will process before stopping", "minimum": 1, "type": "integer" }, "max_breadth": { "default": 20, "description": "Max number of links to follow per level of the tree (i.e., per page)", "minimum": 1, "type": "integer" }, "max_depth": { "default": 1, "description": "Max depth of the crawl. Defines how far from the base URL the crawler can explore.", "minimum": 1, "type": "integer" }, "select_domains": { "default": [], "description": "Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)", "items": { "type": "string" }, "type": "array" }, "select_paths": { "default": [], "description": "Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)", "items": { "type": "string" }, "type": "array" }, "url": { "description": "The root URL to begin the crawl", "type": "string" } }, "required": [ "url" ], "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP directory API:

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/Jeetanshu18/tavily-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.