tavily-mcp

tavily-crawl

Initiate a structured web crawl from a base URL, following internal links with customizable depth, breadth, and focus. Restrict the crawl to specific paths, domains, or predefined categories, and extract page content using basic or advanced extraction.

Instructions

A powerful web crawler that initiates a structured web crawl starting from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site.

Input Schema

Name | Required | Description | Default
---- | -------- | ----------- | -------
allow_external | No | Whether to allow following links that go to external domains | false
categories | No | Filter URLs using predefined categories like documentation, blog, api, etc. | []
extract_depth | No | Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency | basic
instructions | No | Natural language instructions for the crawler | (none)
limit | No | Total number of links the crawler will process before stopping | 50
max_breadth | No | Max number of links to follow per level of the tree (i.e., per page) | 20
max_depth | No | Max depth of the crawl; defines how far from the base URL the crawler can explore | 1
select_domains | No | Regex patterns to restrict crawling to specific domains or subdomains (e.g., ^docs\.example\.com$) | []
select_paths | No | Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*) | []
url | Yes | The root URL to begin the crawl | (none)
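
For illustration, here is a minimal sketch of the arguments a client might pass to tavily-crawl. The site, depth, and path pattern are hypothetical examples, not recommendations; the defaults come from the schema below.

// Hypothetical tavily-crawl arguments: crawl the docs section of an
// assumed example site, two levels deep, following at most 10 links per
// page and stopping after 40 links in total.
const crawlArgs = {
  url: "https://example.com",   // required: the root URL of the crawl
  max_depth: 2,                 // explore up to two hops from the root
  max_breadth: 10,              // follow at most 10 links per page
  limit: 40,                    // cap the total number of processed links
  select_paths: ["/docs/.*"],   // keep only documentation paths
  extract_depth: "advanced",    // richer extraction (tables, embedded content)
};

Note that max_breadth bounds each level of the tree, while limit caps the crawl as a whole.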

Input Schema (JSON Schema)

{ "properties": { "allow_external": { "default": false, "description": "Whether to allow following links that go to external domains", "type": "boolean" }, "categories": { "default": [], "description": "Filter URLs using predefined categories like documentation, blog, api, etc", "items": { "enum": [ "Careers", "Blog", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", "Media" ], "type": "string" }, "type": "array" }, "extract_depth": { "default": "basic", "description": "Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency", "enum": [ "basic", "advanced" ], "type": "string" }, "instructions": { "description": "Natural language instructions for the crawler", "type": "string" }, "limit": { "default": 50, "description": "Total number of links the crawler will process before stopping", "minimum": 1, "type": "integer" }, "max_breadth": { "default": 20, "description": "Max number of links to follow per level of the tree (i.e., per page)", "minimum": 1, "type": "integer" }, "max_depth": { "default": 1, "description": "Max depth of the crawl. Defines how far from the base URL the crawler can explore.", "minimum": 1, "type": "integer" }, "select_domains": { "default": [], "description": "Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)", "items": { "type": "string" }, "type": "array" }, "select_paths": { "default": [], "description": "Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)", "items": { "type": "string" }, "type": "array" }, "url": { "description": "The root URL to begin the crawl", "type": "string" } }, "required": [ "url" ], "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jackedelic/tavily-mcp'
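
The same endpoint can be queried from code. Here is a minimal TypeScript sketch using the global fetch available in Node 18+; the response shape is not documented here, so it is simply parsed and printed.

// Fetch this server's directory record from the Glama MCP API.
// (Top-level await requires an ES module context.)
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/jackedelic/tavily-mcp",
);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
console.log(await res.json());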

If you have feedback or need assistance with the MCP directory API, please join our Discord server.