
crawl

Extract content from websites by crawling multiple pages from a starting URL, with configurable depth and page limits for structured data collection.

Instructions

Crawls a website starting from the specified URL and extracts content from multiple pages.

Args:
- url: The complete URL of the web page to start crawling from
- maxDepth: The maximum depth level for crawling linked pages
- limit: The maximum number of pages to crawl

Returns:
- Content extracted from the crawled pages in markdown and HTML format
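
For orientation, here are illustrative arguments an MCP client might send for this tool; the URL and numeric values below are placeholders, not documented defaults.

    # Illustrative tool arguments; values are placeholders, not defaults.
    arguments = {
        "url": "https://example.com/docs",  # page to start crawling from
        "maxDepth": 2,                      # follow links at most two levels deep
        "limit": 10,                        # crawl no more than ten pages
    }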

Input Schema

Name      Required  Description  Default
url       Yes       -            -
maxDepth  Yes       -            -
limit     Yes       -            -

Implementation Reference

  • main.py:39-54 (registration)
    Registration and handler wrapper for the 'crawl' MCP tool, which delegates to WebTools.crawl implementation.
    @mcp.tool()
    async def crawl(url: str, maxDepth: int, limit: int) -> str:
        """Crawls a website starting from the specified URL and extracts content from multiple pages.
        Args:
        - url: The complete URL of the web page to start crawling from
        - maxDepth: The maximum depth level for crawling linked pages
        - limit: The maximum number of pages to crawl
    
        Returns:
        - Content extracted from the crawled pages in markdown and HTML format
        """
        try:
            crawl_results = webtools.crawl(url, maxDepth, limit)
            return crawl_results
        except Exception as e:
            return f"Error crawling pages: {str(e)}"
  • Core implementation of the crawl functionality using FirecrawlApp.crawl_url, handling parameters for limit, maxDepth, and formats.
    def crawl(self, url: str, maxDepth: int, limit: int):
        try:
            crawl_page = self.firecrawl.crawl_url(
                url,
                params={
                    "limit": limit,
                    "maxDepth": maxDepth,
                    "scrapeOptions": {"formats": ["markdown", "html"]},
                },
                poll_interval=30,
            )
            return crawl_page
        except Exception as e:
            return f"Error crawling pages: {str(e)}"
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool crawls and extracts content, implying it performs read operations, but lacks details on permissions, rate limits, potential impacts on target sites, or error handling. For a web crawling tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
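
One way to close this gap, assuming an MCP Python SDK version whose tool decorator accepts annotations, is to declare behavioral hints at registration. A sketch, not the server's actual code:

    # Sketch: behavioral hints via ToolAnnotations, assuming the installed
    # SDK's @mcp.tool() accepts an `annotations` argument.
    from mcp.server.fastmcp import FastMCP
    from mcp.types import ToolAnnotations

    mcp = FastMCP("webSearch-Tools")  # server name assumed for the sketch

    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,   # only reads remote content, mutates nothing
            openWorldHint=True,  # reaches out to arbitrary external sites
        )
    )
    async def crawl(url: str, maxDepth: int, limit: int) -> str:
        ...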

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: a concise opening sentence states the purpose, followed by a bulleted list for args and returns. Every sentence earns its place by delivering essential information without redundancy, making it easy to parse and front-loaded with key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (web crawling with three parameters), no annotations, and no output schema, the description is moderately complete. It covers the basic purpose and parameters but lacks details on behavioral traits, error cases, or output-format specifics beyond 'markdown and HTML format'. This is adequate as a minimally viable description, but the gaps are clear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose: 'url' as the starting point, 'maxDepth' for crawl depth, and 'limit' for page count. This compensates well for the schema's lack of descriptions, providing clear semantics for all three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
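
One remedy for the empty schema, assuming the server uses FastMCP with pydantic available, is to attach per-parameter descriptions so they surface in the generated input schema. A sketch, not the server's code:

    # Sketch: per-parameter descriptions via Annotated + pydantic.Field,
    # which FastMCP can copy into the tool's JSON Schema.
    from typing import Annotated

    from mcp.server.fastmcp import FastMCP
    from pydantic import Field

    mcp = FastMCP("webSearch-Tools")  # server name assumed for the sketch

    @mcp.tool()
    async def crawl(
        url: Annotated[str, Field(description="Complete URL of the page to start crawling from")],
        maxDepth: Annotated[int, Field(ge=1, description="Maximum depth of linked pages to follow")],
        limit: Annotated[int, Field(ge=1, description="Maximum number of pages to crawl")],
    ) -> str:
        ...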

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Crawls a website starting from the specified URL and extracts content from multiple pages.' It specifies the verb ('crawls'), the resource ('website'), and the scope ('extracts content from multiple pages'), making the action clear. However, it doesn't explicitly differentiate the tool from siblings like 'extract' or 'scrape', which likely overlap in function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'extract' or 'scrape'. It mentions the tool's function but offers no context about prerequisites, exclusions, or comparative use cases. This leaves the agent without clear direction for tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
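
As an illustration of the guidance being asked for, the docstring could contrast the tool with its siblings. The wording below is hypothetical; the 'scrape' and 'extract' names come from this review, not from verified server documentation:

    # Hypothetical docstring adding explicit tool-selection guidance.
    async def crawl(url: str, maxDepth: int, limit: int) -> str:
        """Crawl a website from a starting URL and return content from multiple pages.

        Use this tool when the content you need spans several linked pages.
        For a single known page, prefer 'scrape'; for pulling structured
        fields out of a page, prefer 'extract'. Crawling issues many requests
        to the target site, so keep 'limit' small unless broad coverage is
        required.
        """
        ...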

