
tool_crawl_docs

Crawl and combine multi-page documentation from a starting URL into a single Markdown document with a table of contents for efficient reference.

Instructions

Crawl multi-page documentation.

Follows same-domain links to build combined docs.

Args:
  • root_url: Starting URL.
  • max_pages: Max pages to crawl (1-20, default 5).

Returns: Combined Markdown with table of contents.
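
For orientation, here is a minimal sketch of invoking this tool over stdio with the official MCP Python client SDK. The launch command and module name (devlens_mcp) are placeholders rather than values taken from this page, and the URL is only an example.

    # Illustrative sketch: calling tool_crawl_docs through the MCP Python client.
    # The server launch command below is a placeholder; start the devlens-mcp
    # server however the project actually documents it.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        server = StdioServerParameters(command="python", args=["-m", "devlens_mcp"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "tool_crawl_docs",
                    arguments={
                        "root_url": "https://docs.python.org/3/library/asyncio.html",
                        "max_pages": 5,
                    },
                )
                # result.content is a list of content blocks; the first one
                # should carry the combined Markdown as text.
                print(result.content[0].text)


    asyncio.run(main())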

Input Schema

Name         Required    Description    Default
root_url     Yes
max_pages    No
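
The schema carries no per-field descriptions. Since FastMCP derives the input schema from the function signature, its rough shape should be something like the dict below (a sketch; the exact keys and titles the server emits may differ).

    # Approximate input schema derived from `root_url: str, max_pages: int = 5`;
    # illustrative only, not the server's verbatim output.
    input_schema = {
        "type": "object",
        "properties": {
            "root_url": {"type": "string"},
            "max_pages": {"type": "integer", "default": 5},
        },
        "required": ["root_url"],
    }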

Output Schema

Name         Required    Description    Default
result       Yes
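
Because the tool returns a plain string, the structured result wraps it in a single required field. Roughly (the Markdown body shown is invented for illustration):

    # Approximate shape of a structured tool result; the content is illustrative.
    output = {
        "result": "# Combined Documentation\n\n## Table of Contents\n\n1. ...",
    }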

Implementation Reference

  • The core implementation of the crawler logic that fetches and aggregates documentation pages.
    async def crawl_docs(
        root_url: str, max_pages: int = 5, *, follow_external: bool = False
    ) -> str:
        """Crawl documentation starting from a root URL.
    
        Follows same-domain links to build a combined document with
        table of contents.
    
        Args:
            root_url: Starting URL for crawl.
            max_pages: Maximum pages to crawl (1-20).
            follow_external: Allow following external links (not recommended).
    
        Returns:
            Combined Markdown with table of contents.
    
        Example:
            >>> docs = await crawl_docs("https://docs.python.org/3/library/asyncio.html")
        """
        from urllib.parse import urlparse
    
        max_pages = min(max(max_pages, 1), 20)
        visited: set[str] = set()
        to_visit: list[str] = [root_url]
        pages: list[tuple[str, str, str]] = []  # (url, title, content)
        root_domain = urlparse(root_url).netloc
    
        while to_visit and len(visited) < max_pages:
            url = to_visit.pop(0)
    
            if url in visited:
                continue
    
            # Skip non-documentation URLs
            if any(
                skip in url.lower()
                for skip in ["login", "signup", "download", "print", ".pdf", ".zip"]
            ):
                continue
    
            try:
                doc = await _adapter.fetch(url, retry=1)  # Fewer retries when crawling
                visited.add(url)
                pages.append((url, doc.title, doc.content))
    
                # Find more links
                async with asyncio.timeout(10):
                    import httpx
    
                    async with httpx.AsyncClient(
                        timeout=10, follow_redirects=True
                    ) as client:
                        resp = await client.get(url)
                        links = _adapter.get_same_domain_links(resp.text, url)
    
                        # Filter links
                        for link in links:
                            if link in visited or link in to_visit:
                                continue
    
                            # Check domain restriction
                            if not follow_external:
                                link_domain = urlparse(link).netloc
                                if link_domain != root_domain:
                                    continue
    
                            # Prioritize docs-like URLs
  • The MCP tool registration for 'tool_crawl_docs', which serves as the entry-point wrapper for the crawl_docs function.
    @mcp.tool()
    async def tool_crawl_docs(root_url: str, max_pages: int = 5) -> str:
        """Crawl multi-page documentation.
    
        Follows same-domain links to build combined docs.
    
        Args:
            root_url: Starting URL.
            max_pages: Max pages to crawl (1-20, default 5).
    
        Returns:
            Combined Markdown with table of contents.
        """
        return await crawl_docs(root_url, max_pages)
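
The first listing cuts off at the link-prioritization step. As a rough illustration of the behavior the description promises (prioritizing docs-like links and assembling the combined Markdown with a table of contents), one way those remaining steps could look is sketched below; this is not the project's actual code, and every name in it is an assumption.

    # Illustrative sketch only: prioritize docs-like links, then merge crawled
    # (url, title, content) tuples into one Markdown document with a TOC.
    def prioritize_links(links: list[str]) -> list[str]:
        """Sort docs-like URLs (guide, api, reference, tutorial...) first."""
        hints = ("doc", "guide", "api", "reference", "tutorial")
        return sorted(links, key=lambda u: not any(h in u.lower() for h in hints))


    def combine_pages(pages: list[tuple[str, str, str]]) -> str:
        """Merge (url, title, content) tuples into Markdown with a table of contents."""
        lines = ["# Combined Documentation", "", "## Table of Contents", ""]
        for i, (_url, title, _content) in enumerate(pages, start=1):
            lines.append(f"{i}. [{title}](#page-{i})")
        for i, (url, title, content) in enumerate(pages, start=1):
            lines += [
                "",
                f'<a id="page-{i}"></a>',
                "",
                f"## {i}. {title}",
                "",
                f"Source: {url}",
                "",
                content,
            ]
        return "\n".join(lines)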

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behaviors: it crawls same-domain links, combines content into Markdown with a table of contents, and has a max pages limit. However, it lacks details on rate limits, error handling, authentication needs, or content processing constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
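
Beyond prose, some of this could be surfaced in structured form: the MCP spec defines tool annotation hints such as readOnlyHint and openWorldHint. A sketch of attaching them, assuming an MCP Python SDK version whose tool decorator accepts an annotations argument (mcp here is the FastMCP server instance from the listing above):

    # Sketch: declaring behavioral hints via MCP tool annotations. Assumes a
    # recent MCP Python SDK where @mcp.tool() accepts `annotations`.
    from mcp.types import ToolAnnotations

    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,   # the crawl does not modify external state
            openWorldHint=True,  # it issues live HTTP requests to external sites
        )
    )
    async def tool_crawl_docs(root_url: str, max_pages: int = 5) -> str:
        ...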

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by brief behavioral notes and clear parameter/return sections. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (crawling and combining docs), no annotations, and an output schema (implied by 'Returns'), the description is mostly complete. It covers purpose, behavior, parameters, and returns, but could include more on limitations or prerequisites for a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning for both parameters: root_url as the 'starting URL' and max_pages with its range (1-20) and default (5). This goes beyond the bare schema, though it could elaborate on URL format or crawling depth.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('crawl', 'follows', 'build') and resources ('multi-page documentation', 'same-domain links', 'combined docs'), distinguishing it from siblings like tool_scrape_url (single URL) or tool_extract_links (link extraction without content combination).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for building comprehensive documentation from linked pages on the same domain, but does not explicitly state when not to use it or name alternatives like tool_scrape_url for single pages or tool_search_web for broader searches. The context is clear but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
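
Folding the review's suggestions back into the code, one possible revision of the docstring could read as follows. This is an illustration, not the project's actual text; the behavioral details (same-domain only, skipped URL patterns, 10-second timeout, single retry, 1-20 clamping) come from the implementation shown above, and the sibling tools are the ones named in this review.

    # Illustrative docstring revision incorporating the review's suggestions.
    @mcp.tool()
    async def tool_crawl_docs(root_url: str, max_pages: int = 5) -> str:
        """Crawl multi-page documentation into one combined Markdown document.

        Follows same-domain links only, skips login/signup/download/print/PDF/ZIP
        URLs, and makes live unauthenticated HTTP requests to the target site
        (roughly a 10-second timeout per page, with a single retry).

        Use this for linked documentation sets. For a single page, prefer
        tool_scrape_url; for open-ended discovery, prefer tool_search_web.

        Args:
            root_url: Fully qualified starting URL.
            max_pages: Max pages to crawl; values outside 1-20 are clamped
                (default 5).

        Returns:
            Combined Markdown with a table of contents.
        """
        return await crawl_docs(root_url, max_pages)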
