
tool_extract_links

Extract all links from a web page to analyze navigation structure and discover resources, with options to filter external links for focused analysis.

Instructions

Extract all links from a page.

Useful for discovering navigation structure and resources.

Args:
    url: URL to extract links from.
    filter_external: Only return same-domain links (default True).

Returns: Organized list of internal and external links.
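
For a concrete sense of the two modes, a direct call to the underlying coroutine (shown in full under Implementation Reference below) might look like this minimal sketch; the URL is a placeholder:

    import asyncio

    async def main() -> None:
        # Default: internal (same-domain) links only
        internal_only = await extract_links("https://example.com")
        # Include external links as well
        all_links = await extract_links("https://example.com", filter_external=False)
        print(internal_only)
        print(all_links)

    asyncio.run(main())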

Input Schema

| Name            | Required | Description | Default |
| --------------- | -------- | ----------- | ------- |
| url             | Yes      |             |         |
| filter_external | No       |             |         |
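
The JSON Schema view is not reproduced above; from the table, it presumably looks roughly like this sketch (only the field names and requiredness come from the page; the types and the default are assumed from the docstring):

    # Assumed reconstruction of the input schema, not the page's actual JSON Schema tab
    input_schema = {
        "type": "object",
        "properties": {
            "url": {"type": "string"},
            "filter_external": {"type": "boolean", "default": True},
        },
        "required": ["url"],
    }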

Output Schema

| Name   | Required | Description | Default |
| ------ | -------- | ----------- | ------- |
| result | Yes      |             |         |
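
Likewise, a plausible reconstruction of the output schema (the string type is assumed, since the tool returns a markdown report):

    # Assumed reconstruction of the output schema
    output_schema = {
        "type": "object",
        "properties": {
            "result": {"type": "string"},
        },
        "required": ["result"],
    }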

Implementation Reference

  • The actual implementation of the link extraction logic.
    async def extract_links(url: str, *, filter_external: bool = True) -> str:
        """Extract all links from a page.
    
        Args:
            url: URL to extract links from.
            filter_external: Only return same-domain links.
    
        Returns:
            Markdown list of links organized by type.
    
        Example:
            >>> links = await extract_links("https://example.com")
        """
        try:
            import httpx
            from bs4 import BeautifulSoup
            from urllib.parse import urljoin, urlparse
    
            async with httpx.AsyncClient(timeout=15, follow_redirects=True) as client:
                resp = await client.get(url)
                resp.raise_for_status()
                html = resp.text
    
            soup = BeautifulSoup(html, "html.parser")
            base_domain = urlparse(url).netloc
    
            # Categorize links (urljoin resolves relative hrefs against the page URL)
            internal_links: list[tuple[str, str]] = []  # (url, text)
            external_links: list[tuple[str, str]] = []
    
            for a in soup.find_all("a", href=True):
                href = a["href"]
                text = a.get_text(strip=True) or href
                absolute_url = urljoin(url, href)
                parsed = urlparse(absolute_url)
    
                if parsed.scheme in ("http", "https"):
                    if parsed.netloc == base_domain:
                        internal_links.append((absolute_url, text))
                    else:
                        external_links.append((absolute_url, text))
    
            # Deduplicate and sort before counting, so headers show unique totals
            internal_links = sorted(set(internal_links), key=lambda x: x[1].lower())
            external_links = sorted(set(external_links), key=lambda x: x[1].lower())
    
            # Build report
            report_lines = [
                f"# Links from {url}\n",
                f"## Internal Links ({len(internal_links)})\n",
            ]
    
            for link_url, text in internal_links[:50]:  # Limit to 50
                report_lines.append(f"- [{text}]({link_url})")
    
            if not filter_external and external_links:
                report_lines.append(f"\n## External Links ({len(external_links)})\n")
                for link_url, text in external_links[:30]:  # Limit to 30
                    report_lines.append(f"- [{text}]({link_url})")
    
            return "\n".join(report_lines)
        except Exception as exc:
            # Surface fetch/parse failures as a readable result instead of raising
            return f"Error extracting links from {url}: {exc}"
  • The MCP tool wrapper function that calls extract_links.
    async def tool_extract_links(url: str, filter_external: bool = True) -> str:
        """Extract all links from a page.
    
        Useful for discovering navigation structure and resources.
    
        Args:
            url: URL to extract links from.
            filter_external: Only return same-domain links (default True).
    
        Returns:
            Organized list of internal and external links.
        """
        return await extract_links(url, filter_external=filter_external)
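
How the wrapper is registered as an MCP tool is not shown on the page; a minimal sketch using the official Python SDK's FastMCP (the framework and the server name "devlens" are assumptions) might be:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("devlens")  # server name assumed, not confirmed by the page

    # The docstring becomes the tool description agents see
    @mcp.tool()
    async def tool_extract_links(url: str, filter_external: bool = True) -> str:
        """Extract all links from a page."""
        return await extract_links(url, filter_external=filter_external)

    if __name__ == "__main__":
        mcp.run()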

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool extracts links and organizes them, but lacks details on permissions, rate limits, error handling, or whether it follows redirects. For a web tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with zero waste. It starts with the core purpose, adds a usage tip, details parameters with explanations, and specifies the return value—all in a few sentences that are front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (two parameters, no annotations, though it does have an output schema), the description is fairly complete. It covers purpose, usage, parameters, and return values. However, with no annotations, it could benefit from more behavioral details such as error cases or performance notes, though the output schema reduces the need to explain return formats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains that 'url' is the 'URL to extract links from' and 'filter_external' controls whether to 'Only return same-domain links (default True).' This fully compensates for the schema's lack of descriptions, making parameter purposes clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
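
One way to close a schema's description gap is to put the intent into the schema itself. With the Python MCP SDK, Pydantic `Field` metadata on annotated parameters is picked up when the input schema is generated; this is a sketch, not the server's actual code:

    from typing import Annotated
    from pydantic import Field

    @mcp.tool()  # assumes the FastMCP instance from the earlier sketch
    async def tool_extract_links(
        url: Annotated[str, Field(description="URL to extract links from")],
        filter_external: Annotated[
            bool, Field(description="Only return same-domain links")
        ] = True,
    ) -> str:
        """Extract all links from a page."""
        return await extract_links(url, filter_external=filter_external)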

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Extract all links from a page.' It specifies the verb ('extract') and resource ('links from a page'), making it easy to understand. However, it doesn't explicitly differentiate from siblings like 'tool_scrape_url' or 'tool_crawl_docs', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'Useful for discovering navigation structure and resources.' This implies when to use it but doesn't explicitly state when not to use it or mention alternatives among sibling tools. For example, it doesn't clarify if this is for static extraction vs. dynamic crawling compared to 'tool_crawl_docs'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

