
tool_find_related

Discover web pages with similar content to a given URL by analyzing page content to identify related resources for research and development.

Instructions

Find pages related to a given URL.

Uses the page content to discover similar resources.

Args:
  • url: Base URL to find related content for.
  • limit: Max related pages (1-10, default 5).

Returns: List of related pages with descriptions.

Input Schema

| Name  | Required | Description                          | Default |
|-------|----------|--------------------------------------|---------|
| url   | Yes      | Base URL to find related content for | —       |
| limit | No       | Max related pages (1-10)             | 5       |

Output Schema

| Name   | Required | Description                              | Default |
|--------|----------|------------------------------------------|---------|
| result | Yes      | Formatted report of related pages        | —       |
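The schema tables above can be read as a hypothetical reconstruction of the JSON Schemas the server would publish; the property names, the 1-10 range, and the default of 5 come from the tool description, while the exact schema layout here is an assumption and the server's actual schemas may differ in detail:

```python
# Hypothetical reconstruction of the input/output JSON Schemas implied by
# the tables above (layout assumed; field names and constraints are from
# the tool description).
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "url": {
            "type": "string",
            "description": "Base URL to find related content for.",
        },
        "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 10,
            "default": 5,
            "description": "Max related pages (1-10).",
        },
    },
    "required": ["url"],
}

OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "result": {
            "type": "string",
            "description": "Formatted report of related pages.",
        },
    },
    "required": ["result"],
}
```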

Implementation Reference

  • The core implementation of the logic for finding related pages. It fetches content from the provided URL, extracts a search query, performs a web search, and returns a formatted report of related URLs.
    async def find_related(url: str, limit: int = 5) -> str:
        """Find related pages to a given URL.
    
        Args:
            url: Base URL to find related content for.
            limit: Maximum related pages (1-10).
    
        Returns:
            List of related pages with descriptions.
    
        Example:
            >>> related = await find_related("https://docs.python.org/3/library/asyncio.html")
        """
        limit = min(max(limit, 1), 10)
    
        # Extract topic from URL
        try:
            doc = await _scraper.fetch(url, retry=1)
            # Use title as search query
            search_query = f"{doc.title} related documentation"
        except Exception:
            # Fallback to URL-based query
            parsed = urlparse(url)
            path_parts = parsed.path.strip("/").split("/")
            search_query = " ".join(path_parts[-2:] if len(path_parts) > 1 else path_parts)
    
        # Search for related content
        try:
            results = await _ddg.search(search_query, limit=limit + 5)
        except SearchError:
            return f"# Related Pages\n\nFailed to find related content for: {url}"
    
        # Filter out the original URL
        related = [r for r in results if r.url != url][:limit]
    
        if not related:
            return f"# Related Pages\n\nNo related pages found for: {url}"
    
        # Build report
        report_lines = [
            "# Related Pages\n",
            f"> Based on: {url}\n",
            "## Recommendations\n",
        ]
    
        for i, r in enumerate(related, 1):
            report_lines.append(f"\n### {i}. {r.title}\n")
            report_lines.append(f"**URL**: {r.url}\n")
            report_lines.append(f"{r.snippet}\n")
    
        return "\n".join(report_lines)
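Two pieces of the implementation above are pure functions of their inputs and can be exercised in isolation: the clamp that confines `limit` to the documented 1-10 range, and the URL-path fallback used to build a search query when fetching the page fails. A minimal sketch (the standalone function names here are illustrative, not part of the server's API):

```python
from urllib.parse import urlparse


def clamp_limit(limit: int) -> int:
    # Same clamp as in find_related: confine limit to the 1-10 range
    return min(max(limit, 1), 10)


def fallback_query(url: str) -> str:
    # Mirrors the except branch of find_related: derive a search query
    # from the last one or two path segments of the URL
    parsed = urlparse(url)
    path_parts = parsed.path.strip("/").split("/")
    return " ".join(path_parts[-2:] if len(path_parts) > 1 else path_parts)


print(clamp_limit(0))    # → 1
print(clamp_limit(99))   # → 10
print(fallback_query("https://docs.python.org/3/library/asyncio.html"))
# → library asyncio.html
```

Note that a single-segment path (e.g. `https://example.com/page`) yields just that segment, since the two-segment slice only applies when more than one segment exists.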
  • The MCP-exposed wrapper function for the `tool_find_related` tool, which calls the underlying `find_related` implementation.
    async def tool_find_related(url: str, limit: int = 5) -> str:
        """Find pages related to a given URL.
    
        Uses the page content to discover similar resources.
    
        Args:
            url: Base URL to find related content for.
            limit: Max related pages (1-10, default 5).
    
        Returns:
            List of related pages with descriptions.
        """
        return await find_related(url, limit)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool 'uses the page content to discover similar resources,' which hints at a read-only, non-destructive operation, but lacks details on permissions, rate limits, error handling, or what 'similar' means (e.g., semantic similarity, shared topics). For a tool with no annotations, this is insufficient to fully understand its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a brief method explanation, then structured parameter and return details. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Because the tool declares an output schema (implied by "Returns: List of related pages with descriptions"), the description does not need to detail return values. It covers the purpose, parameters, and basic behavior adequately. However, with no annotations and multiple sibling tools, it would benefit from more context on differentiation and operational limits to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful semantics beyond the input schema, which has 0% description coverage. It explains that 'url' is the 'Base URL to find related content for' and 'limit' is the 'Max related pages (1-10, default 5),' including the range and default value not evident in the schema. This compensates well for the low schema coverage, though it doesn't detail URL format or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find pages related to a given URL' with the method 'Uses the page content to discover similar resources.' This specifies the verb (find), resource (pages), and mechanism (content-based similarity). However, it does not explicitly differentiate from siblings like 'tool_extract_links' or 'tool_search_web,' which might have overlapping functionality, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings such as 'tool_search_web' and 'tool_extract_links,' it's unclear if this tool is for content-based similarity, link extraction, or broader web searches. There are no explicit when/when-not statements or named alternatives, leaving usage context implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
