Glama
by praveenc

fetch_mcp_doc

Fetch full documentation content from MCP protocol or FastMCP framework URLs. Get complete protocol specs, tutorials, and instructions when search snippets are insufficient.

Instructions

Fetch full document content by URL from MCP protocol or FastMCP framework docs.

Retrieves complete documentation content from URLs found via search_mcp_docs or provided directly. Works with both documentation sources:

Supported domains:

  • modelcontextprotocol.io - Official MCP protocol specification

  • gofastmcp.com - FastMCP Python framework documentation

Use this to get full documentation pages when search snippets aren't sufficient, including:

  • Complete protocol specifications and API references

  • Full tutorial and example code

  • Configuration, authentication, and deployment instructions

Args:

  • uri: Document URI (http/https URLs from supported domains)

Returns: Dictionary containing:

  • url: Canonical document URL

  • title: Document title

  • content: Full document text content

  • source: Documentation source ("mcp" or "fastmcp")

  • error: Error message (only present if fetch failed)
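The return contract above can be exercised in isolation. The sketch below is hypothetical: `handle_fetch_result` and both result dicts are placeholder illustrations of the documented shape, not part of the server.

```python
# Hypothetical consumer of fetch_mcp_doc's documented return shape.
# The dicts below are placeholder examples, not real fetch results.

def handle_fetch_result(result: dict) -> str:
    """Return page content, raising if the fetch failed."""
    if "error" in result:
        raise RuntimeError(f"{result['error']}: {result['url']}")
    return result["content"]

ok = {
    "url": "https://gofastmcp.com/getting-started/welcome",
    "title": "Welcome",
    "content": "FastMCP is a Python framework for MCP servers.",
    "source": "fastmcp",
}
failed = {"error": "fetch failed", "url": "https://example.com/x", "source": "unknown"}

print(handle_fetch_result(ok).split()[0])  # FastMCP
```

Note that a failed fetch is signaled by the presence of the `error` key rather than an exception, so callers should check for it before reading `content`.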

Input Schema

Name | Required | Description | Default
-----|----------|-------------|--------
uri  | Yes      |             |

Output Schema

No arguments.

Implementation Reference

  • The main handler function for the fetch_mcp_doc tool. It takes a URI, ensures the cache is ready, calls cache.ensure_page() to fetch and cache the document (with domain validation via url_validator), then returns a dict with url, title, content, and source (or an error if fetch fails).
    def fetch_mcp_doc(uri: str) -> dict[str, Any]:
        """Fetch full document content by URL from MCP protocol or FastMCP framework docs.
    
        Retrieves complete documentation content from URLs found via search_mcp_docs
        or provided directly. Works with both documentation sources:
    
        **Supported domains:**
        - modelcontextprotocol.io - Official MCP protocol specification
        - gofastmcp.com - FastMCP Python framework documentation
    
        Use this to get full documentation pages when search snippets aren't
        sufficient, including:
        - Complete protocol specifications and API references
        - Full tutorial and example code
        - Configuration, authentication, and deployment instructions
    
        Args:
            uri: Document URI (http/https URLs from supported domains)
    
        Returns:
            Dictionary containing:
            - url: Canonical document URL
            - title: Document title
            - content: Full document text content
            - source: Documentation source ("mcp" or "fastmcp")
            - error: Error message (only present if fetch failed)
        """
        cache.ensure_ready()
    
        page = cache.ensure_page(uri)
        if page is None:
            return {"error": "fetch failed", "url": uri, "source": _get_source_from_url(uri)}
    
        return {
            "url": page.url,
            "title": page.title,
            "content": page.content,
            "source": _get_source_from_url(page.url),
        }
  • Registration of fetch_mcp_doc as an MCP tool using the FastMCP framework's mcp.tool() decorator.
    # Register tools
    mcp.tool()(docs.search_mcp_docs)
    mcp.tool()(docs.fetch_mcp_doc)
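The call style `mcp.tool()(docs.fetch_mcp_doc)` is the decorator protocol applied explicitly. A minimal pure-Python sketch of the equivalence (no FastMCP dependency; `ToolRegistry` is a hypothetical stand-in, and FastMCP's real `tool()` does much more, such as schema extraction):

```python
# Stand-in showing why mcp.tool()(func) and @mcp.tool() are the
# same operation. ToolRegistry is hypothetical, not FastMCP's API.

class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def register(func):
            self.tools[func.__name__] = func
            return func
        return register

mcp = ToolRegistry()

def fetch_mcp_doc(uri: str) -> dict:
    return {"url": uri}

# Call style, as used in the registration snippet above:
mcp.tool()(fetch_mcp_doc)

# Equivalent decorator style:
@mcp.tool()
def search_mcp_docs(query: str) -> list:
    return []

print(sorted(mcp.tools))  # ['fetch_mcp_doc', 'search_mcp_docs']
```

The call style is convenient here because the tool functions live in a separate `docs` module and are registered centrally rather than decorated at their definition site.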
  • Helper function used by fetch_mcp_doc to determine the source ('mcp', 'fastmcp', or 'unknown') from a URL domain.
    def _get_source_from_url(url: str) -> str:
        """Extract source identifier from URL domain."""
        for domain, source in _DOMAIN_SOURCE_MAP.items():
            if domain in url:
                return source
        return "unknown"
  • Cache helper called by fetch_mcp_doc. Fetches the page via doc_fetcher.fetch_and_clean (which validates the URL against allowed domains) and caches it. Returns None on any failure.
    def ensure_page(url: str) -> Page | None:
        """Ensure a page is cached, fetching it if necessary.
    
        Args:
            url: The URL of the page to ensure is cached
    
        Returns:
            The cached or newly fetched Page object, or None if fetch failed
        """
        page = _URL_CACHE.get(url)
        if page is not None:
            return page
    
        try:
            raw = doc_fetcher.fetch_and_clean(url)
            display_title = text_processor.format_display_title(url, raw.title, _URL_TITLES)
            page = Page(url=url, title=display_title, content=raw.content)
            _URL_CACHE[url] = page
            return page
        except Exception:
            return None
  • Helper that fetches a URL, validates it against allowed domains (modelcontextprotocol.io, gofastmcp.com), and cleans the HTML content into plain text.
    def fetch_and_clean(page_url: str) -> Page:
        """Fetch a web page and return cleaned content.
    
        Args:
            page_url: URL of the page to fetch
    
        Returns:
            Page object with URL, title, and cleaned content
    
        Raises:
            URLValidationError: If the URL is not allowed
        """
        validated_url = validate_urls(page_url)[0]
    
        raw = _get(validated_url)
        lower = raw.lower()
    
        # Check if it's HTML content
        if "<html" in lower or "<head" in lower or "<body" in lower:
            extracted_title = _extract_html_title(raw)
            content = _html_to_text(raw)
            title = extracted_title or validated_url.rsplit("/", 1)[-1] or validated_url
            return Page(url=validated_url, title=title, content=content)
        else:
            # Plain text (e.g., markdown)
            title = validated_url.rsplit("/", 1)[-1] or validated_url
            return Page(url=validated_url, title=title, content=raw)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It transparently describes the operation (fetching content), supported sources, return structure, and error handling. It does not mention potential network delays or rate limits, but the tool's read-only nature and simple behavior are well-covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections, bullet points, and clear guidance. It is front-loaded with the core purpose. Minor redundancy in the bullet list could be trimmed, but overall it is readable and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no annotations, output described), the description covers purpose, usage context, parameters, and return values adequately. It lacks details like authentication requirements or URL format validation, but these are not critical for a straightforward fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It explicitly documents the 'uri' parameter, specifies supported domains, and implies it expects HTTP/HTTPS URLs. This adds meaningful context beyond the schema's bare type definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches full document content from specific domains (MCP and FastMCP), with a clear verb-resource pairing. It distinguishes itself from sibling 'search_mcp_docs' by indicating it retrieves full pages rather than snippets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('when search snippets aren't sufficient'), lists supported domains, and mentions that URLs can come from search results or be provided directly. It lacks an explicit 'when not to use' statement but strongly implies the alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/praveenc/mcp-server-builder'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.