fetch_mcp_doc
Fetch full documentation content from MCP protocol or FastMCP framework URLs. Get complete protocol specs, tutorials, and instructions when search snippets are insufficient.
Instructions
Fetch full document content by URL from MCP protocol or FastMCP framework docs.
Retrieves complete documentation content from URLs found via search_mcp_docs or provided directly. Works with both documentation sources:
Supported domains:
- modelcontextprotocol.io - Official MCP protocol specification
- gofastmcp.com - FastMCP Python framework documentation
Use this to get full documentation pages when search snippets aren't sufficient, including:
- Complete protocol specifications and API references
- Full tutorial and example code
- Configuration, authentication, and deployment instructions
Args:
- `uri`: Document URI (http/https URL from a supported domain)
Returns: Dictionary containing:
- `url`: Canonical document URL
- `title`: Document title
- `content`: Full document text content
- `source`: Documentation source (`"mcp"` or `"fastmcp"`)
- `error`: Error message (only present if fetch failed)
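Concretely, the two possible return shapes look like this (a sketch only; the URLs, title, and content are illustrative placeholders, not real output):

```python
# Illustrative return shapes for fetch_mcp_doc; field values are placeholders.
success = {
    "url": "https://gofastmcp.com/getting-started/welcome",
    "title": "Welcome to FastMCP",
    "content": "...full page text...",
    "source": "fastmcp",
}
failure = {
    "error": "fetch failed",
    "url": "https://gofastmcp.com/no-such-page",
    "source": "fastmcp",
}
```

Callers can branch on the presence of the `error` key, since it appears only when the fetch failed.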
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| uri | Yes | Document URI (http/https URL from a supported domain) | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No structured output schema | | | |
Implementation Reference
- The main handler function for the fetch_mcp_doc tool. It takes a URI, ensures the cache is ready, calls cache.ensure_page() to fetch and cache the document (with domain validation via url_validator), then returns a dict with url, title, content, and source (or an error if fetch fails).
```python
def fetch_mcp_doc(uri: str) -> dict[str, Any]:
    """Fetch full document content by URL from MCP protocol or FastMCP framework docs.

    Retrieves complete documentation content from URLs found via search_mcp_docs
    or provided directly. Works with both documentation sources:

    **Supported domains:**
    - modelcontextprotocol.io - Official MCP protocol specification
    - gofastmcp.com - FastMCP Python framework documentation

    Use this to get full documentation pages when search snippets aren't
    sufficient, including:
    - Complete protocol specifications and API references
    - Full tutorial and example code
    - Configuration, authentication, and deployment instructions

    Args:
        uri: Document URI (http/https URLs from supported domains)

    Returns:
        Dictionary containing:
        - url: Canonical document URL
        - title: Document title
        - content: Full document text content
        - source: Documentation source ("mcp" or "fastmcp")
        - error: Error message (only present if fetch failed)
    """
    cache.ensure_ready()
    page = cache.ensure_page(uri)
    if page is None:
        return {"error": "fetch failed", "url": uri, "source": _get_source_from_url(uri)}
    return {
        "url": page.url,
        "title": page.title,
        "content": page.content,
        "source": _get_source_from_url(page.url),
    }
```

- src/mcp_server_builder/server.py:14-16 (registration): Registration of fetch_mcp_doc as an MCP tool using the FastMCP framework's mcp.tool() decorator.
```python
# Register tools
mcp.tool()(docs.search_mcp_docs)
mcp.tool()(docs.fetch_mcp_doc)
```

- Helper function used by fetch_mcp_doc to determine the source ("mcp", "fastmcp", or "unknown") from a URL domain.
```python
def _get_source_from_url(url: str) -> str:
    """Extract source identifier from URL domain."""
    for domain, source in _DOMAIN_SOURCE_MAP.items():
        if domain in url:
            return source
    return "unknown"
```

- Cache helper called by fetch_mcp_doc. Fetches the page via doc_fetcher.fetch_and_clean (which validates the URL against allowed domains) and caches it. Returns None on any failure.
```python
def ensure_page(url: str) -> Page | None:
    """Ensure a page is cached, fetching it if necessary.

    Args:
        url: The URL of the page to ensure is cached

    Returns:
        The cached or newly fetched Page object, or None if fetch failed
    """
    page = _URL_CACHE.get(url)
    if page is not None:
        return page
    try:
        raw = doc_fetcher.fetch_and_clean(url)
        display_title = text_processor.format_display_title(url, raw.title, _URL_TITLES)
        page = Page(url=url, title=display_title, content=raw.content)
        _URL_CACHE[url] = page
        return page
    except Exception:
        return None
```

- Helper that fetches a URL, validates it against allowed domains (modelcontextprotocol.io, gofastmcp.com), and cleans the HTML content into plain text.
```python
def fetch_and_clean(page_url: str) -> Page:
    """Fetch a web page and return cleaned content.

    Args:
        page_url: URL of the page to fetch

    Returns:
        Page object with URL, title, and cleaned content

    Raises:
        URLValidationError: If the URL is not allowed
    """
    validated_url = validate_urls(page_url)[0]
    raw = _get(validated_url)
    lower = raw.lower()
    # Check if it's HTML content
    if "<html" in lower or "<head" in lower or "<body" in lower:
        extracted_title = _extract_html_title(raw)
        content = _html_to_text(raw)
        title = extracted_title or validated_url.rsplit("/", 1)[-1] or validated_url
        return Page(url=validated_url, title=title, content=content)
    else:
        # Plain text (e.g., markdown)
        title = validated_url.rsplit("/", 1)[-1] or validated_url
        return Page(url=validated_url, title=title, content=raw)
```