# fetch_page_links
Extract all links from a web page by providing its URL. This tool helps identify and collect hyperlinks for web scraping and content analysis.
## Instructions
Return a list of all links on the page.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL of the web page to extract links from. | — |
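For illustration, a call to this tool might look like the following (the exact tool-call envelope depends on the MCP client; the URL here is a placeholder):

```json
{
  "name": "fetch_page_links",
  "arguments": {
    "url": "https://example.com"
  }
}
```

The result is a JSON array of strings, one entry per `href` found on the page.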
## Implementation Reference
- `url_text_fetcher/mcp_server.py:21-27` (handler) — the main handler function for the `fetch_page_links` tool. It is decorated with `@mcp.tool()` for registration, fetches the page with `requests`, parses it with BeautifulSoup, and returns every `href` attribute from the anchor tags as a list of strings.

  ```python
  @mcp.tool()
  def fetch_page_links(url: str) -> List[str]:
      """Return a list of all links on the page."""
      resp = requests.get(url, timeout=10)
      resp.raise_for_status()
      soup = BeautifulSoup(resp.text, "html.parser")
      return [a['href'] for a in soup.find_all('a', href=True)]
  ```
- `url_text_fetcher/mcp_server.py:21` (registration) — the `@mcp.tool()` decorator registers `fetch_page_links` as an MCP tool.

  ```python
  @mcp.tool()
  ```
- `url_text_fetcher/mcp_server.py:22-23` (schema) — the function signature defines the input schema (`url: str`) and the output schema (`List[str]`); the docstring supplies the tool's description.

  ```python
  def fetch_page_links(url: str) -> List[str]:
      """Return a list of all links on the page."""
  ```
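The handler's core extraction step (collect every `href` from the anchor tags) can be mirrored with only the standard library. The sketch below skips the network fetch and operates on an HTML string; the names `LinkCollector` and `extract_links` are illustrative and not part of the server:

```python
from html.parser import HTMLParser
from typing import List


class LinkCollector(HTMLParser):
    """Collects href values from <a> tags, mirroring soup.find_all('a', href=True)."""

    def __init__(self) -> None:
        super().__init__()
        self.links: List[str] = []

    def handle_starttag(self, tag: str, attrs) -> None:
        if tag == "a":
            for name, value in attrs:
                # Only keep anchors that actually carry an href.
                if name == "href" and value is not None:
                    self.links.append(value)


def extract_links(html: str) -> List[str]:
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


html = '<p><a href="/a">A</a> <a href="https://example.com">B</a> <a>no href</a></p>'
print(extract_links(html))  # → ['/a', 'https://example.com']
```

Anchors without an `href` are skipped, matching the `href=True` filter in the BeautifulSoup version.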