tool_scrape_url
Extract web content from any URL and convert it to structured Markdown with source attribution for efficient integration into development workflows.
Instructions
Scrape content from a URL as Markdown.
Args:
- `url`: URL to scrape.

Returns: Markdown content with source attribution.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scrape. | |
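Since the tool takes a single `url` argument, a client invokes it over MCP with a standard JSON-RPC `tools/call` request. A minimal sketch of that payload (the `id` and URL are illustrative placeholders):

```python
import json

# Hypothetical MCP tools/call request for tool_scrape_url.
# The id and URL below are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tool_scrape_url",
        "arguments": {"url": "https://example.com"},
    },
}

# Serialize for transport; the server routes on params.name and
# passes params.arguments to the registered tool handler.
payload = json.dumps(request)
```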
Implementation Reference
- src/devlens/server.py:54-64 (handler): Tool registration for `tool_scrape_url`, which acts as a wrapper for the `scrape_url` helper function.

  ```python
  @mcp.tool()
  async def tool_scrape_url(url: str) -> str:
      """Scrape content from a URL as Markdown.

      Args:
          url: URL to scrape.

      Returns:
          Markdown content with source attribution.
      """
      return await scrape_url(url)
  ```

- src/devlens/tools/scraper.py:12-29 (handler): Core implementation of the scraping logic that calls the underlying ScraperAdapter.

  ```python
  async def scrape_url(url: str, *, include_metadata: bool = False) -> str:
      """Scrape content from a URL and return as Markdown.

      Args:
          url: The URL to scrape.
          include_metadata: Include page metadata (fetch time, word count, etc.).

      Returns:
          Markdown content with source attribution.

      Example:
          >>> content = await scrape_url("https://example.com")
          >>> content = await scrape_url("https://example.com", include_metadata=True)
      """
      doc = await _adapter.fetch(url)
      if not include_metadata:
          return doc.content
  ```
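The excerpt above delegates to a `ScraperAdapter` whose implementation is not shown. A minimal self-contained sketch of the same shape, with a hypothetical `FetchedDoc` and a fake adapter standing in for the real fetch-and-convert logic, shows how the "source attribution" in the returned Markdown could work:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-ins: the real ScraperAdapter fetches a page and
# converts HTML to Markdown; here fetch() returns canned Markdown so
# the attribution shape can be shown end to end.

@dataclass
class FetchedDoc:
    url: str
    content: str

class FakeAdapter:
    async def fetch(self, url: str) -> FetchedDoc:
        body = "# Example Domain\n\nIllustrative page text."
        # Source attribution: append the origin URL as a footer line.
        return FetchedDoc(url=url, content=f"{body}\n\n*Source: {url}*")

_adapter = FakeAdapter()

async def scrape_url(url: str) -> str:
    doc = await _adapter.fetch(url)
    return doc.content

markdown = asyncio.run(scrape_url("https://example.com"))
```

The sketch keeps the adapter behind a single `fetch` call, so swapping in a real HTTP-plus-HTML-to-Markdown backend would not change `scrape_url` itself.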
async def scrape_url(url: str, *, include_metadata: bool = False) -> str: """Scrape content from a URL and return as Markdown. Args: url: The URL to scrape. include_metadata: Include page metadata (fetch time, word count, etc.). Returns: Markdown content with source attribution. Example: >>> content = await scrape_url("https://example.com") >>> content = await scrape_url("https://example.com", include_metadata=True) """ doc = await _adapter.fetch(url) if not include_metadata: return doc.content