# fetch_page
Fetch dynamic web page content after JavaScript rendering and convert it to Markdown; large pages are streamed in chunks.
## Instructions
Fetch or crawl a dynamic web page and convert it to Markdown (JavaScript rendering is supported).
Use this tool to:

- Crawl or scrape content from modern web pages (React, Vue, etc.)
- Get full page content after JavaScript rendering
- Download large page content via chunked streaming
Protocol:

1. Start: provide `url` (required) → returns `transfer_id` + the first chunk
2. Continue: provide `transfer_id` + `offset` → returns the next chunk
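A minimal client sketch of this two-phase loop. The `call_tool(name, args)` helper is hypothetical (stand in whatever invokes the tool in your client), and the sketch assumes text chunks arrive in `chunk_text`:

```python
def fetch_full_page(call_tool, url: str) -> str:
    """Download a complete page by following the two-phase chunked protocol."""
    # Phase 1: start the transfer; the response carries the first chunk.
    result = call_tool("fetch_page", {"url": url})
    parts = [result["chunk_text"]]

    # Phase 2: resume with transfer_id + offset until the final chunk.
    while not result["done"]:
        result = call_tool("fetch_page", {
            "transfer_id": result["transfer_id"],
            "offset": result["next_offset"],
        })
        parts.append(result["chunk_text"])

    return "".join(parts)
```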
Args:

- `url`: target http(s) URL (required for phase 1)
- `to_markdown`: convert HTML to Markdown (default: True)
- `wait_selector`: CSS selector to wait for before capturing content
- Optional: `headers`, `query`, `timeout_ms`, `max_scrolls`, `min_delay_ms`/`max_delay_ms`, `proxy`/`proxy_pool`, `user_agent`, `chunk_bytes`
- Cursor: `transfer_id`, `offset` (for phase 2)
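For illustration, a phase-1 argument set for a client-rendered page; the URL, selector, and sizes are made-up examples, not recommended values:

```python
# Hypothetical phase-1 arguments; field names match the input schema below.
args = {
    "url": "https://example.com/app",  # required to start a transfer
    "to_markdown": True,               # default: convert rendered HTML to Markdown
    "wait_selector": "#content",       # capture only after this element appears
    "timeout_ms": 30_000,
    "chunk_bytes": 64 * 1024,          # size of each streamed chunk
}
```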
Returns:

- Chunk: `chunk_text` or `chunk_base64`, `next_offset`, `done`, `truncated`
- Meta: `transfer_id`, `status`, `headers`, `final_url`, `content_type`, `elapsed_ms`
- Size: `available_bytes`, `total_bytes`
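The field names below come from the Returns list; the exact response shape is an assumption. A sketch that normalizes a chunk to bytes, whichever field carried it, and flags truncation:

```python
import base64

def decode_chunk(result: dict) -> bytes:
    """Return the chunk payload as bytes from chunk_text or chunk_base64."""
    # Text content arrives in chunk_text; binary falls back to base64.
    if result.get("chunk_text") is not None:
        return result["chunk_text"].encode("utf-8")
    return base64.b64decode(result["chunk_base64"])

def is_last_chunk(result: dict) -> bool:
    # done marks the final chunk; truncated signals the content was cut short.
    if result.get("truncated"):
        print("warning: transfer reports truncated content")
    return result["done"]
```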
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Target http(s) URL; required to start a transfer (phase 1) | |
| headers | No | Extra HTTP request headers | |
| query | No | URL query parameters | |
| timeout_ms | No | Request timeout in milliseconds | |
| to_markdown | No | Convert rendered HTML to Markdown | True |
| wait_selector | No | CSS selector to wait for before capturing content | |
| max_scrolls | No | Maximum number of page scrolls before capture | |
| min_delay_ms | No | Minimum delay in milliseconds | |
| max_delay_ms | No | Maximum delay in milliseconds | |
| proxy | No | Proxy to route the request through | |
| proxy_pool | No | Pool of proxies to select from | |
| user_agent | No | User-Agent header override | |
| chunk_bytes | No | Size of each streamed chunk in bytes | |
| transfer_id | No | Transfer cursor; required to continue a transfer (phase 2) | |
| offset | No | Byte offset of the next chunk (phase 2) | |