# webpage_scrape

Extracts the content of a webpage from a URL, returning the page text and, optionally, a markdown rendering for analysis or processing.
## Instructions

Scrape a webpage by URL.

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The url to scrape | |
| includeMarkdown | No | Include markdown in the response (boolean value as string: 'true' or 'false') | false |
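
A request with the markdown rendering enabled might look like this (the URL is a placeholder; note that `includeMarkdown` is a string, not a JSON boolean):

```json
{
  "url": "https://example.com",
  "includeMarkdown": "true"
}
```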
## Implementation Reference
- `src/serper_mcp_server/core.py:20-22` (handler) — Core handler function for the `webpage_scrape` tool. Posts the request to the `scrape.serper.dev` API endpoint.

  ```python
  async def scape(request: WebpageRequest) -> Dict[str, Any]:
      url = "https://scrape.serper.dev"
      return await fetch_json(url, request)
  ```
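  For context, the POST that `fetch_json` presumably issues can be sketched with the standard library alone. This is an assumption based on Serper's public API shape (JSON body, `X-API-KEY` header), not code from this repo:

  ```python
  import json
  import urllib.request

  def build_scrape_request(url: str, api_key: str, include_markdown: str = "false"):
      # Assumed request shape: JSON body with the tool's two fields,
      # authenticated via an X-API-KEY header (per Serper's public docs).
      payload = json.dumps({"url": url, "includeMarkdown": include_markdown}).encode()
      return urllib.request.Request(
          "https://scrape.serper.dev",
          data=payload,
          headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
          method="POST",
      )

  # Build (but do not send) a request for inspection.
  req = build_scrape_request("https://example.com", "YOUR_API_KEY")
  ```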
- `src/serper_mcp_server/server.py:68-71` (handler) — Dispatch branch in `call_tool` for the `webpage_scrape` tool; validates the input and invokes `scape`.

  ```python
  if name == SerperTools.WEBPAGE_SCRAPE.value:
      request = WebpageRequest(**arguments)
      result = await scape(request)
      return [TextContent(text=json.dumps(result, indent=2), type="text")]
  ```
- Pydantic schema for input validation of the `webpage_scrape` tool.

  ```python
  class WebpageRequest(BaseModel):
      url: str = Field(..., description="The url to scrape")
      includeMarkdown: Optional[str] = Field(
          "false",
          pattern=r"^(true|false)$",
          description="Include markdown in the response (boolean value as string: 'true' or 'false')",
      )
  ```
- `src/serper_mcp_server/server.py:54-58` (registration) — Registration of the `webpage_scrape` tool in the MCP `list_tools` handler.

  ```python
  tools.append(Tool(
      name=SerperTools.WEBPAGE_SCRAPE,
      description="Scrape webpage by url",
      inputSchema=WebpageRequest.model_json_schema(),
  ))
  ```
- `src/serper_mcp_server/enums.py:17-17` (helper) — Enum definition providing the tool name constant.

  ```python
  WEBPAGE_SCRAPE = "webpage_scrape"
  ```
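
  The surrounding enum class is not shown in the source. A minimal definition consistent with the `.value` access in the dispatch code above would be a `str`-subclassed `Enum` (the `str` subclassing is an assumption):

  ```python
  from enum import Enum

  class SerperTools(str, Enum):
      """Tool-name constants for the Serper MCP server (sketch)."""
      WEBPAGE_SCRAPE = "webpage_scrape"

  # .value is the wire name used when registering and dispatching the tool
  print(SerperTools.WEBPAGE_SCRAPE.value)
  ```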