Scrapling Fetch MCP
This MCP server enables AI assistants to retrieve text content from bot-protected websites and extract specific information using regex patterns.
Core Capabilities:
Web Page Fetching: Retrieve complete web pages with pagination support, optimized for text-based documentation and reference materials
Pattern Extraction: Search and extract specific content using regular expressions with configurable context around matches
Bot Detection Bypass: Three protection modes (basic, stealth, max-stealth) that automatically escalate when sites block access
Flexible Output: Content delivered in HTML or Markdown format with configurable length limits and continuation from specific positions
Intelligent Integration: Claude automatically selects appropriate tools based on natural language requests without requiring technical commands
Primarily designed for low-volume retrieval of documentation, articles, and reference materials from websites that implement bot detection.
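The pagination behavior described above (length limits plus continuation from a position) can be sketched as follows. This is a conceptual illustration only; the parameter names max_length and start_index are assumptions, not necessarily the server's actual tool API:

```python
# Sketch of length-limited retrieval with continuation.
# max_length / start_index are illustrative parameter names.
def fetch_chunk(content: str, max_length: int = 100, start_index: int = 0) -> dict:
    """Return one slice of a long document plus continuation metadata."""
    chunk = content[start_index:start_index + max_length]
    next_index = start_index + len(chunk)
    return {
        "content": chunk,
        # None signals there is nothing left to fetch.
        "next_start_index": next_index if next_index < len(content) else None,
        "total_length": len(content),
    }

doc = "x" * 250
first = fetch_chunk(doc, max_length=100)
second = fetch_chunk(doc, max_length=100, start_index=first["next_start_index"])
```

A client keeps calling with the returned `next_start_index` until it comes back as `None`.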
Enables installation of the MCP server through PyPI's package repository, with version tracking and dependency management.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Scrapling Fetch MCP fetch the API documentation from https://docs.example.com/api/v2"
That's it! The server will respond to your query, and you can continue using it as needed.
scrapling-fetch-mcp
Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.
Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.
Requirements: Python 3.10+, uv
Claude Code Skill
The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.
Install into your project (recommended — only loads in this project's context):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
Or install for all projects (loads into context everywhere):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
Then ask Claude to run /s-fetch-setup — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:
"Fetch the docs at https://example.com/api"
"Find all mentions of 'authentication' on that page"
"Get me the installation instructions from their homepage"
Claude Desktop (MCP Server)
If you've already run /s-fetch-setup, the tool is installed — skip to the config below.
Otherwise install first:
uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install
Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.
Add this to your Claude Desktop MCP settings and restart:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"scrapling-fetch": {
"command": "uvx",
"args": ["scrapling-fetch-mcp"]
}
}
}
How It Works
Two tools, used automatically by Claude:
Page fetching — retrieves complete pages with pagination support
Pattern extraction — finds content matching a regex
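Conceptually, pattern extraction returns each regex match together with a configurable window of surrounding text. A minimal sketch, assuming a context_chars-style parameter (the name is illustrative):

```python
import re

def extract_with_context(text: str, pattern: str, context_chars: int = 20) -> list[str]:
    """Find regex matches and return each with surrounding context."""
    snippets = []
    for m in re.finditer(pattern, text):
        # Clamp the context window to the bounds of the document.
        start = max(0, m.start() - context_chars)
        end = min(len(text), m.end() + context_chars)
        snippets.append(text[start:end])
    return snippets

page = "Use the API key for authentication. Rotate the API key monthly."
snippets = extract_with_context(page, r"API key", context_chars=10)
```

Each snippet contains the match plus up to 10 characters on either side, which is usually enough for the model to judge relevance without fetching the whole page again.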
Three protection levels, escalated automatically:
basic — fast (1-2s), works for most sites
stealth — moderate (3-8s), headless Chromium
max-stealth — thorough (10s+), full browser fingerprint
Limitations
Text content only (documentation, articles, references)
Not for high-volume scraping or sites requiring authentication
Performance varies by site complexity and protection level
License
Apache 2.0