# ScraperAPI MCP server

The ScraperAPI MCP server enables LLM clients to retrieve and process web scraping requests using ScraperAPI services.
## Features

- Full implementation of the Model Context Protocol specification
- Seamless integration with ScraperAPI for web scraping
- Simple setup with Python or Docker
## Architecture

```
┌───────────────┐      ┌───────────────────────┐      ┌───────────────┐
│  LLM Client   │────▶│  Scraper MCP Server   │────▶│   AI Model    │
└───────────────┘      └───────────────────────┘      └───────────────┘
                                   │
                                   ▼
                        ┌──────────────────┐
                        │  ScraperAPI API  │
                        └──────────────────┘
```

## Installation

The ScraperAPI MCP Server is designed to run as a local server on your machine; your LLM client will launch it automatically when configured.
### Prerequisites

- Python 3.11+
- Docker (optional)

### Using Python

Install the package:

```shell
pip install scraperapi-mcp-server
```

Add this to your client configuration file:
```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "python",
      "args": ["-m", "scraperapi_mcp_server"],
      "env": {
        "API_KEY": "<YOUR_SCRAPERAPI_API_KEY>"
      }
    }
  }
}
```

### Using Docker
Add this to your client configuration file:
```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "API_KEY=${API_KEY}",
        "--rm",
        "scraperapi-mcp-server"
      ]
    }
  }
}
```

If your command is not working (for example, you see a package-not-found error when trying to start the server), double-check the path you are using. To find the correct path, activate your virtual environment first, then run:

```shell
which <YOUR_COMMAND>
```

## API Reference
### Available Tools

#### scrape

Scrape a URL from the internet using ScraperAPI.

Parameters:

- `url` (string, required): URL to scrape
- `render` (boolean, optional): Whether to render the page using JavaScript. Defaults to `False`. Set to `True` only if the page requires JavaScript rendering to display its content.
- `country_code` (string, optional): Activate country geotargeting (ISO 2-letter code)
- `premium` (boolean, optional): Activate premium residential and mobile IPs
- `ultra_premium` (boolean, optional): Activate advanced bypass mechanisms. Cannot be combined with `premium`
- `device_type` (string, optional): Set the request to use `mobile` or `desktop` user agents
- `output_format` (string, optional): Instructs the API on what the response file type should be
- `autoparse` (boolean, optional): Activate auto parsing for select websites. Defaults to `False`. Set to `True` only if you want the output in `csv` or `json` format.

Returns: The scraped content as a string
### Prompt templates

> Please scrape this URL <URL>. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country_code to overcome geo-restrictions. If errors continue, upgrade the request to use premium proxies by adding premium=true. For persistent failures, activate ultra_premium=true to use enhanced anti-blocking measures.

> Can you scrape URL <URL> to extract <SPECIFIC_DATA>? If the request returns missing/incomplete <SPECIFIC_DATA>, set render=true to enable JS rendering.
## Configuration

### Settings

- `API_KEY`: Your ScraperAPI API key.
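The client configuration passes `API_KEY` to the server process through the `env` block shown above. A minimal sketch of how a server might read and validate it at startup (illustrative only; the actual startup code of this server may differ):

```python
import os

def load_api_key() -> str:
    """Read the ScraperAPI key from the environment, as set via the `env`
    section of the MCP client configuration, and fail fast with a clear
    message if it is missing."""
    key = os.environ.get("API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "API_KEY is not set. Add it to the `env` section of your "
            "MCP client configuration."
        )
    return key
```

Failing fast here surfaces a missing key as an immediate, readable error in the client's server logs instead of a failed scrape later.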
### Configure Claude Desktop App & Claude Code

Claude Desktop:

1. Open Claude Desktop and click the settings icon
2. Select the "Developer" tab
3. Click "Edit Config" and paste the JSON configuration file

Claude Code:

Add the server manually to your `.claude/settings.json` with the JSON configuration file, or run:

```shell
claude mcp add scraperapi -e API_KEY=<YOUR_SCRAPERAPI_API_KEY> -- python -m scraperapi_mcp_server
```
### Configure Cursor Editor

1. Open Cursor
2. Access the Settings Menu
3. Open Cursor Settings
4. Go to the Tools & Integrations section
5. Click '+ Add MCP Server'
6. Choose Manual and paste the JSON configuration file
### Configure Windsurf Editor

1. Open Windsurf
2. Access the Settings Menu
3. Click on the Cascade settings
4. Click on the MCP server section
5. Click on the gear icon; the `mcp_config.json` file will open
### Configure Cline (VS Code extension)

1. Open VS Code and click the Cline icon in the activity bar to open the Cline panel
2. Click the MCP Servers icon in the top navigation bar of the Cline pane
3. Select the "Configure" tab
4. Click "Configure MCP Servers" at the bottom of the pane; this opens `cline_mcp_settings.json`
## Development

### Local setup

Clone the repository:

```shell
git clone https://github.com/scraperapi/scraperapi-mcp
cd scraperapi-mcp
```

Install dependencies:

Using Poetry:

```shell
poetry install
```

Using pip:

```shell
# Create virtual environment and activate it
python -m venv .venv
source .venv/bin/activate   # MacOS/Linux
# OR
.venv/Scripts/activate      # Windows
# Install the local package in editable mode
pip install -e .
```

Using Docker:

```shell
# Build the Docker image locally
docker build -t scraperapi-mcp-server .
```
### Run the server

Using Python:

```shell
python -m scraperapi_mcp_server
```

Using Docker:

```shell
# Run the Docker container with your API key
docker run -e API_KEY=<YOUR_SCRAPERAPI_API_KEY> scraperapi-mcp-server
```

### Debug

```shell
python3 -m scraperapi_mcp_server --debug
```

## Testing
This project uses pytest for testing.
### Install Test Dependencies

Using Poetry:

```shell
poetry install --with dev
```

Using pip:

```shell
pip install -e .
pip install pytest pytest-mock pytest-asyncio
```

### Running Tests

```shell
# Run all tests
pytest

# Run a specific test
pytest <TEST_FILE_PATH>
```
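A new test might look like the following sketch. The `scrape` coroutine below is a hypothetical stand-in; in a real test you would import the actual implementation from the package and inject a mocked HTTP fetcher so the test never hits the network:

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical stand-in for the server's scrape tool, which delegates the
# actual HTTP work to an injectable fetcher so tests can mock it.
async def scrape(url: str, fetch) -> str:
    return await fetch(url)

def test_scrape_uses_fetcher():
    # Mock the fetcher so no real request is made
    fetch = AsyncMock(return_value="<html>stub</html>")
    result = asyncio.run(scrape("https://example.com", fetch))
    assert result == "<html>stub</html>"
    fetch.assert_awaited_once_with("https://example.com")
```

pytest discovers `test_`-prefixed functions automatically; for coroutine-style tests, `pytest-asyncio` (installed above) lets you mark them with `@pytest.mark.asyncio` instead of calling `asyncio.run` yourself.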