AnyCrawl MCP Server
🚀 AnyCrawl MCP Server — Powerful web scraping and crawling for Cursor, Claude, and other LLM clients via the Model Context Protocol (MCP).
Features
Web Scraping: Extract content from single URLs with multiple output formats
Website Crawling: Crawl entire websites with configurable depth and limits
Search Engine Integration: Search the web and optionally scrape results
Multiple Engines: Support for Playwright, Cheerio, and Puppeteer
Flexible Output: Markdown, HTML, text, screenshots, and structured JSON
Async Operations: Non-blocking crawl jobs with status monitoring
Error Handling: Robust error handling and logging
Multiple Modes: Support for STDIO and Cloud modes with integrated Nginx proxy
Installation
Running with npx
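A minimal sketch, assuming the server is published as anycrawl-mcp-server (verify the exact package name on npm):

```bash
# Package name is an assumption; check the npm registry
export ANYCRAWL_API_KEY="your-api-key"
npx -y anycrawl-mcp-server
```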
Manual installation
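A typical manual setup, assuming a standard Node.js layout; the repository URL and entry point are placeholders:

```bash
# Repository URL and entry point are assumptions
git clone https://github.com/any4ai/anycrawl-mcp-server.git
cd anycrawl-mcp-server
npm install
npm run build
ANYCRAWL_API_KEY="your-api-key" node dist/index.js
```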
Configuration
Set the required environment variable:
Optionally set a custom base URL:
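For example:

```bash
# Required: your AnyCrawl API key
export ANYCRAWL_API_KEY="your-api-key"

# Optional: override the API endpoint (defaults to https://api.anycrawl.dev)
export ANYCRAWL_BASE_URL="https://api.anycrawl.dev"
```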
Get your API key
Visit the AnyCrawl website (https://anycrawl.dev) and sign up or log in.
🎉 Sign up for free to receive 1,500 credits — enough to crawl nearly 1,500 pages.
Open the dashboard → API Keys → Copy your key.
Copy the key and set it as the ANYCRAWL_API_KEY environment variable (see above).
Usage
Available Modes
AnyCrawl MCP Server supports the following deployment modes:
| Mode | Description | Best For | Transport |
| --- | --- | --- | --- |
| MCP | Streamable HTTP (JSON, stateful) | Cursor (streamable_http), API integration | HTTP + JSON |
| SSE | Server-Sent Events | Web apps, browser integrations | HTTP + SSE |
| MCP_AND_SSE | Start MCP and SSE in one container | Cloud/service deploy with Nginx frontend | HTTP + JSON/SSE |
Quick Start Commands
Docker Compose (MCP + SSE with Nginx)
This repo ships a production-ready image that runs MCP (JSON) on port 3000 and SSE on port 3001 in the same container, fronted by Nginx. Nginx also supports API-key-prefixed paths /{API_KEY}/mcp and /{API_KEY}/sse and forwards the key via the x-anycrawl-api-key header.
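Assuming the compose file shipped with the repository, startup is the usual:

```bash
# Builds and starts the combined MCP (3000) + SSE (3001) container behind Nginx
docker compose up -d --build
```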
Environment variables used in the Docker image:
ANYCRAWL_MODE: MCP_AND_SSE (default in compose), or MCP, SSE
ANYCRAWL_MCP_PORT: default 3000
ANYCRAWL_SSE_PORT: default 3001
CLOUD_SERVICE: true to extract the API key from /{API_KEY}/... or headers
ANYCRAWL_BASE_URL: default https://api.anycrawl.dev
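A sketch of an .env file for the image, using the variables listed above (the file layout itself is an assumption):

```bash
ANYCRAWL_MODE=MCP_AND_SSE
ANYCRAWL_MCP_PORT=3000
ANYCRAWL_SSE_PORT=3001
CLOUD_SERVICE=true
ANYCRAWL_BASE_URL=https://api.anycrawl.dev
```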
Running on Cursor
Note: Configuring Cursor requires Cursor v0.45.6 or later.
For Cursor v0.48.6 and newer, add this to your MCP Servers settings:
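A minimal sketch (the npx package name is an assumption):

```json
{
  "mcpServers": {
    "anycrawl-mcp": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp-server"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```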
For Cursor v0.45.6:
Open Cursor Settings → Features → MCP Servers → "+ Add New MCP Server"
Name: "anycrawl-mcp" (or your preferred name)
Type: "command"
Command:
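For example (package name assumed):

```bash
env ANYCRAWL_API_KEY=your-api-key npx -y anycrawl-mcp-server
```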
On Windows, if you encounter issues:
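A commonly used workaround is to wrap the command in cmd (package name assumed):

```bash
cmd /c "set ANYCRAWL_API_KEY=your-api-key && npx -y anycrawl-mcp-server"
```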
Running on VS Code
For manual installation, add this JSON to your User Settings (JSON) in VS Code (Command Palette → Preferences: Open User Settings (JSON)):
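The exact shape depends on your VS Code version; a sketch assuming the mcp.servers settings key and the assumed package name:

```json
{
  "mcp": {
    "servers": {
      "anycrawl": {
        "command": "npx",
        "args": ["-y", "anycrawl-mcp-server"],
        "env": {
          "ANYCRAWL_API_KEY": "YOUR-API-KEY"
        }
      }
    }
  }
}
```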
Optionally, place the following in .vscode/mcp.json in your workspace to share config:
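A workspace-level sketch under the same assumptions:

```json
{
  "servers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp-server"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```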
Running on Windsurf
Add this to ./codeium/windsurf/model_config.json:
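A sketch using the common mcpServers shape (package name assumed):

```json
{
  "mcpServers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp-server"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```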
Running with SSE Server Mode
The SSE (Server-Sent Events) mode provides a web-based interface for MCP communication, ideal for web applications, testing, and integration with web-based LLM clients.
Quick Start
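One way to start the server in SSE mode, assuming that mode selection via ANYCRAWL_MODE works outside Docker as well (package name assumed):

```bash
ANYCRAWL_API_KEY="your-api-key" ANYCRAWL_MODE=SSE npx -y anycrawl-mcp-server
```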
Server Configuration
Optional server settings (defaults shown):
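The variable names below are taken from the Docker image section and are assumed to apply to local runs too:

```bash
ANYCRAWL_SSE_PORT=3001
ANYCRAWL_BASE_URL=https://api.anycrawl.dev
```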
Available Endpoints
GET /health - Health check endpoint
GET /sse - SSE connection endpoint for MCP clients
POST /messages - Message handling endpoint for SSE clients
Health Check
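For a local SSE server on the default port:

```bash
curl http://localhost:3001/health
```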
MCP Client Configuration
The SSE server provides a web-based MCP interface that can be used with various MCP clients.
Available Endpoints:
GET /sse - SSE connection endpoint for MCP clients
POST /messages - Message handling endpoint for SSE clients
GET /health - Health check endpoint
Cursor Configuration for SSE Mode
For Cursor v0.48.6 and newer, you can configure Cursor to connect to the SSE server:
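A minimal sketch pointing Cursor at a local SSE server:

```json
{
  "mcpServers": {
    "anycrawl-sse": {
      "url": "http://localhost:3001/sse"
    }
  }
}
```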
Note: The API key is set when starting the server, not in the Cursor configuration.
Generic MCP Client Configuration
For other MCP clients that support SSE transport, use this configuration:
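Exact keys vary by client; a sketch of the common shape:

```json
{
  "mcpServers": {
    "anycrawl": {
      "transport": "sse",
      "url": "http://localhost:3001/sse"
    }
  }
}
```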
Environment Setup:
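For example:

```bash
# Set on the server process before starting SSE mode; the client does not need the key
export ANYCRAWL_API_KEY="your-api-key"
```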
Key Features
Web-based MCP interface for easy integration
Real-time communication with Server-Sent Events
CORS-enabled for cross-origin requests
Health monitoring with built-in endpoints
Session management with automatic ID handling
Running with HTTP Streamable Server (stateful)
Run the HTTP server that maintains MCP sessions via the Mcp-Session-Id header.
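One way to start it locally (mode selection via ANYCRAWL_MODE mirrors the Docker image and is an assumption for local runs; package name assumed):

```bash
ANYCRAWL_API_KEY="your-api-key" ANYCRAWL_MODE=MCP npx -y anycrawl-mcp-server
```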
Optional server settings (defaults shown):
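Variable names below are taken from the Docker image section and are assumed to apply here as well:

```bash
ANYCRAWL_MCP_PORT=3000
ANYCRAWL_BASE_URL=https://api.anycrawl.dev
```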
Health check:
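Assuming the same /health path as the SSE server:

```bash
curl http://localhost:3000/health
```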
Initialize MCP session (expects Mcp-Session-Id in response headers):
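A sketch of the initialize handshake over the streamable HTTP transport (the request body follows the standard MCP JSON-RPC shape):

```bash
curl -i -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'
# The session id is returned in the Mcp-Session-Id response header
```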
Open SSE stream using the returned session id:
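For example:

```bash
curl -N http://localhost:3000/mcp \
  -H "Accept: text/event-stream" \
  -H "Mcp-Session-Id: <session-id-from-initialize>"
```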
Cursor configuration for HTTP modes (streamable_http)
Configure Cursor to connect to your HTTP MCP server.
Local HTTP Streamable Server:
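A sketch; the type value mirrors the transport name used in this README, so adjust it to whatever your Cursor version expects:

```json
{
  "mcpServers": {
    "anycrawl-http": {
      "type": "streamable_http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```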
Cloud/Remote deployment:
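Using the API-key-prefixed path exposed by the Nginx frontend (hostname is a placeholder):

```json
{
  "mcpServers": {
    "anycrawl-remote": {
      "type": "streamable_http",
      "url": "https://your-deployment.example.com/YOUR-API-KEY/mcp"
    }
  }
}
```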
Note: For HTTP modes, set ANYCRAWL_API_KEY (and optional host/port) in the server process environment. Cursor does not need your API key when using streamable_http.
Available Tools
1. Scrape Tool (anycrawl_scrape)
Scrape a single URL and extract content in various formats.
Best for:
Extracting content from a single page
Quick data extraction
Testing specific URLs
Parameters:
url (required): The URL to scrape
engine (required): Scraping engine (playwright, cheerio, puppeteer)
formats (optional): Output formats (markdown, html, text, screenshot, screenshot@fullPage, rawHtml, json)
proxy (optional): Proxy URL
timeout (optional): Timeout in milliseconds (default: 300000)
retry (optional): Whether to retry on failure (default: false)
wait_for (optional): Wait time for page to load
include_tags (optional): HTML tags to include
exclude_tags (optional): HTML tags to exclude
json_options (optional): Options for JSON extraction
Example:
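A sketch of the arguments an MCP client might pass to anycrawl_scrape (values are placeholders):

```json
{
  "url": "https://example.com",
  "engine": "cheerio",
  "formats": ["markdown", "html"],
  "timeout": 60000
}
```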
2. Crawl Tool (anycrawl_crawl)
Start a crawl job to scrape multiple pages from a website. By default this waits for completion and returns aggregated results using the SDK's client.crawl (defaults: poll every 3 seconds, timeout after 60 seconds).
Best for:
Extracting content from multiple related pages
Comprehensive website analysis
Bulk data collection
Parameters:
url (required): The base URL to crawl
engine (required): Scraping engine
max_depth (optional): Maximum crawl depth (default: 10)
limit (optional): Maximum number of pages (default: 100)
strategy (optional): Crawling strategy (all, same-domain, same-hostname, same-origin)
exclude_paths (optional): URL patterns to exclude
include_paths (optional): URL patterns to include
scrape_options (optional): Options for individual page scraping
poll_seconds (optional): Poll interval in seconds for waiting (default: 3)
timeout_ms (optional): Overall timeout in milliseconds for waiting (default: 60000)
Example:
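A sketch of possible anycrawl_crawl arguments; the inner shape of scrape_options is assumed to mirror the scrape tool's parameters:

```json
{
  "url": "https://example.com",
  "engine": "playwright",
  "max_depth": 2,
  "limit": 20,
  "strategy": "same-domain",
  "scrape_options": {
    "formats": ["markdown"]
  },
  "poll_seconds": 3,
  "timeout_ms": 120000
}
```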
Returns: { "job_id": "...", "status": "completed", "total": N, "completed": N, "creditsUsed": N, "data": [...] }
.
3. Crawl Status Tool (anycrawl_crawl_status)
Check the status of a crawl job.
Parameters:
job_id (required): The crawl job ID
Example:
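A sketch with a placeholder job id:

```json
{
  "job_id": "7a1f3d2e-0c4b-4f5a-9e8d-123456789abc"
}
```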
4. Crawl Results Tool (anycrawl_crawl_results)
Get results from a crawl job.
Parameters:
job_id (required): The crawl job ID
skip (optional): Number of results to skip (for pagination)
Example:
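A sketch with a placeholder job id:

```json
{
  "job_id": "7a1f3d2e-0c4b-4f5a-9e8d-123456789abc",
  "skip": 0
}
```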
5. Cancel Crawl Tool (anycrawl_cancel_crawl)
Cancel a pending crawl job.
Parameters:
job_id (required): The crawl job ID to cancel
Example:
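A sketch with a placeholder job id:

```json
{
  "job_id": "7a1f3d2e-0c4b-4f5a-9e8d-123456789abc"
}
```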
6. Search Tool (anycrawl_search)
Search the web using AnyCrawl search engine.
Best for:
Finding specific information across multiple websites
Research and discovery
When you don't know which website has the information
Parameters:
query (required): Search query
engine (optional): Search engine (google)
limit (optional): Maximum number of results (default: 10)
offset (optional): Number of results to skip (default: 0)
pages (optional): Number of pages to search
lang (optional): Language code
country (optional): Country code
scrape_options (required): Options for scraping search results
safeSearch (optional): Safe search level (0=off, 1=moderate, 2=strict)
Example:
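A sketch of possible anycrawl_search arguments; as above, the inner shape of scrape_options is assumed to mirror the scrape tool's parameters:

```json
{
  "query": "web scraping best practices",
  "engine": "google",
  "limit": 5,
  "lang": "en",
  "safeSearch": 1,
  "scrape_options": {
    "engine": "cheerio",
    "formats": ["markdown"]
  }
}
```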
Output Formats
Markdown
Clean, structured markdown content perfect for LLM consumption.
HTML
Raw HTML content with all formatting preserved.
Text
Plain text content with minimal formatting.
Screenshot
Visual screenshot of the page.
Screenshot@fullPage
Full-page screenshot including content below the fold.
Raw HTML
Unprocessed HTML content.
JSON
Structured data extraction using custom schemas.
Engines
Cheerio
Fast and lightweight
Good for static content
Server-side rendering
Playwright
Full browser automation
JavaScript rendering
Best for dynamic content
Puppeteer
Chrome/Chromium automation
Good balance of features and performance
Error Handling
The server provides comprehensive error handling:
Validation Errors: Invalid parameters or missing required fields
API Errors: AnyCrawl API errors with detailed messages
Network Errors: Connection and timeout issues
Rate Limiting: Automatic retry with backoff
Logging
The server includes detailed logging:
Debug: Detailed operation information
Info: General operation status
Warn: Non-critical issues
Error: Critical errors and failures
Set log level with environment variable:
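For example (the variable name LOG_LEVEL is an assumption; check the server's documentation):

```bash
export LOG_LEVEL=debug
```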
Development
Prerequisites
Node.js 18+
npm
Setup
Build
Test
Lint
Format
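Assuming conventional npm scripts for the steps above (only npm test is confirmed by the contributing notes below):

```bash
git clone https://github.com/any4ai/anycrawl-mcp-server.git   # repository URL is an assumption
cd anycrawl-mcp-server
npm install        # Setup
npm run build      # Build
npm test           # Test
npm run lint       # Lint
npm run format     # Format
```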
Contributing
Fork the repository
Create your feature branch
Run tests: npm test
Submit a pull request
License
MIT License - see LICENSE file for details
Support
GitHub Issues: Report bugs or request features
Documentation: AnyCrawl API Docs
Email: help@anycrawl.dev
About AnyCrawl
AnyCrawl is a powerful Node.js/TypeScript crawler that turns websites into LLM-ready data and extracts structured SERP results from Google/Bing/Baidu/etc. It features native multi-threading for bulk processing and supports multiple output formats.
Website: https://anycrawl.dev