The Firecrawl MCP Server provides comprehensive web scraping, crawling, and content extraction capabilities through Firecrawl integration:
- Web Scraping: Extract content from single or multiple URLs with options for different formats (markdown, HTML, screenshots), dynamic actions, and structured data extraction.
- URL Discovery: Map websites from a starting point using sitemaps or HTML link discovery.
- Crawling: Perform asynchronous crawls with depth control, path filtering, and webhook notifications.
- Batch Operations: Process multiple URLs efficiently with job status tracking.
- Search Functionality: Conduct web searches with optional result scraping, supporting language and country filters.
- Structured Data Extraction: Use LLMs to extract structured information with customizable prompts and schemas.
- Deep Research: Combine crawling, search, and AI analysis for comprehensive web research.
- Custom Actions: Execute pre-scraping actions like waiting, clicking, and JavaScript execution for dynamic content.
- Error Handling: Built-in retries, rate limiting, and robust error handling.
- Monitoring: Track operation status, credit usage, and performance metrics.
Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @knacklabs for the initial implementation!
Features
- Web scraping, crawling, and discovery
- Search and content extraction
- Deep research and batch scraping
- Automatic retries and rate limiting
- Cloud and self-hosted support
- SSE support
Play around with our MCP Server on MCP.so's playground or on Klavis AI.
Installation
Running with npx
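The server can be started directly with npx (this is the same command used in the Cursor setup below):

```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```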
Manual Installation
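To install globally instead, a standard npm install should work (assuming the published package name `firecrawl-mcp`, the same name the npx commands in this document invoke):

```bash
npm install -g firecrawl-mcp
env FIRECRAWL_API_KEY=your-api-key firecrawl-mcp
```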
Running on Cursor
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide
To configure Firecrawl MCP in Cursor v0.48.6:
1. Open Cursor Settings
2. Go to Features > MCP Servers
3. Click "+ Add new global MCP server"
4. Enter the following code:
{ "mcpServers": { "firecrawl-mcp": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR-API-KEY" } } } }
To configure Firecrawl MCP in Cursor v0.45.6:
1. Open Cursor Settings
2. Go to Features > MCP Servers
3. Click "+ Add New MCP Server"
4. Enter the following:
   - Name: "firecrawl-mcp" (or your preferred name)
   - Type: "command"
   - Command:
     ```bash
     env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
     ```
If you are using Windows and are running into issues, try:

```bash
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
```
Replace `your-api-key` with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys.
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Running on Windsurf
Add this to your `./codeium/windsurf/model_config.json`:
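A minimal sketch of the entry, mirroring the Cursor configuration above (the server name key is arbitrary):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```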
Running with SSE Local Mode
To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:
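For example (this sketch assumes the server selects the SSE transport via an `SSE_LOCAL` environment variable; check the project README if your version uses a different switch):

```bash
env SSE_LOCAL=true FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```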
Then use the URL: http://localhost:3000/sse
Installing via Smithery (Legacy)
To install Firecrawl for Claude Desktop automatically via Smithery:
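A typical Smithery CLI invocation looks like this (the registry package name `@mendableai/mcp-server-firecrawl` is an assumption here; confirm it against Smithery's listing):

```bash
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
```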
Running on VS Code
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others:
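A sketch of the settings block, using VS Code's MCP configuration with a prompted API-key input (field names may vary across VS Code versions; adjust to your version's MCP schema):

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "Firecrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {
          "FIRECRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}
```

For a workspace `.vscode/mcp.json` file, the same `inputs` and `servers` objects typically go at the top level, without the outer `"mcp"` wrapper.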
Configuration
Environment Variables
Required for Cloud API
- `FIRECRAWL_API_KEY`: Your Firecrawl API key
  - Required when using the cloud API (default)
  - Optional when using a self-hosted instance with `FIRECRAWL_API_URL`
- `FIRECRAWL_API_URL` (optional): Custom API endpoint for self-hosted instances
  - Example: `https://firecrawl.your-domain.com`
  - If not provided, the cloud API will be used (requires an API key)
Optional Configuration
Retry Configuration
- `FIRECRAWL_RETRY_MAX_ATTEMPTS`: Maximum number of retry attempts (default: 3)
- `FIRECRAWL_RETRY_INITIAL_DELAY`: Initial delay in milliseconds before the first retry (default: 1000)
- `FIRECRAWL_RETRY_MAX_DELAY`: Maximum delay in milliseconds between retries (default: 10000)
- `FIRECRAWL_RETRY_BACKOFF_FACTOR`: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
- `FIRECRAWL_CREDIT_WARNING_THRESHOLD`: Credit usage warning threshold (default: 1000)
- `FIRECRAWL_CREDIT_CRITICAL_THRESHOLD`: Credit usage critical threshold (default: 100)
Configuration Examples
For cloud API usage with custom retry and credit monitoring:
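(The override values below are illustrative; the variables are the ones documented above.)

```bash
export FIRECRAWL_API_KEY=your-api-key

# Optional: tune retry behavior
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5
export FIRECRAWL_RETRY_INITIAL_DELAY=2000
export FIRECRAWL_RETRY_MAX_DELAY=30000
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3

# Optional: tune credit monitoring thresholds
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500
```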
For self-hosted instance:
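(Again illustrative; per the Environment Variables section, the API key is optional when `FIRECRAWL_API_URL` points at your own instance.)

```bash
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional, only if your self-hosted instance requires authentication
export FIRECRAWL_API_KEY=your-api-key
```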
Usage with Claude Desktop
Add this to your `claude_desktop_config.json`:
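The same shape as the Cursor configuration works here; the retry and credit-monitoring variables can also be set in `env` (values illustrative):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}
```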
System Configuration
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
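A sketch of those defaults as a configuration object (not the server's literal source; each field maps to the environment variable named in the comment):

```typescript
const CONFIG = {
  retry: {
    maxAttempts: 3, // FIRECRAWL_RETRY_MAX_ATTEMPTS
    initialDelay: 1000, // FIRECRAWL_RETRY_INITIAL_DELAY (ms)
    maxDelay: 10000, // FIRECRAWL_RETRY_MAX_DELAY (ms)
    backoffFactor: 2, // FIRECRAWL_RETRY_BACKOFF_FACTOR
  },
  credit: {
    warningThreshold: 1000, // FIRECRAWL_CREDIT_WARNING_THRESHOLD
    criticalThreshold: 100, // FIRECRAWL_CREDIT_CRITICAL_THRESHOLD
  },
};
```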
These configurations control:
Retry Behavior
- Automatically retries failed requests due to rate limits
- Uses exponential backoff to avoid overwhelming the API
- Example: with default settings, retries are attempted at:
  - 1st retry: 1 second delay
  - 2nd retry: 2 seconds delay
  - 3rd retry: 4 seconds delay (capped at maxDelay)
Credit Usage Monitoring
- Tracks API credit consumption for cloud API usage
- Provides warnings at specified thresholds
- Helps prevent unexpected service interruption
- Example: with default settings:
  - Warning at 1000 credits remaining
  - Critical alert at 100 credits remaining
Rate Limiting and Batch Processing
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
- Automatic rate limit handling with exponential backoff
- Efficient parallel processing for batch operations
- Smart request queuing and throttling
- Automatic retries for transient errors
Available Tools
1. Scrape Tool (`firecrawl_scrape`)
Scrape content from a single URL with advanced options.
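Example usage (a representative call; `formats` matches the formats listed in the overview, and the remaining options are illustrative Firecrawl scrape settings):

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000
  }
}
```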
2. Batch Scrape Tool (`firecrawl_batch_scrape`)
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
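Example usage (a list of URLs plus shared scrape options, with option names as in the scrape example above):

```json
{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```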
Response includes operation ID for status checking:
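An illustrative response (MCP text content carrying the queued-operation message):

```json
{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}
```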
3. Check Batch Status (`firecrawl_check_batch_status`)
Check the status of a batch operation.
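For example, using the operation ID returned by the batch scrape above:

```json
{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}
```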
4. Search Tool (`firecrawl_search`)
Search the web and optionally extract content from search results.
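Example usage (`lang` and `country` reflect the language and country filters mentioned in the overview, and `scrapeOptions` enables the optional result scraping; treat exact parameter names as indicative):

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```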
5. Crawl Tool (`firecrawl_crawl`)
Start an asynchronous crawl with advanced options.
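Example usage (`maxDepth` and the wildcard URL illustrate the depth control and path filtering described above; exact option names are indicative):

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false
  }
}
```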
6. Extract Tool (`firecrawl_extract`)
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
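Example usage, built from the options listed below:

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
```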
Example response:
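(Illustrative, matching the schema above.)

```json
{
  "content": [
    {
      "type": "text",
      "text": "{\n  \"name\": \"Example Product\",\n  \"price\": 99.99,\n  \"description\": \"This is an example product description\"\n}"
    }
  ],
  "isError": false
}
```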
Extract Tool Options:
- `urls`: Array of URLs to extract information from
- `prompt`: Custom prompt for the LLM extraction
- `systemPrompt`: System prompt to guide the LLM
- `schema`: JSON schema for structured data extraction
- `allowExternalLinks`: Allow extraction from external links
- `enableWebSearch`: Enable web search for additional context
- `includeSubdomains`: Include subdomains in extraction
When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
7. Deep Research Tool (firecrawl_deep_research)
Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
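Example usage (see the argument list below):

```json
{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}
```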
Arguments:
- `query` (string, required): The research question or topic to explore.
- `maxDepth` (number, optional): Maximum recursive depth for crawling/search (default: 3).
- `timeLimit` (number, optional): Time limit in seconds for the research session (default: 120).
- `maxUrls` (number, optional): Maximum number of URLs to analyze (default: 50).

Returns:
- Final analysis generated by an LLM based on research (`data.finalAnalysis`).
- May also include structured activities and sources used in the research process.
8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)
Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
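Example usage (arguments as documented below):

```json
{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}
```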
Arguments:
- `url` (string, required): The base URL of the website to analyze.
- `maxUrls` (number, optional): Max number of URLs to include (default: 10).
- `showFullText` (boolean, optional): Whether to include llms-full.txt contents in the response.

Returns:
- Generated llms.txt file contents and optionally the llms-full.txt (`data.llmstxt` and/or `data.llmsfulltxt`).
Logging System
The server includes comprehensive logging:
- Operation status and progress
- Performance metrics
- Credit usage monitoring
- Rate limit tracking
- Error conditions
Example log messages:
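(Formats are illustrative, not verbatim server output.)

```
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
```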
Error Handling
The server provides robust error handling:
- Automatic retries for transient errors
- Rate limit handling with backoff
- Detailed error messages
- Credit usage warnings
- Network resilience
Example error response:
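(An illustrative MCP error payload.)

```json
{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}
```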
Development
Contributing
1. Fork the repository
2. Create your feature branch
3. Run tests: `npm test`
4. Submit a pull request
Thanks to contributors
Thanks to @vrknetha, @cawstudios for the initial implementation!
Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.
License
MIT License - see LICENSE file for details