Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha and @cawstudios for the initial implementation!
Features
- Scrape, crawl, search, extract and batch scrape support
- Web scraping with JS rendering
- URL discovery and crawling
- Web search with content extraction
- Automatic retries with exponential backoff
- Efficient batch processing with built-in rate limiting
- Credit usage monitoring for cloud API
- Comprehensive logging system
- Support for cloud and self-hosted Firecrawl instances
- Mobile/Desktop viewport support
- Smart content filtering with tag inclusion/exclusion
Installation
Running with npx
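Start the server directly with npx, passing your API key in the environment (this is the same command used in the Cursor setup below):

```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```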
Manual Installation
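For a manual setup, a global npm install is the usual route; this assumes the package is published under the same `firecrawl-mcp` name used in the npx command above:

```bash
npm install -g firecrawl-mcp
```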
Running on Cursor
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
To configure Firecrawl MCP in Cursor:
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add New MCP Server"
- Enter the following:
- Name: "firecrawl-mcp" (or your preferred name)
- Type: "command"
- Command:
```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```
Replace your-api-key with your Firecrawl API key.
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Installing via Smithery (Legacy)
To install Firecrawl for Claude Desktop automatically via Smithery:
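The Smithery CLI installs MCP servers with a command of roughly this shape; the package identifier shown here is inferred from the repository name and may differ:

```bash
npx -y @smithery/cli install mcp-server-firecrawl --client claude
```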
Configuration
Environment Variables
Required for Cloud API
- FIRECRAWL_API_KEY: Your Firecrawl API key
  - Required when using the cloud API (default)
  - Optional when using a self-hosted instance with FIRECRAWL_API_URL
- FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
  - Example: https://firecrawl.your-domain.com
  - If not provided, the cloud API will be used (requires an API key)
Optional Configuration
Retry Configuration
- FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
- FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
- FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
- FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
- FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
- FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)
Configuration Examples
For cloud API usage with custom retry and credit monitoring:
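A sketch using the environment variables documented above; the specific values are illustrative:

```bash
export FIRECRAWL_API_KEY=your-api-key
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5
export FIRECRAWL_RETRY_INITIAL_DELAY=2000
export FIRECRAWL_RETRY_MAX_DELAY=30000
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500
```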
For a self-hosted instance:
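For example, pointing the server at your own endpoint (values illustrative):

```bash
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# API key is optional when self-hosting
export FIRECRAWL_API_KEY=your-api-key
```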
Usage with Claude Desktop
Add this to your claude_desktop_config.json:
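The entry below follows the standard MCP server configuration shape; the server key name ("firecrawl-mcp") is your choice:

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```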
System Configuration
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
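Assuming no overrides, the defaults from the environment-variable reference above apply:

```bash
# Defaults applied when the corresponding variables are unset
FIRECRAWL_RETRY_MAX_ATTEMPTS=3
FIRECRAWL_RETRY_INITIAL_DELAY=1000    # 1 second
FIRECRAWL_RETRY_MAX_DELAY=10000       # 10 seconds
FIRECRAWL_RETRY_BACKOFF_FACTOR=2
FIRECRAWL_CREDIT_WARNING_THRESHOLD=1000
FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=100
```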
These configurations control:
- Retry Behavior
- Automatically retries failed requests due to rate limits
- Uses exponential backoff to avoid overwhelming the API
- Example: With default settings, retries will be attempted at:
- 1st retry: 1 second delay
- 2nd retry: 2 seconds delay
- 3rd retry: 4 seconds delay (capped at maxDelay)
- Credit Usage Monitoring
- Tracks API credit consumption for cloud API usage
- Provides warnings at specified thresholds
- Helps prevent unexpected service interruption
- Example: With default settings:
- Warning at 1000 credits remaining
- Critical alert at 100 credits remaining
Rate Limiting and Batch Processing
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
- Automatic rate limit handling with exponential backoff
- Efficient parallel processing for batch operations
- Smart request queuing and throttling
- Automatic retries for transient errors
Available Tools
1. Scrape Tool (firecrawl_scrape)
Scrape content from a single URL with advanced options.
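A representative call in MCP's name/arguments form; the option names mirror the features listed above (JS rendering wait, mobile viewport, tag filtering) and should be treated as illustrative rather than an exhaustive schema:

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000,
    "mobile": false,
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"]
  }
}
```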
2. Batch Scrape Tool (firecrawl_batch_scrape)
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
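An illustrative invocation; per-URL scrape options are assumed to nest under an options key:

```json
{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```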
Response includes operation ID for status checking:
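A sketch of the response shape; the exact wording of the status message may differ:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}
```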
3. Check Batch Status (firecrawl_check_batch_status)
Check the status of a batch operation.
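Pass the operation ID returned by the batch scrape; the id parameter name is an assumption here:

```json
{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}
```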
4. Search Tool (firecrawl_search)
Search the web and optionally extract content from search results.
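An illustrative call; scrapeOptions is assumed to accept the same options as the scrape tool for extracting content from results:

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "your search query",
    "limit": 5,
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```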
5. Crawl Tool (firecrawl_crawl)
Start an asynchronous crawl with advanced options.
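An illustrative call; the parameter names follow Firecrawl's crawl options and are assumptions here:

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false
  }
}
```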
6. Extract Tool (firecrawl_extract)
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
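A representative call using the options documented below; the URLs, prompts, and schema are illustrative:

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
```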
Example response:
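A sketch of a successful response, assuming the product schema from the call above:

```json
{
  "success": true,
  "data": {
    "name": "Example Product",
    "price": 99.99,
    "description": "This is an example product description"
  }
}
```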
Extract Tool Options:
- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- systemPrompt: System prompt to guide the LLM
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction
When using a self-hosted instance, extraction uses your configured LLM. With the cloud API, it uses Firecrawl's managed LLM service.
Logging System
The server includes comprehensive logging:
- Operation status and progress
- Performance metrics
- Credit usage monitoring
- Rate limit tracking
- Error conditions
Example log messages:
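Illustrative messages covering the categories above; exact formats may vary:

```
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
```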
Error Handling
The server provides robust error handling:
- Automatic retries for transient errors
- Rate limit handling with backoff
- Detailed error messages
- Credit usage warnings
- Network resilience
Example error response:
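A sketch of an error result in the standard MCP tool-response shape, with isError set:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}
```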
Development
Contributing
- Fork the repository
- Create your feature branch
- Run tests: npm test
- Submit a pull request
License
MIT License - see LICENSE file for details