The Firecrawl MCP Server provides comprehensive web scraping, crawling, and content extraction capabilities through Firecrawl integration:
Web Scraping: Extract content from single or multiple URLs with options for different formats (markdown, HTML, screenshots), dynamic actions, and structured data extraction.
URL Discovery: Map websites from a starting point using sitemaps or HTML link discovery.
Crawling: Perform asynchronous crawls with depth control, path filtering, and webhook notifications.
Batch Operations: Process multiple URLs efficiently with job status tracking.
Search Functionality: Conduct web searches with optional result scraping, supporting language and country filters.
Structured Data Extraction: Use LLMs to extract structured information with customizable prompts and schemas.
Deep Research: Combine crawling, search, and AI analysis for comprehensive web research.
Custom Actions: Execute pre-scraping actions like waiting, clicking, and JavaScript execution for dynamic content.
Error Handling: Built-in retries, rate limiting, and robust error handling.
Monitoring: Track operation status, credit usage, and performance metrics.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@ followed by the MCP server name and your instructions, e.g., "@mcp-server-firecrawl scrape the latest AI news from TechCrunch"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @knacklabs for the initial implementation!
Features
Web scraping, crawling, and discovery
Search and content extraction
Deep research and batch scraping
Automatic retries and rate limiting
Cloud and self-hosted support
SSE support
Play around with our MCP Server on MCP.so's playground or on Klavis AI.
Installation
Running with npx
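To run the server directly with npx, a minimal command (with your API key exported in the environment, matching the Cursor command shown later) looks like this:

```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```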
Manual Installation
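For a global manual install, the package can be installed with npm (package name as used throughout this README):

```bash
npm install -g firecrawl-mcp
```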
Running on Cursor
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide
To configure Firecrawl MCP in Cursor v0.48.6
Open Cursor Settings
Go to Features > MCP Servers
Click "+ Add new global MCP server"
Enter the following code:
{ "mcpServers": { "firecrawl-mcp": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR-API-KEY" } } } }
To configure Firecrawl MCP in Cursor v0.45.6
Open Cursor Settings
Go to Features > MCP Servers
Click "+ Add New MCP Server"
Enter the following:
Name: "firecrawl-mcp" (or your preferred name)
Type: "command"
Command:
```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```
If you are using Windows and are running into issues, try
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Running on Windsurf
Add this to your ./codeium/windsurf/model_config.json:
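A sketch of the Windsurf entry, mirroring the Cursor configuration above (the server name and key placeholder are illustrative):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```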
Running with Streamable HTTP Local Mode
To run the server using Streamable HTTP locally instead of the default stdio transport:
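A sketch of how this is typically started; the environment variable that switches transports is an assumption here, so check the package documentation if it does not take effect:

```bash
# HTTP_STREAMABLE_SERVER is assumed; consult the firecrawl-mcp docs for the exact variable
env HTTP_STREAMABLE_SERVER=true FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```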
Use the url: http://localhost:3000/mcp
Installing via Smithery (Legacy)
To install Firecrawl for Claude Desktop automatically via Smithery:
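The Smithery CLI invocation generally looks like the following (the package identifier is assumed from the upstream repository name):

```bash
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
```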
Running on VS Code
For one-click installation, click one of the install buttons below...
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:
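A sketch of the User Settings (JSON) entry; the input prompt for the API key follows the usual VS Code MCP pattern, and the exact keys are assumptions to adapt as needed:

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "Firecrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {
          "FIRECRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}
```

If you use .vscode/mcp.json instead, the same "inputs" and "servers" blocks typically go at the top level without the outer "mcp" key.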
Configuration
Environment Variables
Required for Cloud API
FIRECRAWL_API_KEY: Your Firecrawl API key
Required when using the cloud API (default)
Optional when using a self-hosted instance with FIRECRAWL_API_URL
FIRECRAWL_API_URL (optional): Custom API endpoint for self-hosted instances
Example: https://firecrawl.your-domain.com
If not provided, the cloud API will be used (requires an API key)
Optional Configuration
Retry Configuration
FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)
Configuration Examples
For cloud API usage with custom retry and credit monitoring:
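A sketch setting the retry and credit variables from the previous section alongside the API key (values shown are illustrative):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key",
        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}
```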
For self-hosted instance:
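A self-hosted setup points FIRECRAWL_API_URL at your own endpoint; whether an API key is also needed depends on your instance (sketch):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_URL": "https://firecrawl.your-domain.com"
      }
    }
  }
}
```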
Usage with Claude Desktop
Add this to your claude_desktop_config.json:
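The entry follows the same shape as the other clients (sketch, with a placeholder key):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```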
System Configuration
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
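Based on the defaults documented in the Optional Configuration section above, the effective settings when nothing is configured look roughly like this:

```json
{
  "retry": {
    "maxAttempts": 3,
    "initialDelay": 1000,
    "maxDelay": 10000,
    "backoffFactor": 2
  },
  "credit": {
    "warningThreshold": 1000,
    "criticalThreshold": 100
  }
}
```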
These configurations control:
Retry Behavior
Automatically retries failed requests due to rate limits
Uses exponential backoff to avoid overwhelming the API
Example: With default settings, retries will be attempted at:
1st retry: 1 second delay
2nd retry: 2 seconds delay
3rd retry: 4 seconds delay (capped at maxDelay)
Credit Usage Monitoring
Tracks API credit consumption for cloud API usage
Provides warnings at specified thresholds
Helps prevent unexpected service interruption
Example: With default settings:
Warning at 1000 credits remaining
Critical alert at 100 credits remaining
Rate Limiting and Batch Processing
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
Automatic rate limit handling with exponential backoff
Efficient parallel processing for batch operations
Smart request queuing and throttling
Automatic retries for transient errors
How to Choose a Tool
Use this guide to select the right tool for your task:
If you know the exact URL(s) you want:
For one: use scrape (with JSON format for structured data)
For many: use batch_scrape
If you need to discover URLs on a site: use map
If you want to search the web for info: use search
If you need complex research across multiple unknown sources: use agent
If you want to analyze a whole site or section: use crawl (with limits!)
Quick Reference Table
| Tool | Best for | Returns |
| --- | --- | --- |
| scrape | Single page content | JSON (preferred) or markdown |
| batch_scrape | Multiple known URLs | JSON (preferred) or markdown[] |
| map | Discovering URLs on a site | URL[] |
| crawl | Multi-page extraction (with limits) | markdown/html[] |
| search | Web search for info | results[] |
| agent | Complex multi-source research | JSON (structured data) |
Format Selection Guide
When using scrape or batch_scrape, choose the right format:
JSON format (recommended for most cases): Use when you need specific data from a page. Define a schema based on what you need to extract. This keeps responses small and avoids context window overflow.
Markdown format (use sparingly): Only when you genuinely need the full page content, such as reading an entire article for summarization or analyzing page structure.
Available Tools
1. Scrape Tool (firecrawl_scrape)
Scrape content from a single URL with advanced options.
Best for:
Single page content extraction, when you know exactly which page contains the information.
Not recommended for:
Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
When you're unsure which page contains the information (use search)
Common mistakes:
Using scrape for a list of URLs (use batch_scrape instead).
Using markdown format by default (use JSON format to extract only what you need).
Choosing the right format:
JSON format (preferred): For most use cases, use JSON format with a schema to extract only the specific data needed. This keeps responses focused and prevents context window overflow.
Markdown format: Only when the task genuinely requires full page content (e.g., summarizing an entire article, analyzing page structure).
Prompt Example:
"Get the product details from https://example.com/product."
Usage Example (JSON format - preferred):
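A sketch of a JSON-format scrape call; the exact shape of the formats entry may vary by version, so treat the field names as illustrative:

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/product",
    "formats": [
      {
        "type": "json",
        "prompt": "Extract the product name, price, and description",
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "number" },
            "description": { "type": "string" }
          },
          "required": ["name", "price"]
        }
      }
    ],
    "onlyMainContent": true
  }
}
```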
Usage Example (markdown format - when full content needed):
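A sketch of a markdown-format scrape for when the full page text is genuinely needed (options shown are illustrative):

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/article",
    "formats": ["markdown"],
    "onlyMainContent": true
  }
}
```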
Usage Example (branding format - extract brand identity):
Branding format: Extracts comprehensive brand identity (colors, fonts, typography, spacing, logo, UI components) for design analysis or style replication.
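A sketch of a branding-format request; the format identifier "branding" is assumed from the description above:

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["branding"]
  }
}
```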
Returns:
JSON structured data, markdown, branding profile, or other formats as specified.
2. Batch Scrape Tool (firecrawl_batch_scrape)
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
Best for:
Retrieving content from multiple pages, when you know exactly which pages to scrape.
Not recommended for:
Discovering URLs (use map first if you don't know the URLs)
Scraping a single page (use scrape)
Common mistakes:
Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)
Prompt Example:
"Get the content of these three blog posts: [url1, url2, url3]."
Usage Example:
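A sketch of a batch call with a small set of known URLs; the per-URL options mirror the scrape tool and are illustrative:

```json
{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": [
      "https://example.com/blog/post-1",
      "https://example.com/blog/post-2",
      "https://example.com/blog/post-3"
    ],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```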
Returns:
Response includes operation ID for status checking:
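The reply is a standard MCP text response carrying the queued operation ID; the exact message wording is illustrative:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}
```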
3. Check Batch Status (firecrawl_check_batch_status)
Check the status of a batch operation.
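A sketch of a status check using the ID returned by the batch call:

```json
{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}
```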
4. Map Tool (firecrawl_map)
Map a website to discover all indexed URLs on the site.
Best for:
Discovering URLs on a website before deciding what to scrape
Finding specific sections of a website
Not recommended for:
When you already know which specific URL you need (use scrape or batch_scrape)
When you need the content of the pages (use scrape after mapping)
Common mistakes:
Using crawl to discover URLs instead of map
Prompt Example:
"List all URLs on example.com."
Usage Example:
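A minimal sketch of a map call against a site root:

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}
```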
Returns:
Array of URLs found on the site
5. Search Tool (firecrawl_search)
Search the web and optionally extract content from search results.
Best for:
Finding specific information across multiple websites, when you don't know which website has the information.
When you need the most relevant content for a query
Not recommended for:
When you already know which website to scrape (use scrape)
When you need comprehensive coverage of a single website (use map or crawl)
Common mistakes:
Using crawl or map for open-ended questions (use search instead)
Usage Example:
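A sketch of a search call with optional scraping of the top results; parameter names follow the language and country filters mentioned above and are illustrative:

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers 2023",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```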
Returns:
Array of search results (with optional scraped content)
Prompt Example:
"Find the latest research papers on AI published in 2023."
6. Crawl Tool (firecrawl_crawl)
Start an asynchronous crawl job on a website and extract content from all pages.
Best for:
Extracting content from multiple related pages, when you need comprehensive coverage.
Not recommended for:
Extracting content from a single page (use scrape)
When token limits are a concern (use map + batch_scrape)
When you need fast results (crawling can be slow)
Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
Common mistakes:
Setting limit or maxDepth too high (causes token overflow)
Using crawl for a single page (use scrape instead)
Prompt Example:
"Get all blog posts from the first two levels of example.com/blog."
Usage Example:
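A sketch of a bounded crawl matching the prompt above; keep maxDepth and limit small to avoid token overflow (values illustrative):

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog",
    "maxDepth": 2,
    "limit": 50,
    "allowExternalLinks": false
  }
}
```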
Returns:
Response includes operation ID for status checking:
7. Check Crawl Status (firecrawl_check_crawl_status)
Check the status of a crawl job.
Returns:
Response includes the status of the crawl job:
8. Extract Tool (firecrawl_extract)
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
Best for:
Extracting specific structured data like prices, names, details.
Not recommended for:
When you need the full content of a page (use scrape)
When you're not looking for specific structured data
Arguments:
urls: Array of URLs to extract information from
prompt: Custom prompt for the LLM extraction
systemPrompt: System prompt to guide the LLM
schema: JSON schema for structured data extraction
allowExternalLinks: Allow extraction from external links
enableWebSearch: Enable web search for additional context
includeSubdomains: Include subdomains in extraction
When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.
Prompt Example:
"Extract the product name, price, and description from these product pages."
Usage Example:
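A sketch of an extract call using the arguments listed above; the schema follows standard JSON Schema:

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/product/1", "https://example.com/product/2"],
    "prompt": "Extract the product name, price, and description from these pages",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price", "description"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false
  }
}
```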
Returns:
Extracted structured data as defined by your schema
9. Agent Tool (firecrawl_agent)
Autonomous web research agent. This is a separate AI agent layer that independently browses the internet, searches for information, navigates through pages, and extracts structured data based on your query.
How it works:
The agent performs web searches, follows links, reads pages, and gathers data autonomously. This runs asynchronously - it returns a job ID immediately, and you poll firecrawl_agent_status to check when complete and retrieve results.
Async workflow:
Call firecrawl_agent with your prompt/schema → returns a job ID
Do other work while the agent researches (can take minutes for complex queries)
Poll firecrawl_agent_status with the job ID to check progress
When status is "completed", the response includes the extracted data
Best for:
Complex research tasks where you don't know the exact URLs
Multi-source data gathering
Finding information scattered across the web
Tasks where you can do other work while waiting for results
Not recommended for:
Simple single-page scraping where you know the URL (use scrape with JSON format - faster and cheaper)
Arguments:
prompt: Natural language description of the data you want (required, max 10,000 characters)
urls: Optional array of URLs to focus the agent on specific pages
schema: Optional JSON schema for structured output
Prompt Example:
"Find the founders of Firecrawl and their backgrounds"
Usage Example (start agent, then poll for results):
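A sketch of starting the agent with a prompt and an optional output schema (argument names follow the list above; the schema contents are illustrative):

```json
{
  "name": "firecrawl_agent",
  "arguments": {
    "prompt": "Find the founders of Firecrawl and their backgrounds",
    "schema": {
      "type": "object",
      "properties": {
        "founders": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "background": { "type": "string" }
            }
          }
        }
      }
    }
  }
}
```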
Then poll with firecrawl_agent_status using the returned job ID.
Usage Example (with URLs - agent focuses on specific pages):
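When you already know which pages matter, pass them via urls so the agent focuses there (sketch):

```json
{
  "name": "firecrawl_agent",
  "arguments": {
    "prompt": "Summarize the pricing tiers and their limits",
    "urls": ["https://example.com/pricing"]
  }
}
```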
Returns:
Job ID for status checking. Use firecrawl_agent_status to poll for results.
10. Check Agent Status (firecrawl_agent_status)
Check the status of an agent job and retrieve results when complete. Use this to poll for results after starting an agent.
Polling pattern: Agent research can take minutes for complex queries. Poll this endpoint periodically (e.g., every 10-30 seconds) until status is "completed" or "failed".
Possible statuses:
processing: Agent is still researching - check back later
completed: Research finished - response includes the extracted data
failed: An error occurred
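A sketch of a polling call; the argument name for the job ID is an assumption here:

```json
{
  "name": "firecrawl_agent_status",
  "arguments": {
    "id": "agent-job-id"
  }
}
```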
Logging System
The server includes comprehensive logging:
Operation status and progress
Performance metrics
Credit usage monitoring
Rate limit tracking
Error conditions
Example log messages:
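Illustrative log lines covering the event types listed above (exact wording may differ):

```
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
```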
Error Handling
The server provides robust error handling:
Automatic retries for transient errors
Rate limit handling with backoff
Detailed error messages
Credit usage warnings
Network resilience
Example error response:
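Errors come back as standard MCP tool responses with isError set; the message text here is illustrative:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}
```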
Development
Contributing
Fork the repository
Create your feature branch
Run tests:
npm test
Submit a pull request
Thanks to contributors
Thanks to @vrknetha, @cawstudios for the initial implementation!
Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.
License
MIT License - see LICENSE file for details