MCP Server Firecrawl

remote-capable server

The server can be hosted and run remotely because it primarily relies on remote services or has no dependency on the local environment.

Integrations

  • Integrates with Claude applications, providing web scraping and content search capabilities to Claude via the desktop app and the VSCode extension.
  • Supports using .env files to configure the Firecrawl API key for secure development environments.
  • Provides support for structured data extraction with JSON schema validation and custom JSON output options.
  • Supports outputting scraped content in Markdown format for better readability and structure.
  • Provides TypeScript interfaces for the API, with examples written in TypeScript for better type safety and developer experience.

Firecrawl MCP Server

A Model Context Protocol (MCP) server for web scraping, content searching, site crawling, and data extraction using the Firecrawl API.

Features

  • Web Scraping: Extract content from any webpage with customizable options
    • Mobile device emulation
    • Ad and popup blocking
    • Content filtering
    • Structured data extraction
    • Multiple output formats
  • Content Search: Intelligent search capabilities
    • Multi-language support
    • Location-based results
    • Customizable result limits
    • Structured output formats
  • Site Crawling: Advanced web crawling functionality
    • Depth control
    • Path filtering
    • Rate limiting
    • Progress tracking
    • Sitemap integration
  • Site Mapping: Generate site structure maps
    • Subdomain support
    • Search filtering
    • Link analysis
    • Visual hierarchy
  • Data Extraction: Extract structured data from multiple URLs
    • Schema validation
    • Batch processing
    • Web search enrichment
    • Custom extraction prompts

Installation

# Global installation
npm install -g @modelcontextprotocol/mcp-server-firecrawl

# Local project installation
npm install @modelcontextprotocol/mcp-server-firecrawl

Quick Start

  1. Get your Firecrawl API key from the developer portal
  2. Set your API key:
    Unix/Linux/macOS (bash/zsh):
    export FIRECRAWL_API_KEY=your-api-key
    Windows (Command Prompt):
    set FIRECRAWL_API_KEY=your-api-key
    Windows (PowerShell):
    $env:FIRECRAWL_API_KEY = "your-api-key"
    Alternative: using a .env file (recommended for development):
    # Install dotenv
    npm install dotenv
    # Create .env file
    echo "FIRECRAWL_API_KEY=your-api-key" > .env
    Then in your code (see the sketch after this list):
    import dotenv from 'dotenv';
    dotenv.config();
  3. Run the server:
    mcp-server-firecrawl
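
As referenced in step 2, here is a minimal sketch of an entry script that loads the key from a .env file and fails fast when it is missing. The file name, error message, and log line are illustrative; only the dotenv usage mirrors the snippet above.

// check-env.ts (illustrative): load .env and verify the key before starting the server
import dotenv from 'dotenv';

dotenv.config(); // reads FIRECRAWL_API_KEY from a local .env file, if present

if (!process.env.FIRECRAWL_API_KEY) {
  // Fail fast with a clear message instead of letting later API calls fail
  throw new Error('FIRECRAWL_API_KEY is not set; export it or add it to .env');
}

console.log('Firecrawl API key loaded');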

Integration

Claude Desktop App

Add to your MCP settings:

{ "firecrawl": { "command": "mcp-server-firecrawl", "env": { "FIRECRAWL_API_KEY": "your-api-key" } } }

Claude VSCode Extension

Add to your MCP configuration:

{ "mcpServers": { "firecrawl": { "command": "mcp-server-firecrawl", "env": { "FIRECRAWL_API_KEY": "your-api-key" } } } }

Usage Examples

Web Scraping

// Basic scraping
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown"],
    onlyMainContent: true
  }
}

// Advanced extraction
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com/blog",
    jsonOptions: {
      prompt: "Extract article content",
      schema: {
        title: "string",
        content: "string"
      }
    },
    mobile: true,
    blockAds: true
  }
}
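
For reference, here is a hedged sketch of how a TypeScript MCP client might send the basic scrape_url call above using the @modelcontextprotocol/sdk package. The import paths and callTool signature reflect recent SDK versions and may differ in yours; the client name and version strings are placeholders.

// Illustrative only: invoking scrape_url from a TypeScript MCP client.
// Assumes @modelcontextprotocol/sdk is installed; exact API may vary by SDK version.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "mcp-server-firecrawl",
  env: { FIRECRAWL_API_KEY: process.env.FIRECRAWL_API_KEY ?? "" },
});

const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

const result = await client.callTool({
  name: "scrape_url",
  arguments: { url: "https://example.com", formats: ["markdown"], onlyMainContent: true },
});

console.log(JSON.stringify(result, null, 2));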

Site Crawling

// Basic crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 2,
    limit: 100
  }
}

// Advanced crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 3,
    includePaths: ["/blog", "/products"],
    excludePaths: ["/admin"],
    ignoreQueryParameters: true
  }
}

Site Mapping

// Generate site map
{
  name: "map",
  arguments: {
    url: "https://example.com",
    includeSubdomains: true,
    limit: 1000
  }
}

Data Extraction

// Extract structured data
{
  name: "extract",
  arguments: {
    urls: [
      "https://example.com/product1",
      "https://example.com/product2"
    ],
    prompt: "Extract product details",
    schema: {
      name: "string",
      price: "number",
      description: "string"
    }
  }
}

Configuration

See configuration guide for detailed setup options.

API Documentation

See API documentation for detailed endpoint specifications.

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Start in development mode
npm run dev

Examples

Check the examples directory for more usage examples.

Error Handling

The server implements robust error handling (see the retry sketch after this list):

  • Rate limiting with exponential backoff
  • Automatic retries
  • Detailed error messages
  • Debug logging
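
The bullets above mention automatic retries with exponential backoff. As a rough illustration of that pattern (not the server's actual code; the function name and limits here are made up), such a wrapper could look like this:

// Illustrative retry helper with exponential backoff; names and limits are hypothetical.
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3, baseDelayMs = 500): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 500 ms, 1000 ms, 2000 ms, ... before the next attempt
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: wrap a rate-limited call
// const page = await withRetries(() => scrapeSomething("https://example.com"));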

Security

  • API key protection
  • Request validation
  • Domain allowlisting (see the sketch after this list)
  • Rate limiting
  • Safe error messages
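
For illustration, a domain-allowlist check like the one named above can be quite small. The allowed domains and helper name below are hypothetical and not part of the server's configuration surface.

// Illustrative allowlist check; the allowed domains and helper name are hypothetical.
const ALLOWED_DOMAINS = new Set(["example.com", "docs.example.com"]);

function isAllowedUrl(rawUrl: string): boolean {
  try {
    const { hostname } = new URL(rawUrl);
    // Accept exact matches and subdomains of allowed entries
    return [...ALLOWED_DOMAINS].some(
      (domain) => hostname === domain || hostname.endsWith(`.${domain}`)
    );
  } catch {
    return false; // Reject anything that is not a valid URL
  }
}

// isAllowedUrl("https://docs.example.com/page") -> true
// isAllowedUrl("https://evil.example.net") -> false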

Contributing

See CONTRIBUTING.md for contribution guidelines.

License

MIT License - see LICENSE for details.

