MCP-Crawl4AI

A Model Context Protocol server for web crawling powered by Crawl4AI

CI Codecov PyPI

Ruff Pre-commit Dependabot

Python 3.13+ License: MIT

MCP Compatible FastMCP v3 Crawl4AI Docker Documentation

GitHub stars GitHub last commit


Overview

MCP-Crawl4AI is a Model Context Protocol server that gives AI systems access to the live web. Built on FastMCP v3 and Crawl4AI, it exposes 4 tools, 2 resources, and 3 prompts through the standardized MCP interface, backed by a lifespan-managed headless Chromium browser.

Only 2 runtime dependencies: fastmcp and crawl4ai.

Key Features

  • Full MCP compliance via FastMCP v3 with tool annotations (readOnlyHint, destructiveHint, etc.)

  • 4 focused tools: canonical scrape and crawl, plus session lifecycle and artifact retrieval

  • 3 prompts for common LLM workflows (summarize, extract schema, compare pages)

  • 2 resources exposing server configuration and version info

  • Headless Chromium managed as a lifespan singleton (start once, reuse everywhere)

  • Multiple transports — stdio (default) and Streamable HTTP

  • LLM-optimized output — markdown, cleaned HTML, raw HTML, or plain text

  • Canonical option groups for extraction, runtime, diagnostics, sessions, rendering, and traversal

  • List and deep traversal in one crawl contract

  • Session-aware workflows with explicit session close and artifact retrieval tools

  • Auto browser setup — detects missing Playwright browsers and installs automatically


Installation

# Via pip
pip install mcp-crawl4ai
mcp-crawl4ai --setup       # one-time: installs Playwright browsers

# Via uv
uv add mcp-crawl4ai
mcp-crawl4ai --setup       # one-time: installs Playwright browsers

# Via Docker
docker build -t mcp-crawl4ai .
docker run -p 8000:8000 mcp-crawl4ai

The Docker image includes Playwright browsers — no separate setup needed.

# From source (development)
git clone https://github.com/wyattowalsh/mcp-crawl4ai.git
cd mcp-crawl4ai
uv sync --group dev
mcp-crawl4ai --setup
NOTE

The server auto-detects missing Playwright browsers on first startup and attempts to install them automatically. You can also run mcp-crawl4ai --setup or crawl4ai-setup manually at any time.


Quick Start

# stdio transport (default)
mcp-crawl4ai

# Streamable HTTP transport
mcp-crawl4ai --transport http --port 8000
NOTE

HTTP binds to 127.0.0.1 by default (private/local only). For external exposure, set --host explicitly and put a reverse proxy in front for TLS and authentication.

Add to your Claude Desktop MCP settings (claude_desktop_config.json):

{
  "mcpServers": {
    "crawl4ai": {
      "command": "mcp-crawl4ai",
      "args": ["--transport", "stdio"]
    }
  }
}
Or register the server with the Claude Code CLI:

claude mcp add crawl4ai -- mcp-crawl4ai --transport stdio

To test interactively, launch the MCP Inspector:

npx @modelcontextprotocol/inspector uv --directory . run mcp-crawl4ai

Tools

The canonical surface now exposes 4 tools:

scrape

Scrape one URL or a bounded list of URLs (up to 20) with a single canonical envelope response.

  • Input: targets (str | list[str]) and optional grouped options

  • Supports extraction (schema, extraction_mode), runtime controls, diagnostics, session settings, render settings, and artifact capture

  • Returns canonical JSON envelope with schema_version, tool, ok, data/items, meta, warnings, error
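As a client-side sketch of consuming that envelope: the field names (ok, tool, data, items, warnings, error) follow this README, but the payload shapes inside data/items are assumptions.

```python
# Hypothetical helper for unwrapping the canonical envelope. Field names
# come from this README; the payload contents are illustrative.

def unwrap_envelope(envelope: dict):
    """Return the payload of a scrape/crawl envelope, raising on failure."""
    if not envelope.get("ok"):
        raise RuntimeError(f"{envelope.get('tool')} failed: {envelope.get('error')}")
    for warning in envelope.get("warnings") or []:
        print(f"warning: {warning}")
    # Single-target calls carry `data`; multi-target calls carry `items`.
    return envelope["data"] if "data" in envelope else envelope.get("items")
```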

crawl

Crawl with canonical traversal controls.

  • options.traversal.mode="list" for bounded list traversal

  • options.traversal.mode="deep" for recursive BFS/DFS traversal from a single seed

  • Shares scrape option groups plus traversal options in the same envelope shape
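To make the two traversal modes concrete, here is a hedged sketch of option payloads. Only options.traversal.mode is documented above; keys like max_depth and max_pages are assumed limit names, not the server's published schema.

```python
# Illustrative option payloads for the crawl tool's two traversal modes.
# Only traversal.mode is documented; max_depth/max_pages are assumptions.

def list_crawl_options(output_format: str = "markdown") -> dict:
    """Bounded list traversal: visit exactly the URLs passed as targets."""
    return {
        "traversal": {"mode": "list"},
        "conversion": {"output_format": output_format},
    }

def deep_crawl_options(max_depth: int = 2, max_pages: int = 25) -> dict:
    """Recursive BFS/DFS traversal from a single seed (advanced, opt-in)."""
    return {
        "traversal": {
            "mode": "deep",
            "max_depth": max_depth,   # assumed key: bound recursion depth
            "max_pages": max_pages,   # assumed key: bound total page count
        },
    }
```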

close_session

Close a stateful session created via options.session.session_id.

get_artifact

Retrieve artifact metadata/content captured during scrape or crawl when options.conversion.capture_artifacts is enabled.

Choosing core vs advanced usage

  • Core path (recommended): use scrape/crawl with minimal options (runtime, conversion.output_format, traversal.mode="list"). This keeps behavior predictable and low-risk for most agent workflows.

  • Advanced path (explicit opt-in): use deep traversal, custom dispatcher controls, JS transforms, extraction schemas, and artifact capture only when required by task outcomes.

  • Safety budgets and gates: inspect config://server for settings.defaults, settings.limits, settings.policies, and settings.capabilities to understand active constraints and feature gates before enabling advanced options.
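One way to apply that guidance is a pre-flight check against the config://server payload. The settings.capabilities and settings.limits nesting follows this README; the leaf keys ("deep_traversal", "max_pages") are hypothetical.

```python
# Hypothetical gate before enabling advanced crawl options. The nesting
# follows the README; the individual keys are illustrative.

def deep_traversal_allowed(server_config: dict, requested_pages: int) -> bool:
    settings = server_config.get("settings", {})
    if not settings.get("capabilities", {}).get("deep_traversal", False):
        return False  # feature gate is off: stay on the core path
    max_pages = settings.get("limits", {}).get("max_pages")
    return max_pages is None or requested_pages <= max_pages
```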


Resources

URI                | MIME Type        | Description
-------------------|------------------|----------------------------------------------------------------------
config://server    | application/json | Current server configuration: name, version, tool list, browser config
crawl4ai://version | application/json | Server and dependency version information (server, crawl4ai, fastmcp)

Prompts

Prompt                  | Parameters                         | Description
------------------------|------------------------------------|----------------------------------------------------------------
summarize_page          | url, focus (default: "key points") | Crawl a page and summarize its content with the specified focus
build_extraction_schema | url, data_type                     | Inspect a page and build a CSS extraction schema for scrape
compare_pages           | url1, url2                         | Crawl two pages and produce a structured comparison


Architecture

graph TD
    A[MCP Client] -->|stdio / HTTP| B[FastMCP v3]
    B --> C[Tool Router]
    C --> D[scrape]
    C --> E[crawl]
    C --> F[close_session]
    C --> G[get_artifact]
    D & E & F & G --> N[AsyncWebCrawler Singleton]
    N --> O[Headless Chromium]
    B --> P[Resources]
    B --> Q[Prompts]

    style B fill:#4B8BBE,color:#fff
    style N fill:#FF6B35,color:#fff
    style O fill:#2496ED,color:#fff

The server uses a single-module architecture:

  • FastMCP v3 handles MCP protocol negotiation, transport, tool/resource/prompt registration, and message routing

  • Lifespan-managed AsyncWebCrawler starts a headless Chromium browser once at server startup and shares it across all tool invocations, then shuts it down cleanly on exit

  • 4 tool functions decorated with @mcp.tool() define the canonical surface

  • 2 resource functions decorated with @mcp.resource() return JSON

  • 3 prompt functions decorated with @mcp.prompt return structured Message lists
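The lifespan-singleton pattern above can be sketched in plain asyncio. FakeCrawler is a stand-in for crawl4ai's AsyncWebCrawler so the sketch runs without a browser; in the real server, FastMCP's lifespan hook drives a context manager like this one.

```python
import asyncio
from contextlib import asynccontextmanager

class FakeCrawler:
    """Stand-in for crawl4ai's AsyncWebCrawler (no browser needed here)."""
    def __init__(self) -> None:
        self.started = False
    async def start(self) -> None:
        self.started = True   # real code: launch headless Chromium once
    async def close(self) -> None:
        self.started = False  # real code: shut Chromium down cleanly

@asynccontextmanager
async def lifespan():
    crawler = FakeCrawler()
    await crawler.start()      # start once, at server startup
    try:
        yield crawler          # every tool invocation reuses this instance
    finally:
        await crawler.close()  # clean shutdown on server exit

async def demo() -> list[bool]:
    async with lifespan() as crawler:
        inside = crawler.started
    return [inside, crawler.started]
```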

IMPORTANT

There are no intermediate manager classes or custom HTTP clients. The server delegates all crawling to crawl4ai's AsyncWebCrawler and all protocol handling to FastMCP. Only 2 runtime dependencies.


Configuration

Flag        | Default   | Description
------------|-----------|--------------------------------------
--transport | stdio     | Transport protocol: stdio or http
--host      | 127.0.0.1 | Host to bind (HTTP transport only)
--port      | 8000      | Port to bind (HTTP transport only)
--setup     | (none)    | Install Playwright browsers and exit

No environment variables are required. The server uses sensible defaults for all configuration. Crawl4AI's own environment variables (e.g., CRAWL4AI_VERBOSE) are respected if set.


Testing

# Run all tests
uv run pytest

# Coverage gate (>=90%)
uv run pytest --cov=mcp_crawl4ai --cov-report=term-missing

# Smoke tests only
uv run pytest -m smoke

# Unit tests only
uv run pytest -m unit

# Integration workflow tests
uv run pytest -m integration

# End-to-end workflow tests
uv run pytest -m e2e

# Manual live test (requires browser)
uv run python tests/manual/test_live.py
NOTE

All automated tests run in-memory using fastmcp.Client(mcp) — no browser or network required. The test suite mocks AsyncWebCrawler for fast, deterministic execution.


Contributing

See the Contributing Guide for details on setting up your development environment, coding standards, and the pull request process.

License

This project is licensed under the MIT License. See the LICENSE file for details.


MCP-Crawl4AI: Connecting AI to the Live Web
