Integrations
- Allows running the MCP server as a container, with configuration options for both SSE and stdio transports
- Supports integration with n8n, with special network configuration instructions for Docker environments
- Planned future integration to enable running embedding models locally for complete privacy and control
A powerful implementation of the Model Context Protocol (MCP) integrated with Crawl4AI and Supabase for providing AI agents and AI coding assistants with advanced web crawling and RAG capabilities.
With this MCP server, you can scrape anything and then use that knowledge anywhere for RAG.
The primary goal is to bring this MCP server into Archon as I evolve it to be more of a knowledge engine for AI coding assistants to build AI agents. This first version of the Crawl4AI/RAG MCP server will be improved upon greatly soon, especially making it more configurable so you can use different embedding models and run everything locally with Ollama.
Overview
This MCP server provides tools that enable AI agents to crawl websites, store content in a vector database (Supabase), and perform RAG over the crawled content. It follows the best practices for building MCP servers based on the Mem0 MCP server template I provided on my channel previously.
Vision
The Crawl4AI RAG MCP server is just the beginning. Here's where we're headed:
- Integration with Archon: Building this system directly into Archon to create a comprehensive knowledge engine for AI coding assistants to build better AI agents.
- Multiple Embedding Models: Expanding beyond OpenAI to support a variety of embedding models, including the ability to run everything locally with Ollama for complete control and privacy.
- Advanced RAG Strategies: Implementing sophisticated retrieval techniques like contextual retrieval, late chunking, and others to move beyond basic "naive lookups" and significantly enhance the power and precision of the RAG system, especially as it integrates with Archon.
- Enhanced Chunking Strategy: Implementing a Context 7-inspired chunking approach that focuses on examples and creates distinct, semantically meaningful sections for each chunk, improving retrieval precision.
- Performance Optimization: Increasing crawling and indexing speed to make it more realistic to "quickly" index new documentation to then leverage it within the same prompt in an AI coding assistant.
Features
- Smart URL Detection: Automatically detects and handles different URL types (regular webpages, sitemaps, text files)
- Recursive Crawling: Follows internal links to discover content
- Parallel Processing: Efficiently crawls multiple pages simultaneously
- Content Chunking: Intelligently splits content by headers and size for better processing
- Vector Search: Performs RAG over crawled content, optionally filtering by data source for precision
- Source Retrieval: Retrieves the list of sources available for filtering, to guide the RAG process
Tools
The server provides four essential web crawling and search tools:
- `crawl_single_page`: Quickly crawl a single web page and store its content in the vector database
- `smart_crawl_url`: Intelligently crawl a full website based on the type of URL provided (sitemap, llms-full.txt, or a regular webpage that needs to be crawled recursively)
- `get_available_sources`: Get a list of all available sources (domains) in the database
- `perform_rag_query`: Search for relevant content using semantic search with optional source filtering
Prerequisites
- Docker/Docker Desktop if running the MCP server as a container (recommended)
- Python 3.12+ if running the MCP server directly through uv
- Supabase (database for RAG)
- OpenAI API key (for generating embeddings)
Installation
Using Docker (Recommended)
- Clone this repository (example commands follow this list)
- Build the Docker image
- Create a `.env` file based on the configuration section below
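A minimal sketch of the first two steps; the repository URL, image tag, and `PORT` build argument are assumptions, so adjust them to match your setup:

```bash
# Clone the repository (URL is an assumption; substitute your fork if needed)
git clone https://github.com/coleam00/mcp-crawl4ai-rag.git
cd mcp-crawl4ai-rag

# Build the Docker image (tag and port build-arg are assumptions)
docker build -t mcp/crawl4ai-rag --build-arg PORT=8051 .
```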
Using uv directly (no Docker)
- Clone this repository (example commands follow this list)
- Install uv if you don't have it
- Create and activate a virtual environment
- Install dependencies
- Create a `.env` file based on the configuration section below
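A sketch of the equivalent steps without Docker; the repository URL and install commands are assumptions, so check the project's `pyproject.toml` for the authoritative setup:

```bash
# Clone the repository (URL is an assumption)
git clone https://github.com/coleam00/mcp-crawl4ai-rag.git
cd mcp-crawl4ai-rag

# Install uv if you don't have it
pip install uv

# Create and activate a virtual environment
uv venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install dependencies (assumes they are declared in pyproject.toml)
uv pip install -e .
crawl4ai-setup              # Crawl4AI's post-install step for browser dependencies
```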
Database Setup
Before running the server, you need to set up the database with the pgvector extension:
- Go to the SQL Editor in your Supabase dashboard (create a new project first if necessary)
- Create a new query and paste the contents of `crawled_pages.sql` (an illustrative sketch of what that script sets up follows this list)
- Run the query to create the necessary tables and functions
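The actual schema lives in `crawled_pages.sql`; as a rough illustration only (the table layout and dimensions below are assumptions, not the file's contents), the script enables pgvector and creates a table to hold chunked pages and their embeddings:

```sql
-- Illustrative only: enable pgvector and store chunk embeddings
create extension if not exists vector;

create table if not exists crawled_pages (
    id bigserial primary key,
    url text not null,
    chunk_number integer not null,
    content text not null,
    metadata jsonb,
    embedding vector(1536)  -- assumed OpenAI embedding dimension
);
```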
Configuration
Create a `.env` file in the project root with the following variables:
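An illustrative `.env`; the variable names here are assumptions based on a typical setup, so treat the repository's own example file as authoritative:

```bash
# MCP server settings (assumed names)
HOST=0.0.0.0
PORT=8051
TRANSPORT=sse

# OpenAI (used for generating embeddings)
OPENAI_API_KEY=your-openai-api-key

# Supabase (vector database)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your-service-role-key
```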
Running the Server
Using Docker
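For example, assuming the image tag and port from the build step above:

```bash
docker run --env-file .env -p 8051:8051 mcp/crawl4ai-rag
```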
Using Python
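For example, assuming the entry point lives at `src/crawl4ai_mcp.py` (check the repository layout):

```bash
uv run src/crawl4ai_mcp.py
```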
The server will start and listen on the configured host and port.
Integration with MCP Clients
SSE Configuration
Once you have the server running with SSE transport, you can connect to it using this configuration:
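A typical client entry, assuming the default host, port, and server name used above (all of which are placeholders you can change):

```json
{
  "mcpServers": {
    "crawl4ai-rag": {
      "transport": "sse",
      "url": "http://localhost:8051/sse"
    }
  }
}
```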
Note for Windsurf users: Use `serverUrl` instead of `url` in your configuration.
Note for Docker users: Use `host.docker.internal` instead of `localhost` if your client is running in a different container. This will apply if you are using this MCP server within n8n!
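For Windsurf, the same connection sketched with `serverUrl` (host and port remain assumptions):

```json
{
  "mcpServers": {
    "crawl4ai-rag": {
      "transport": "sse",
      "serverUrl": "http://localhost:8051/sse"
    }
  }
}
```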
Stdio Configuration
Add this server to your MCP configuration for Claude Desktop, Windsurf, or any other MCP client:
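A stdio entry might look like the following; the command, script path, and environment variable names are assumptions, so adapt them to your install:

```json
{
  "mcpServers": {
    "crawl4ai-rag": {
      "command": "python",
      "args": ["path/to/mcp-crawl4ai-rag/src/crawl4ai_mcp.py"],
      "env": {
        "TRANSPORT": "stdio",
        "OPENAI_API_KEY": "your-openai-api-key",
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_SERVICE_KEY": "your-service-role-key"
      }
    }
  }
}
```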
Docker with Stdio Configuration
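And a Docker-based stdio entry, again with an assumed image tag and variable names:

```json
{
  "mcpServers": {
    "crawl4ai-rag": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "TRANSPORT=stdio",
        "-e", "OPENAI_API_KEY",
        "-e", "SUPABASE_URL",
        "-e", "SUPABASE_SERVICE_KEY",
        "mcp/crawl4ai-rag"
      ],
      "env": {
        "TRANSPORT": "stdio",
        "OPENAI_API_KEY": "your-openai-api-key",
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_SERVICE_KEY": "your-service-role-key"
      }
    }
  }
}
```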
Building Your Own Server
This implementation provides a foundation for building more complex MCP servers with web crawling capabilities. To build your own:
- Add your own tools by creating methods with the `@mcp.tool()` decorator (a minimal sketch follows this list)
- Create your own lifespan function to add your own dependencies
- Modify the `utils.py` file for any helper functions you need
- Extend the crawling capabilities by adding more specialized crawlers
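A minimal sketch of adding a tool with FastMCP's decorator; the server name, tool, and placeholder logic are assumptions, and the real server would pull its crawler and Supabase client out of the lifespan context instead:

```python
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("my-crawl-server")

@mcp.tool()
async def count_words(ctx: Context, url: str) -> str:
    """Example tool: report on a previously crawled page.

    In a real implementation you would access shared dependencies
    via ctx.request_context.lifespan_context.
    """
    # Placeholder logic so the sketch stays self-contained
    return f"No cached content for {url} yet; crawl it first."

if __name__ == "__main__":
    mcp.run(transport="sse")
```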