mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, and resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search your web content, and works with a variety of web crawlers: WARC, wget, InterroBot, Katana, and SiteOne.

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed from the command line via pip:

pip install mcp-server-webcrawl
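
Once installed, you can confirm the package and locate the executable from the same shell; the which step applies to macOS and Linux, and its output is useful for the macOS configuration note below.

pip show mcp-server-webcrawl    # confirm the package is installed
which mcp-server-webcrawl       # print the executable's absolute path (macOS/Linux)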

Features

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Quick MCP configuration
  • ChatGPT support coming soon

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{ "mcpServers": { "webcrawl": { "command": "mcp-server-webcrawl", "args": [varies by crawler, see below] } } }

Important Note for macOS Users

On macOS, you must use the absolute path to the mcp-server-webcrawl executable in the command field, rather than just the command name. This is different from Windows configuration.

For example:

"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",

To find the absolute path of the mcp-server-webcrawl executable on your system:

  1. Open Terminal
  2. Run which mcp-server-webcrawl
  3. Copy the full path returned and use it in your config file
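
Putting it together, a macOS configuration might look like the sketch below; the command path should be whatever which returned on your machine, and the wget crawler and datasrc path are placeholders for your own setup.

{
  "mcpServers": {
    "webcrawl": {
      "command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}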

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]

Local-Only Server

The server runs only on the client's local machine because it depends on local resources.
