mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, plus resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search your web content, and works with a variety of web crawlers: wget, WARC, InterroBot, Katana, and SiteOne.

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed on the command line via pip:

pip install mcp-server-webcrawl
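
To confirm the install, you can check the package metadata with pip; the exact output varies by platform and environment:

pip show mcp-server-webcrawl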

Features

  • Claude Desktop ready

  • Fulltext search support

  • Filter by type, status, and more

  • Multi-crawler compatible

  • Supports advanced/boolean and field searching

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{ "mcpServers": { "webcrawl": { "command": [varies by OS/env, see below], "args": [varies by crawler, see below] } } }

For step-by-step setup, refer to the Setup Guides.

Windows vs. macOS

Windows: command set to "mcp-server-webcrawl"

macOS: command set to the absolute path, i.e. the value returned by $ which mcp-server-webcrawl

For example:

"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",

To find the absolute path of the mcp-server-webcrawl executable on your system:

  1. Open Terminal

  2. Run which mcp-server-webcrawl

  3. Copy the full path returned and use it in your config file
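
For example, on a typical macOS setup the exchange might look like this (the path will vary by system):

$ which mcp-server-webcrawl
/Users/yourusername/.local/bin/mcp-server-webcrawl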

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the directory containing the root host directories. Katana separates pages and media by host; a layout such as ./archives/example.com/example.com is expected and appropriate. More complex sites expand their crawl data into additional origin host directories.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using Generate offline website)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]

Boolean Search Syntax

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.

Example Queries

Query Example                Description
privacy                      fulltext single keyword match
"privacy policy"             fulltext exact phrase match
boundar*                     fulltext wildcard, matches results starting with boundar (boundary, boundaries)
id: 12345                    id field matches a specific resource by ID
url: example.com/*           url field matches results with URLs containing example.com/
type: html                   type field matches HTML pages only
status: 200                  status field matches a specific HTTP status code (equal to 200)
status: >=400                status field matches a range of HTTP status codes (greater than or equal to 400)
content: h1                  content field matches the HTTP response body (often, but not always, HTML)
headers: text/xml            headers field matches HTTP response headers
privacy AND policy           fulltext matches both terms
privacy OR policy            fulltext matches either term
policy NOT privacy           fulltext matches policy results not containing privacy
(login OR signin) AND form   fulltext matches login or signin, together with form
type: html AND status: 200   matches only HTML pages with HTTP success status
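
Field searches and boolean operators compose. As an illustrative (hypothetical) example, an audit of broken HTML pages on a single host might combine all three:

url: example.com/* AND type: html AND status: >=400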

Field Search Definitions

Field search provides precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict a query to specific attributes such as URLs, headers, or the content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.

Field     Description
id        database ID
url       resource URL
type      enumerated list of types (see types table)
status    HTTP response codes
headers   HTTP response headers
content   HTTP body (HTML, CSS, JS, and more)

Content Types

Crawls contain a multitude of resource types beyond HTML pages. The type: field search allows filtering by broad content-type groups, which is particularly useful for filtering images without complex extension queries. For example, you might search type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.

Type     Description
html     webpages
iframe   iframes
img      web images
audio    web audio files
video    web video files
font     web font files
style    CSS stylesheets
script   JavaScript files
rss      RSS syndication feeds
text     plain text content
pdf      PDF files
doc      MS Word documents
other    uncategorized
