<!DOCTYPE html>
<html lang="en">
<head>
<title>mcp-server-webcrawl | Help</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="description" content="Help section for mcp-server-webcrawl (MCP server)." />
<link rel="shortcut icon" href="../../media/static/images/mcp-server-webcrawl/favicon.b04adb6828.png"/>
<link type="text/css" rel="stylesheet" href="../../media/static/styles/css/mcp.min.b04adb6828.css" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="og:image" content="https://pragmar.com/media/static/images/mcp-server-webcrawl/og-mcp-server-webcrawl.png?202505251919" />
<meta name="og:description" content="Help section for mcp-server-webcrawl (MCP server)." />
<meta name="og:title" content="mcp-server-webcrawl | Help" />
<meta name="twitter:card" content="summary" />
</head>
<body>
<header>
<div>
<div class="constrain header__wrap">
<nav class="links">
<a href="../../index.html">Home</a>
<a href="../../mcp-server-webcrawl/help/index.html">Help</a>
<a href="https://github.com/pragmar/mcp-server-webcrawl">Github</a>
</nav>
<h1 class="header__main">
<a href="../../mcp-server-webcrawl/index.html">
<span class="accessible">mcp-server-webcrawl</span>
<img src="../../media/static/images/mcp-server-webcrawl/mcpswc.b04adb6828.svg" alt="mcp-server-webcrawl logo and visual metaphors alluding to DC adapter interchange"/>
</a>
</h1>
</div>
</div>
</header>
<main>
<div class="constrain">
<h2>What is this and what can it do?</h2>
<div class="summary">
<div>
<p>
<strong>mcp-server-webcrawl</strong> is an <a href="https://modelcontextprotocol.io/">MCP server</a>
that runs on your computer. It creates a gateway to web crawler archives so that
language models (OpenAI, Claude) can filter, analyze, and process the crawl data,
autonomously or under your direction.
Use <strong>mcp-server-webcrawl</strong> for technical inference, content management,
marketing, SEO, and more. The sky is the limit!
</p>
<p>
The server supports a variety of crawl sources, including wget mirrors and the WARC archival format.
<a href="https://interro.bot/">InterroBot</a>,
<a href="https://github.com/projectdiscovery/katana">Katana</a>, and
<a href="https://crawler.siteone.io/">SiteOne</a> crawlers are also supported.
</p>
<p>
<strong>mcp-server-webcrawl</strong> is free and <a href="https://github.com/pragmar/mcp-server-webcrawl">open source</a>.
</p>
</div>
</div>
<h2>Requirements</h2>
<div class="summary alternate">
<div>
<p>
Claude Desktop (macOS/Windows) currently has everything necessary to run
<strong>mcp-server-webcrawl</strong>. Other MCP clients have not yet been tested.
In addition to Claude Desktop, you'll need
<a href="https://www.python.org/downloads/">Python</a> (>=3.10) installed.
</p>
<p>
With Python installed, you should now have "pip" access on Terminal (macOS) or Powershell
(Windows). You can install <strong>mcp-server-webcrawl</strong> with the following command.
</p>
<pre class="summary__pip">pip install mcp-server-webcrawl</pre>
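<p>
Once installed, register the server with your MCP client. For Claude Desktop, that
means adding an entry to <code>claude_desktop_config.json</code>. The snippet below
is a minimal sketch; the crawler type and archive path are placeholders, so adjust
the <code>--crawler</code> and <code>--datasrc</code> arguments to your own setup
(the project README documents the exact options per crawler).
</p>
<pre>{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/archives/"]
    }
  }
}</pre>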
<p>
At the time of writing, OpenAI had announced MCP support, but nothing tangible has shipped yet.
Hang tight!
</p>
</div>
</div>
<h2>Underlying API</h2>
<div class="summary">
<div>
<p>The API is <em>supposed</em> to stay out of your way, and to a large degree
it can be navigated autonomously by your MCP client. Sometimes, however,
you may need to nudge the LLM toward the correct field or search strategy. The
following is the current API interface, for your reference.
</p>
<h3>webcrawl_sites</h3>
<p>This tool retrieves a list of sites (project websites or crawl directories).</p>
<table class="tool">
<thead>
<tr><th><p>Parameter</p></th><th><p>Type</p></th><th><p>Required</p></th><th><p>Description</p></th></tr>
</thead>
<tbody>
<tr><td><p>ids</p></td><td><p>array&lt;int&gt;</p></td><td><p>No</p></td><td><p>List of project IDs to retrieve. Leave empty for all projects.</p></td></tr>
<tr><td><p>fields</p></td><td><p>array&lt;string&gt;</p></td><td><p>No</p></td><td><p>List of additional fields to include beyond defaults (id, url). Empty list means default fields only. Options include created (ISO 8601), modified (ISO 8601), and norobots (str).</p></td></tr>
</tbody>
</table>
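<p>
For reference, a typical <code>webcrawl_sites</code> request generated by the LLM
carries JSON arguments such as the hypothetical example below, which retrieves a
single project along with its crawl timestamps.
</p>
<pre>{
  "ids": [1],
  "fields": ["created", "modified"]
}</pre>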
<h3>webcrawl_search</h3>
<p>This tool searches for resources (webpages, CSS, images, etc.) across projects and retrieves specified fields.</p>
<table class="tool">
<thead>
<tr><th><p>Parameter</p></th><th><p>Type</p></th><th><p>Required</p></th><th><p>Description</p></th></tr>
</thead>
<tbody>
<tr><td><p>sites</p></td><td><p>array&lt;int&gt;</p></td><td><p>No</p></td><td><p>Optional list of project IDs to filter search results to specific sites. In most scenarios, you'd filter to only one site.</p></td></tr>
<tr><td><p>query</p></td><td><p>string</p></td><td><p>No</p></td><td>
<p>Fulltext search query string. Leave empty to return all resources when filtering on other fields for better
precision. Supports fulltext and boolean operators (AND, OR, NOT), quoted phrases, and
suffix wildcards, but not prefix wildcards. See below for complete boolean and field search capabilities.</p></td></tr>
<!--
<tr><td><p>statuses</p></td><td><p>array<int></p></td><td><p>No</p></td><td><p>Optional list of HTTP status codes to filter results. Example: [200] returns only successful resources, [404, 500] returns only resources with Not Found or Server Error.</p></td></tr>
<tr><td><p>types</p></td><td><p>array<string></p></td><td><p>No</p></td><td><p>Optional filter for specific resource types. Allowed values include: headers, content, size, name, time.</p></td></tr>
-->
<tr><td><p>fields</p></td><td><p>array&lt;string&gt;</p></td><td><p>No</p></td><td>
<p>List of additional fields to include beyond defaults (modified, created). Empty list means default fields only. The content field can lead to large results and should be used with a small <code>limit</code>.</p></td></tr>
<tr><td><p>sort</p></td><td><p>string</p></td><td><p>No</p></td><td><p>Sort order for results. Prefixed with + for ascending, - for descending. ? is a special option for random sort, useful in statistical sampling. Options include: +id, -id, +url, -url, +status, -status, ?.</p></td></tr>
<tr><td><p>limit</p></td><td><p>integer</p></td><td><p>No</p></td><td><p>Maximum number of results to return. Default is 20, max is 100.</p></td></tr>
<tr><td><p>offset</p></td><td><p>integer</p></td><td><p>No</p></td><td><p>Number of results to skip for pagination. Default is 0.</p></td></tr>
<tr><td><p>extras</p></td><td><p>array&lt;string&gt;</p></td><td><p>No</p></td><td>
<p>Optional array of extra features to include in results. Options include markdown, snippets, and thumbnails (see the extras table below).</p>
</td></tr>
</tbody>
</table>
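<p>
Putting the parameters together, a <code>webcrawl_search</code> request might carry
arguments like the hypothetical example below: filter to a single site, match
successful HTML pages, sort by URL, and cap the result count.
</p>
<pre>{
  "sites": [1],
  "query": "type: html AND status: 200",
  "sort": "+url",
  "limit": 10
}</pre>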
<h3>Crawler Features Support</h3>
<p>API support, by parameter, across crawler type.</p>
<table class="featureset">
<thead>
<tr><th>Parameter</th><th>wget</th><th>WARC</th><th>InterroBot</th><th>Katana</th><th>SiteOne</th></tr>
</thead>
<tbody>
<tr><td>Sites/ids</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Sites/fields</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/ids</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/sites</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/query</td><td>①</td><td>✔</td><td>✔</td><td>✔</td><td>①</td></tr>
<tr><td>Search/fields</td><td>②</td><td>✔</td><td>✔</td><td>✔</td><td>②</td></tr>
<tr><td>Search/sort</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/limit</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/offset</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>Search/extras</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
</tbody>
</table>
<h3>Crawler Field Support</h3>
<p>API support, by field, across crawler type.</p>
<table class="featureset">
<thead>
<tr><th>Parameter</th><th>wget</th><th>WARC</th><th>InterroBot</th><th>Katana</th><th>SiteOne</th></tr>
</thead>
<tbody>
<tr><td>id</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>url</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>type</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>status</td><td>③</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>size</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
<tr><td>headers</td><td></td><td>✔</td><td>✔</td><td>✔</td><td></td></tr>
<tr><td>content</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td></tr>
</tbody>
</table>
<p>
①② The wget and SiteOne crawler implementations do not support field-searchable HTTP
headers, which limits query and fields support.
③ wget (--mirror) does not index HTTP status beyond 200 OK, as HTTP errors are not
saved to disk. When used in WARC mode (as opposed to a simple mirror), wget is capable
of collecting both HTTP headers and status.
</p>
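<p>
In practical terms, if you want headers and status from wget, crawl in WARC mode.
Both invocations below use standard wget flags; the URL is a placeholder.
</p>
<pre># mirror mode: files saved to disk, HTTP headers and error statuses lost
wget --mirror https://example.com

# WARC mode: complete HTTP exchanges preserved, headers and status included
wget --mirror --warc-file=example https://example.com</pre>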
<p>
Crawlers have strengths and weaknesses; judge them on how well they fit your
needs. Don't worry too much about field support. Unless you are doing specialized
web development work, you probably don't need HTTP headers. All supported crawlers
provide fulltext boolean search across the crawl data.
</p>
<h3>Boolean Search Syntax</h3>
<p>The query engine supports field-specific (<code>field: value</code>) searches and complex boolean
expressions. Fulltext is supported as a combination of the url, content, and headers fields.</p>
<p>
While the API interface is designed to be consumed by the LLM directly, it can be helpful
to familiarize yourself with the search syntax. Searches generated by the LLM are
inspectable, but generally collapsed in the UI. If you need to see the query, expand
the MCP collapsible.
</p>
<table class="featureset featureset2">
<thead>
<tr><th>Query Example</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>privacy</td><td>fulltext single keyword match</td></tr>
<tr><td>"privacy policy"</td><td>fulltext match exact phrase</td></tr>
<tr><td>boundar*</td><td>fulltext wildcard matches results starting with <em>boundar</em> (boundary, boundaries)</td></tr>
<tr><td>id: 12345</td><td>id field matches a specific resource by ID</td></tr>
<tr><td>url: example.com/*</td><td>url field matches results with URL containing example.com/</td></tr>
<tr><td>type: html</td><td>type field matches for HTML pages only</td></tr>
<tr><td>status: 200</td><td>status field matches specific HTTP status codes (equal to 200)</td></tr>
<tr><td>status: >=400</td><td>status field matches specific HTTP status code (greater than or equal to 400)</td></tr>
<tr><td>content: h1</td><td>content field matches content (HTTP response body, often, but not always HTML)</td></tr>
<tr><td>headers: text/xml</td><td>headers field matches HTTP response headers</td></tr>
<tr><td>privacy AND policy</td><td>fulltext matches both</td></tr>
<tr><td>privacy OR policy</td><td>fulltext matches either</td></tr>
<tr><td>policy NOT privacy</td><td>fulltext matches policies not containing privacy</td></tr>
<tr><td>(login OR signin) AND form</td><td>fulltext matches either login or signin, together with form</td></tr>
<tr><td>type: html AND status: 200</td><td>field search matches only HTML pages with HTTP success status</td></tr>
</tbody>
</table>
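<p>
Field and boolean syntax compose. As a hypothetical example (assuming grouping works
across field searches as it does in fulltext), the following query narrows to HTML
pages that either returned an error status or mention "page not found" in the body.
</p>
<pre>type: html AND (status: >=400 OR content: "page not found")</pre>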
<h3>Field Search Definitions</h3>
<p>
Field search provides search precision, allowing you to specify which columns of the search index to filter.
Rather than searching the entire content, you can restrict your query to specific attributes like URLs,
headers, or content body. This approach improves efficiency when looking for
specific attributes or patterns within crawl data.
</p>
<table class="featureset featureset2">
<thead>
<tr><th>Field</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>id</td><td>database ID</td></tr>
<tr><td>url</td><td>resource URL</td></tr>
<tr><td>type</td><td>enumerated list of types (see types table)</td></tr>
<tr><td>status</td><td>HTTP response codes</td></tr>
<tr><td>headers</td><td>HTTP response headers</td></tr>
<tr><td>content</td><td>HTTP body—HTML, CSS, JS, and more</td></tr>
</tbody>
</table>
<h3>Content Types</h3>
<p>
Crawls contain a multitude of resource types beyond HTML pages. The <code>type:</code> field search
allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries.
For example, you might search for <code>type: html NOT content: login</code>
to find pages without "login," or <code>type: img</code> to analyze image resources. The table below lists all
supported content types in the search system.
</p>
<table class="featureset featureset2">
<thead>
<tr><th>Type</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>html</td><td>webpages</td></tr>
<tr><td>iframe</td><td>iframes</td></tr>
<tr><td>img</td><td>web images</td></tr>
<tr><td>audio</td><td>web audio files</td></tr>
<tr><td>video</td><td>web video files</td></tr>
<tr><td>font</td><td>web font files</td></tr>
<tr><td>style</td><td>CSS stylesheets</td></tr>
<tr><td>script</td><td>JavaScript files</td></tr>
<tr><td>rss</td><td>RSS syndication feeds</td></tr>
<tr><td>text</td><td>plain text content</td></tr>
<tr><td>pdf</td><td>PDF files</td></tr>
<tr><td>doc</td><td>MS Word documents</td></tr>
<tr><td>other</td><td>uncategorized</td></tr>
</tbody>
</table>
<h3>Extras</h3>
<p>
The <code>extras</code> parameter provides additional processing options,
transforming result data (markdown, snippets), or connecting the LLM to external data (thumbnails).
These options can be combined as needed to achieve the desired result format.
</p>
<table class="featureset featureset2">
<thead>
<tr><th>Extra</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>thumbnails</td><td>Generates base64 encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using <code>type: img</code> in queries. SVG is not supported.</td></tr>
<tr><td>markdown</td><td>Provides the HTML content field as concise markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using <code>type: html</code> in queries.</td></tr>
<tr><td>snippets</td><td>Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or markdown extra), it can provide an efficient means of refining a search without pulling down the complete page contents. Also great for rendering old school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based, crawled file.</td></tr>
</tbody>
</table>
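<p>
As a hypothetical example, an image-analysis request might pair a <code>type: img</code>
query with the thumbnails extra, keeping the limit small since encoded images are
token-heavy.
</p>
<pre>{
  "sites": [1],
  "query": "type: img",
  "extras": ["thumbnails"],
  "limit": 5
}</pre>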
</div>
</div>
<div class="tabbed__visualization">
<img src="../../media/static/images/mcp-server-webcrawl/netwww.b04adb6828.svg" alt="Abstraction of LLM clients (Claude and OpenAI) communicating with a website archive"/>
</div>
</div>
</main>
<footer>
<nav class="pragmar">
<div class="pragmar__also">Software by <a href="../../index.html">Pragmar</a></div>
<div class="pragmar__products__wrap">
<div class="pragmar__products">
<a class="pragmar__product interrobot" href="https://interro.bot/?utm_source=pragmar.com">
<img src="../../media/static/images/home/interrobot.b04adb6828.png" alt="InterroBot icon"/>
<div><strong>InterroBot</strong>.
Web crawler and analyzer. Free/paid.</div>
</a>
<a class="pragmar__product appstat" href="../../appstat/index.html">
<img src="../../media/static/images/home/appstat.b04adb6828.png" alt="appstat icon"/>
<div><strong>appstat</strong>.
Windows process monitor. Free.</div>
</a>
<a class="pragmar__product moffitor" href="../../moffitor/index.html">
<img src="../../media/static/images/home/moffitor.b04adb6828.png" alt="Moffitor icon"/>
<div><strong>Moffitor</strong>.
One-click monitor sleep. Free.</div>
</a>
<a class="pragmar__product qbit" href="../../qbit/index.html">
<img src="../../media/static/images/home/qbit.b04adb6828.png" alt="Qbit icon"/>
<div><strong>Qbit</strong>.
Skybox generator for game devs. Free/paid.</div>
</a>
</div>
</div>
</nav>
</footer>
<script src="../../media/static/scripts/js/main.min.b04adb6828.js"></script>
</body>
</html>