WET - Web Extended Toolkit MCP Server
mcp-name: io.github.n24q02m/wet-mcp
Open-source MCP Server for web search, content extraction, library docs & multimodal analysis.
Features
Web Search - Search via embedded SearXNG (metasearch: Google, Bing, DuckDuckGo, Brave)
Academic Research - Search Google Scholar, Semantic Scholar, arXiv, PubMed, CrossRef, BASE
Library Docs - Auto-discover and index documentation with FTS5 hybrid search
Content Extract - Extract clean content (Markdown/Text)
Deep Crawl - Crawl multiple pages from a root URL with depth control
Site Map - Discover website URL structure
Media - List and download images, videos, audio files
Anti-bot - Stealth mode bypasses Cloudflare, Medium, LinkedIn, Twitter
Local Cache - TTL-based caching for all web operations
Docs Sync - Sync indexed docs across machines via rclone
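The TTL-based local cache above can be sketched as follows. This is a minimal, illustrative Python sketch with an assumed schema; it is not wet-mcp's actual cache implementation or table layout.

```python
import sqlite3
import time


class WebCache:
    """Minimal sketch of a TTL-based SQLite web cache (illustrative only)."""

    def __init__(self, path: str = ":memory:", ttl: float = 3600.0):
        self.ttl = ttl
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache "
            "(url TEXT PRIMARY KEY, body TEXT, fetched_at REAL)"
        )

    def get(self, url: str):
        """Return the cached body, or None if missing or older than the TTL."""
        row = self.db.execute(
            "SELECT body, fetched_at FROM cache WHERE url = ?", (url,)
        ).fetchone()
        if row and time.time() - row[1] < self.ttl:
            return row[0]
        return None

    def put(self, url: str, body: str) -> None:
        """Store (or refresh) a cached body with the current timestamp."""
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (url, body, time.time()),
        )
        self.db.commit()
```

Any fetch path can then check `get(url)` first and only hit the network on a miss or expiry.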
Related MCP server: Mnemo - Persistent AI Memory
Quick Start
Prerequisites
Python 3.13 (required -- Python 3.14+ is not supported due to SearXNG incompatibility)
Warning: You must specify `--python 3.13` when using `uvx`. Without it, `uvx` may pick Python 3.14+, which causes SearXNG search to fail silently.
On first run, the server automatically installs SearXNG and Playwright Chromium, then starts the embedded search engine.
The recommended way to run this server is via `uvx`:
uvx --python 3.13 wet-mcp@latest
Alternatively, you can use `pipx run --python python3.13 wet-mcp`.
Option 1: uvx (Recommended)
{
"mcpServers": {
"wet": {
"command": "uvx",
"args": ["--python", "3.13", "wet-mcp@latest"],
"env": {
// -- optional: LiteLLM Proxy (production, selfhosted gateway)
// "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
// "LITELLM_PROXY_KEY": "sk-your-virtual-key",
// -- optional: cloud embedding (Gemini > OpenAI > Cohere) + media analysis
// -- without this, uses built-in local Qwen3-Embedding-0.6B + Qwen3-Reranker-0.6B (ONNX, CPU)
// -- first run downloads ~570MB model, cached for subsequent runs
"API_KEYS": "GOOGLE_API_KEY:AIza...",
// -- optional: custom endpoints (e.g. modalcom-ai-workers on Modal.com)
// "EMBEDDING_API_BASE": "https://your-worker.modal.run",
// "EMBEDDING_API_KEY": "your-key",
// "RERANK_API_BASE": "https://your-worker.modal.run",
// "RERANK_API_KEY": "your-key",
// -- optional: higher rate limits for docs discovery (60 -> 5000 req/hr)
"GITHUB_TOKEN": "ghp_...",
// -- optional: sync indexed docs across machines via rclone
// -- on first sync, a browser opens for OAuth (auto, no manual setup)
"SYNC_ENABLED": "true", // optional, default: false
"SYNC_INTERVAL": "300" // optional, auto-sync every 5min (0 = manual only)
// "SYNC_REMOTE": "gdrive", // optional, default: gdrive
// "SYNC_PROVIDER": "drive", // optional, default: drive (Google Drive)
}
}
}
}
Option 2: Docker
{
"mcpServers": {
"wet": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"--name", "mcp-wet",
"-v", "wet-data:/data", // persists cached web pages, indexed docs, and downloads
"-e", "LITELLM_PROXY_URL", // optional: pass-through from env below
"-e", "LITELLM_PROXY_KEY", // optional: pass-through from env below
"-e", "API_KEYS", // optional: pass-through from env below
"-e", "EMBEDDING_API_BASE", // optional: pass-through from env below
"-e", "EMBEDDING_API_KEY", // optional: pass-through from env below
"-e", "RERANK_API_BASE", // optional: pass-through from env below
"-e", "RERANK_API_KEY", // optional: pass-through from env below
"-e", "GITHUB_TOKEN", // optional: pass-through from env below
"-e", "SYNC_ENABLED", // optional: pass-through from env below
"-e", "SYNC_INTERVAL", // optional: pass-through from env below
"n24q02m/wet-mcp:latest"
],
"env": {
// -- optional: LiteLLM Proxy (production, selfhosted gateway)
// "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
// "LITELLM_PROXY_KEY": "sk-your-virtual-key",
// -- optional: cloud embedding (Gemini > OpenAI > Cohere) + media analysis
// -- without this, uses built-in local Qwen3-Embedding-0.6B + Qwen3-Reranker-0.6B (ONNX, CPU)
"API_KEYS": "GOOGLE_API_KEY:AIza...",
// -- optional: custom endpoints (e.g. modalcom-ai-workers on Modal.com)
// "EMBEDDING_API_BASE": "https://your-worker.modal.run",
// "EMBEDDING_API_KEY": "your-key",
// "RERANK_API_BASE": "https://your-worker.modal.run",
// "RERANK_API_KEY": "your-key",
// -- optional: higher rate limits for docs discovery (60 -> 5000 req/hr)
// -- auto-detected from `gh auth token` if GitHub CLI is installed
// "GITHUB_TOKEN": "ghp_...",
// -- optional: sync indexed docs across machines via rclone
"SYNC_ENABLED": "true", // optional, default: false
"SYNC_INTERVAL": "300" // optional, auto-sync every 5min (0 = manual only)
}
}
}
}
Pre-install (optional)
Pre-download all dependencies before adding to your MCP client config. This avoids slow first-run startup:
# Pre-download SearXNG, Playwright, embedding model (~570MB), and reranker model (~570MB)
uvx --python 3.13 wet-mcp warmup
# With cloud embedding (validates API key, skips local download if cloud works)
API_KEYS="GOOGLE_API_KEY:AIza..." uvx --python 3.13 wet-mcp warmup
Sync setup
Sync is fully automatic. Just set SYNC_ENABLED=true and the server handles everything:
First sync: rclone is auto-downloaded, a browser opens for OAuth authentication
Token saved: OAuth token is stored locally at ~/.wet-mcp/tokens/ (600 permissions)
Subsequent runs: Token is loaded automatically -- no manual steps needed
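The token-storage step above can be sketched in Python. This is illustrative only: the helper name and file layout are assumptions, not wet-mcp's actual code; the point is the owner-only (0600) permissions.

```python
from pathlib import Path


def save_token(token: str, path: Path) -> None:
    """Persist an OAuth token with owner-only (0600) permissions (illustrative sketch)."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(token)
    path.chmod(0o600)  # read/write for the file owner only


# Example usage with a temporary directory; the server's real location is ~/.wet-mcp/tokens/
import tempfile

tok_path = Path(tempfile.mkdtemp()) / "tokens" / "gdrive.json"
save_token("example-token", tok_path)
```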
For non-Google Drive providers, set SYNC_PROVIDER and SYNC_REMOTE:
{
"SYNC_ENABLED": "true",
"SYNC_PROVIDER": "dropbox", // rclone provider type
"SYNC_REMOTE": "dropbox" // rclone remote name
}
Advanced: You can also run uvx --python 3.13 wet-mcp setup-sync drive to pre-authenticate before first use, but this is optional.
Tools
| Tool | Actions | Description |
| search | search, research, docs | Web search, academic research, library documentation |
| extract | extract, crawl, map | Content extraction, deep crawling, site mapping |
| media | list, download, analyze | Media discovery & download |
| config | status, set, cache_clear, docs_reindex | Server configuration and cache management |
| help | - | Full documentation for any tool |
Usage Examples
// search tool
{"action": "search", "query": "python web scraping", "max_results": 10}
{"action": "research", "query": "transformer attention mechanism"}
{"action": "docs", "query": "how to create routes", "library": "fastapi"}
{"action": "docs", "query": "dependency injection", "library": "spring-boot", "language": "java"}
// extract tool
{"action": "extract", "urls": ["https://example.com"]}
{"action": "crawl", "urls": ["https://docs.python.org"], "depth": 2}
{"action": "map", "urls": ["https://example.com"]}
// media tool
{"action": "list", "url": "https://github.com/python/cpython"}
{"action": "download", "media_urls": ["https://example.com/image.png"]}
Configuration
| Variable | Default | Description |
|  |  | Auto-start embedded SearXNG subprocess |
|  |  | SearXNG port (optional) |
|  | - | External SearXNG URL (optional, when auto-start disabled) |
|  |  | SearXNG request timeout in seconds (optional) |
| LITELLM_PROXY_URL | - | LiteLLM Proxy URL (e.g. http://10.0.0.20:4000) |
| LITELLM_PROXY_KEY | - | LiteLLM Proxy virtual key (e.g. sk-your-virtual-key) |
| API_KEYS | - | LLM API keys for SDK mode (format: GOOGLE_API_KEY:AIza...,OPENAI_API_KEY:sk-...) |
|  |  | LiteLLM model for media analysis (optional) |
|  | - | Custom LLM endpoint URL (optional, for SDK mode) |
|  | - | Custom LLM endpoint key (optional) |
| EMBEDDING_API_BASE | - | Custom embedding endpoint URL (optional, for SDK mode) |
| EMBEDDING_API_KEY | - | Custom embedding endpoint key (optional) |
| RERANK_API_BASE | - | Custom rerank endpoint URL (optional, for SDK mode) |
| RERANK_API_KEY | - | Custom rerank endpoint key (optional) |
| EMBEDDING_BACKEND | (auto-detect) | Embedding backend (local forces built-in local models) |
|  | (auto-detect) | LiteLLM embedding model (optional) |
|  | 768 | Embedding dimensions (optional) |
|  |  | Enable reranking after search |
|  | (auto-detect) | Rerank backend (optional) |
|  | (auto-detect) | LiteLLM rerank model (auto: cohere/rerank-multilingual-v3.0 when a Cohere key is present) |
|  |  | Return top N results after reranking |
|  |  | Data directory for cache DB, docs DB, downloads (optional) |
|  |  | Docs database location (optional) |
|  |  | Media download directory (optional) |
|  |  | Tool execution timeout in seconds, 0 = no timeout (optional) |
|  |  | Enable/disable web cache (optional) |
| GITHUB_TOKEN | - | GitHub personal access token for library discovery (optional, increases rate limit from 60 to 5000 req/hr). Auto-detected from gh auth token if GitHub CLI is installed |
| SYNC_ENABLED | false | Enable rclone sync |
| SYNC_PROVIDER | drive | rclone provider type (drive, dropbox, s3, etc.) |
| SYNC_REMOTE | gdrive | rclone remote name |
|  |  | Remote folder name |
| SYNC_INTERVAL |  | Auto-sync interval in seconds (0 = manual) |
|  |  | Logging level |
Embedding & Reranking
Both embedding and reranking are always available — local models are built-in and require no configuration.
Embedding: Default local Qwen3-Embedding-0.6B. Set API_KEYS to upgrade to cloud (Gemini > OpenAI > Cohere), with automatic local fallback if cloud fails.
Reranking: Default local Qwen3-Reranker-0.6B. If COHERE_API_KEY is present in API_KEYS, auto-upgrades to cloud cohere/rerank-multilingual-v3.0.
GPU auto-detection: If a GPU is available (CUDA/DirectML) and llama-cpp-python is installed, GGUF models (~480MB) are used automatically instead of ONNX (~570MB) for better performance.
All embeddings are stored at 768 dimensions (default). Switching providers never breaks the vector table.
Override with EMBEDDING_BACKEND=local to force local models even when API keys are set.
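Because API_KEYS packs multiple provider keys into one comma-separated string (see below), splitting it is straightforward. A minimal, hypothetical parsing sketch, not wet-mcp's actual parser:

```python
def parse_api_keys(raw: str) -> dict[str, str]:
    """Split an API_KEYS string like 'NAME:value,NAME:value' into a dict (illustrative)."""
    pairs = (item.split(":", 1) for item in raw.split(",") if item)
    return {name.strip(): value for name, value in pairs}


keys = parse_api_keys("GOOGLE_API_KEY:AIza...,OPENAI_API_KEY:sk-...")
```

Splitting on the first `:` only keeps key values intact even if they happen to contain colons themselves.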
API_KEYS supports multiple providers in a single string:
API_KEYS=GOOGLE_API_KEY:AIza...,OPENAI_API_KEY:sk-...,COHERE_API_KEY:co-...
LLM Configuration (3-Mode Architecture)
LLM access (for media analysis) supports 3 modes, resolved by priority:
| Priority | Mode | Config | Use case |
| 1 | Proxy | LITELLM_PROXY_URL + LITELLM_PROXY_KEY | Production (OCI VM, self-hosted gateway) |
| 2 | SDK | API_KEYS | Dev/local with direct API access |
| 3 | Local | Nothing needed | Offline, embedding/rerank only (no LLM) |
No cross-mode fallback — if proxy is configured but unreachable, calls fail (no silent fallback to direct API).
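The priority resolution above can be sketched as follows. This is an illustrative Python sketch of the rules, not the server's actual code; only the environment variable names from this README are assumed.

```python
def resolve_llm_mode(env: dict) -> str:
    """Pick the LLM mode by priority: Proxy > SDK > Local (sketch of the table above)."""
    if env.get("LITELLM_PROXY_URL") and env.get("LITELLM_PROXY_KEY"):
        return "proxy"  # no silent fallback: if the proxy is unreachable, calls fail
    if env.get("API_KEYS"):
        return "sdk"    # direct provider access with the configured keys
    return "local"      # offline: embedding/rerank only, no LLM-backed media analysis


# Proxy settings win even when API_KEYS is also set
mode = resolve_llm_mode({
    "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
    "LITELLM_PROXY_KEY": "sk-key",
    "API_KEYS": "GOOGLE_API_KEY:AIza",
})
```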
SearXNG Configuration (2-Mode)
Web search is powered by SearXNG, a privacy-respecting metasearch engine.
| Mode | Config | Description |
| Embedded (default) | auto-start enabled (default) | Auto-installs and manages SearXNG as a subprocess. Zero config needed. |
| External | external SearXNG URL set | Connects to a pre-existing SearXNG instance (e.g. Docker container, shared server). |
Embedded mode is best for local development and single-user deployments. On first run, wet-mcp automatically downloads and configures SearXNG.
External mode is recommended when:
Running in Docker (use a separate SearXNG container)
Sharing a SearXNG instance across multiple services
SearXNG is already deployed on your infrastructure
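The two-mode selection reduces to one rule: if an external URL is configured, connect to it; otherwise manage the embedded instance. A minimal, hypothetical sketch (the actual configuration variables are listed in the table above):

```python
def searxng_mode(external_url: str = "") -> tuple[str, str]:
    """Choose between embedded and external SearXNG per the two-mode table (illustrative)."""
    if external_url:
        return ("external", external_url)      # reuse an existing instance
    return ("embedded", "managed subprocess")  # auto-install and manage locally
```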
Architecture
┌─────────────────────────────────────────────────────────┐
│ MCP Client │
│ (Claude, Cursor, Windsurf) │
└─────────────────────┬───────────────────────────────────┘
│ MCP Protocol
v
┌─────────────────────────────────────────────────────────┐
│ WET MCP Server │
│ ┌──────────┐ ┌──────────┐ ┌───────┐ ┌────────┐ │
│ │ search │ │ extract │ │ media │ │ config │ │
│ │ (search, │ │(extract, │ │(list, │ │(status,│ │
│ │ research,│ │ crawl, │ │downld,│ │ set, │ │
│ │ docs) │ │ map) │ │analyz)│ │ cache) │ │
│ └──┬───┬───┘ └────┬─────┘ └──┬────┘ └────────┘ │
│ │ │ │ │ + help tool │
│ v v v v │
│ ┌──────┐ ┌──────┐ ┌──────────┐ ┌──────────┐ │
│ │SearX │ │DocsDB│ │ Crawl4AI │ │ Reranker │ │
│ │NG │ │FTS5+ │ │(Playwrgt)│ │(LiteLLM/ │ │
│ │ │ │sqlite│ │ │ │ Qwen3 │ │
│ │ │ │-vec │ │ │ │ local) │ │
│ └──────┘ └──────┘ └──────────┘ └──────────┘ │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ WebCache (SQLite, TTL) │ rclone sync (docs) │ │
│ └──────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Build from Source
git clone https://github.com/n24q02m/wet-mcp
cd wet-mcp
# Setup (requires mise: https://mise.jdx.dev/)
mise run setup
# Run
uv run wet-mcp
Docker Build
docker build -t n24q02m/wet-mcp:latest .
Requirements: Python 3.13 (not 3.14+)
Also by n24q02m
| Server | Description |
|  | Notion API for AI agents |
| Mnemo | Persistent AI memory with hybrid search |
|  | Email (IMAP/SMTP) for AI agents |
|  | Godot Engine for AI agents |
Related Projects
modalcom-ai-workers — GPU-accelerated AI workers on Modal.com (embedding, reranking)
qwen3-embed — Local embedding/reranking library used by wet-mcp
Contributing
See CONTRIBUTING.md
License
MIT - See LICENSE