
WET - Web Extended Toolkit

Server Configuration

Describes the environment variables used to configure the server. All variables are optional and have defaults.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `API_KEYS` | No | LLM API keys for SDK mode (format: `ENV_VAR:key,...`) | |
| `CACHE_DIR` | No | Data directory for cache DB, docs DB, downloads (optional) | `~/.wet-mcp` |
| `LOG_LEVEL` | No | Logging level | `INFO` |
| `WET_CACHE` | No | Enable/disable web cache (optional) | `true` |
| `LLM_MODELS` | No | LiteLLM model for media analysis (optional) | `gemini/gemini-3-flash-preview` |
| `LLM_API_KEY` | No | Custom LLM endpoint key (optional) | |
| `SEARXNG_URL` | No | External SearXNG URL (optional, used when auto-start is disabled) | `http://localhost:41592` |
| `SYNC_FOLDER` | No | Remote folder name | `wet-mcp` |
| `SYNC_REMOTE` | No | rclone remote name | `gdrive` |
| `DOCS_DB_PATH` | No | Docs database location (optional) | `~/.wet-mcp/docs.db` |
| `DOWNLOAD_DIR` | No | Media download directory (optional) | `~/.wet-mcp/downloads` |
| `GITHUB_TOKEN` | No | GitHub personal access token for library discovery (optional; raises the rate limit from 60 to 5,000 req/hr). Auto-detected from `gh auth token` if the GitHub CLI is installed. | |
| `LLM_API_BASE` | No | Custom LLM endpoint URL (optional, for SDK mode) | |
| `RERANK_MODEL` | No | LiteLLM rerank model (auto: `cohere/rerank-multilingual-v3.0` if a Cohere key is in `API_KEYS`) | |
| `RERANK_TOP_N` | No | Return top N results after reranking | `10` |
| `SYNC_ENABLED` | No | Enable rclone sync | `false` |
| `TOOL_TIMEOUT` | No | Tool execution timeout in seconds; `0` = no timeout (optional) | `120` |
| `SYNC_INTERVAL` | No | Auto-sync interval in seconds (`0` = manual) | `300` |
| `SYNC_PROVIDER` | No | rclone provider type (`drive`, `dropbox`, `s3`, etc.) | `drive` |
| `EMBEDDING_DIMS` | No | Embedding dimensions (optional; `0` = auto, 768) | `0` |
| `RERANK_API_KEY` | No | Custom rerank endpoint key (optional) | |
| `RERANK_BACKEND` | No | `litellm` or `local`. Auto: Cohere key in `API_KEYS` → `litellm`, else `local` | |
| `RERANK_ENABLED` | No | Enable reranking after search | `true` |
| `EMBEDDING_MODEL` | No | LiteLLM embedding model (optional) | |
| `RERANK_API_BASE` | No | Custom rerank endpoint URL (optional, for SDK mode) | |
| `SEARXNG_TIMEOUT` | No | SearXNG request timeout in seconds (optional) | `30` |
| `WET_AUTO_SEARXNG` | No | Auto-start embedded SearXNG subprocess | `true` |
| `WET_SEARXNG_PORT` | No | SearXNG port (optional) | `41592` |
| `EMBEDDING_API_KEY` | No | Custom embedding endpoint key (optional) | |
| `EMBEDDING_BACKEND` | No | `litellm` (cloud API) or `local` (Qwen3). Auto: `API_KEYS` set → `litellm`, else `local` | |
| `LITELLM_PROXY_KEY` | No | LiteLLM Proxy virtual key (e.g. `sk-...`) | |
| `LITELLM_PROXY_URL` | No | LiteLLM Proxy URL (e.g. `http://10.0.0.20:4000`). Enables proxy mode | |
| `EMBEDDING_API_BASE` | No | Custom embedding endpoint URL (optional, for SDK mode) | |
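As an illustration, a minimal setup overrides only the variables you need and relies on the defaults for the rest. The sketch below builds an environment for launching the server from Python; the `GEMINI_API_KEY` name and its value are placeholders, not values the server requires, and the `API_KEYS` parsing simply follows the documented `ENV_VAR:key,...` format.

```python
import os

# Hypothetical values; every variable is optional and has a default.
config = {
    "API_KEYS": "GEMINI_API_KEY:your-key-here",  # format: ENV_VAR:key,...
    "LOG_LEVEL": "DEBUG",
    "CACHE_DIR": os.path.expanduser("~/.wet-mcp"),
    "TOOL_TIMEOUT": "120",  # seconds; "0" disables the timeout
}

# Split API_KEYS into ENV_VAR -> key pairs, as the documented format implies.
pairs = dict(
    entry.split(":", 1) for entry in config["API_KEYS"].split(",") if entry
)

# Merge with the current environment, e.g. to pass to a subprocess
# that launches the server.
env = {**os.environ, **config}
```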

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

search

Search the web, academic papers, or library documentation.

  • search: Web search via SearXNG (requires query)

  • research: Academic/scientific search (requires query)

  • docs: Search library documentation with auto-indexing (requires library + query; specify language for disambiguation)

Use the help tool for full documentation.
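In MCP, a client invokes a tool through a `tools/call` JSON-RPC request. The sketch below shows what a docs-mode call to search might look like; only `library`, `query`, and `language` are named above, and the `action` selector is an assumed argument name, not a documented one.

```python
import json

# Hypothetical tools/call payload for the search tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {
            "action": "docs",                  # assumed selector name
            "library": "fastapi",              # required for docs mode
            "query": "dependency injection",   # required for all modes
            "language": "python",              # disambiguates the library
        },
    },
}

payload = json.dumps(request)  # wire form sent to the server
```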

extract

Extract content from web pages, crawl sites, or map site structure.

  • extract: Get clean content from URLs (requires urls)

  • crawl: Deep crawl from root URLs (requires urls)

  • map: Discover site structure without content (requires urls)

Use the help tool for full documentation.
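The same `tools/call` shape applies to extract; `urls` is the only parameter named in the description, so everything else here is an assumption. A map call to discover structure before a targeted extract might look like:

```python
# Hypothetical payload; "action" is an assumed selector name, and the
# URL is a placeholder. "urls" takes a list so one call can cover
# several root pages.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "extract",
        "arguments": {
            "action": "map",                       # or "extract" / "crawl"
            "urls": ["https://example.com/docs"],  # root URLs to start from
        },
    },
}
```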

media

Media discovery and download.

  • list: Scan page, return URLs + metadata

  • download: Download specific files to local storage

  • analyze: Analyze a local media file using configured LLM (requires API_KEYS)

Note: Downloading is intended for downstream analysis (e.g., passing to an LLM or vision model). The MCP server provides the raw files; the MCP client orchestrates the analysis.

Use the help tool for full documentation.
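The note above implies a list → download → analyze pipeline driven by the MCP client. A sketch of the three successive argument sets follows; the argument names, URLs, and file path are assumptions inferred from the bullet descriptions, not the server's documented schema.

```python
# Hypothetical argument sets for the three media actions: "list" scans a
# page for media URLs, "download" saves files locally (DOWNLOAD_DIR
# defaults to ~/.wet-mcp/downloads), and "analyze" requires API_KEYS.
steps = [
    ("media", {"action": "list",
               "urls": ["https://example.com/gallery"]}),
    ("media", {"action": "download",
               "urls": ["https://example.com/img/cat.jpg"]}),
    ("media", {"action": "analyze",
               "path": "~/.wet-mcp/downloads/cat.jpg"}),
]
```

The client would feed each step's result into the next: URLs from list pick what to download, and downloaded paths feed analyze.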

help

Get full documentation for a tool. Use when compressed descriptions are insufficient. Valid tool names: search, extract, media, config, help.

config

Server config and management. Actions: status|set|cache_clear|docs_reindex. Use the help tool with tool_name='config' for full documentation.
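For instance, expanding the config tool's documentation via help and then checking server status might use the following argument sets. The tool names, `tool_name='config'`, and the `action` values come from the descriptions above; treating them as flat `name`/`arguments` pairs is a sketch of the call shape, not a documented schema.

```python
# Two hypothetical tool invocations: first fetch the full config docs,
# then query current server status.
calls = [
    {"name": "help",   "arguments": {"tool_name": "config"}},
    {"name": "config", "arguments": {"action": "status"}},
]
```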

Prompts

Interactive templates invoked by user choice

| Name | Description |
| --- | --- |
| research_topic | Generate a prompt to research a topic using academic search. |
| library_docs | Generate a prompt to find library documentation. |

Resources

Contextual data attached and managed by the client

No resources.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/n24q02m/wet-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.