SearXNG MCP Server
arXiv: Allows searching for scientific papers and articles on the arXiv repository via the SearXNG interface.
Brave: Enables web searches and result retrieval through the Brave search engine.
DuckDuckGo: Enables web searches and result retrieval through the DuckDuckGo search engine.
GitHub: Allows searching for repositories and development-related content on GitHub.
Google: Enables web searches and result retrieval through the Google search engine.
Google Scholar: Provides access to scholarly literature, scientific papers, and academic articles through Google Scholar.
PyPI: Allows searching for Python packages and development content on the Python Package Index (PyPI).
Qwant: Enables web searches and result retrieval through the Qwant search engine.
Reddit: Facilitates searching for posts, profiles, and social content on the Reddit platform.
SearXNG: The primary service integration, providing aggregated search capabilities across over 130 engines and multiple categories.
Startpage: Enables web searches and result retrieval through the Startpage search engine.
Vimeo: Allows searching for and retrieving video content hosted on Vimeo.
Wikidata: Provides access to structured data and knowledge from the Wikidata repository.
Wikipedia: Allows searching for and retrieving encyclopedic information from Wikipedia.
YouTube: Allows searching for and retrieving video content and information from YouTube.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@SearXNG MCP Server search for recent news and scientific papers about generative AI".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
SearXNG MCP Server
A Model Context Protocol (MCP) server that provides web search capabilities by integrating with a SearXNG instance.
Features
Web Search: Perform powerful aggregated searches across multiple engines.
Discovery: Programmatically retrieve available categories and engines.
Stateless HTTP: Compatible with any standard JSON-RPC client.
Flexible Configuration: Supports environment variables and command-line arguments.
Example compose.yml to run SearXNG together with the MCP server
services:
  searxng:
    image: searxng/searxng:latest
    ports:
      - 8080:8080
    volumes:
      - ./searxng/etc/:/etc/searxng/
      - ./searxng/data/:/var/cache/searxng/
    restart: always
  searxng-mcp:
    image: ghcr.io/aicrafted/searxng-mcp:latest
    restart: unless-stopped
    depends_on:
      # Ensure SearXNG starts before the MCP server
      - searxng
    environment:
      SEARXNG_URL: http://searxng:8080
      MCP_HOST: 0.0.0.0
      MCP_PORT: 32123
      MCP_TRANSPORT: "http"
    ports:
      - "32123:32123"

MCP client config
HTTP transport (recommended)
{
  "mcpServers": {
    "searxng": {
      "type": "http",
      "url": "http://localhost:32123/mcp"
    }
  }
}

SSE transport
{
  "mcpServers": {
    "searxng": {
      "type": "sse",
      "url": "http://localhost:32123/sse"
    }
  }
}

Note: SSE transport uses the /sse endpoint, not /mcp. HTTP transport uses /mcp.
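For a quick smoke test without a full MCP client, the HTTP transport can be exercised with a plain JSON-RPC POST. A minimal sketch using only the Python standard library, assuming the standard MCP tools/call method and that the search tool's argument is named "query" (check the actual schema via web_search_info, as the argument name may differ):

```python
import json
import urllib.request

def build_search_request(query: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 'tools/call' payload for the web_search tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "web_search",
            # "query" is an assumed argument name; verify with web_search_info
            "arguments": {"query": query},
        },
    }

def post_request(payload: dict, url: str = "http://localhost:32123/mcp") -> dict:
    """POST the payload to the MCP endpoint and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Requires the compose stack above to be running
    print(post_request(build_search_request("generative AI news")))
```

Note that a compliant streamable-HTTP server may require an initialize handshake first; this sketch only illustrates the wire format.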
Prerequisites for running from source
Python 3.10+
A running SearXNG instance.
Installation
Clone the repository and navigate to the directory.
Install dependencies:
pip install -r requirements.txt

Set up your .env file (optional):

SEARXNG_URL=http://your-searxng-instance:8080
MCP_PORT=32123
MCP_HOST=127.0.0.1
Usage
Run the server using uv or standard python:
python searxng_mcp.py --transport http --port 32123 --searxng http://searx.lan

Run with Docker
Build the image:
docker build -t searxng-mcp .

Run the container:

docker run -d \
  -p 32123:32123 \
  -e SEARXNG_URL=http://your-searxng-instance:8080 \
  --name searxng-mcp \
  searxng-mcp
Transport Options
stdio: Standard input/output (default for some MCP clients).
http: Stateless HTTP (streamable-http).
sse: Server-Sent Events.
Search Abilities Guide
SearXNG aggregates results from various sources. This guide outlines the capabilities available through the web_search tool.
Search Categories
Categories help refine your search by content type. Use these in the categories parameter (comma-separated).
Category | Description
general | Default web search (Google, Brave, DuckDuckGo, etc.)
images | Image search results
videos | Video content from YouTube, Vimeo, etc.
news | Recent news articles
map | Geographical and map information
it | IT-related searches (StackOverflow, GitHub, etc.)
science | Scientific papers and articles (ArXiv, Google Scholar)
files | Torrent and file searches
social media | Posts and profiles from social platforms
Supported Engines
SearXNG can query over 130 engines. Configured engines typically include:
Web: Google, Brave, DuckDuckGo, Qwant, Startpage
Knowledge: Wikipedia, Wikidata
Development: GitHub, StackOverflow, PyPI
Social: Reddit, Twitter/X
Advanced Search Parameters
categories: Filter by specific types (e.g., news, it).
engines: Force specific engines (e.g., google, wikipedia).
language: Specify search language (e.g., en, es, fr).
pageno: Navigate through multiple pages of results.
time_range: Filter by date (day, month, year).
safesearch: Control content filtering (0=None, 1=Moderate, 2=Strict).
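The parameters above combine into a single arguments dict for the web_search tool. A small sketch of a hypothetical helper (not part of the server) that validates the documented value ranges before sending:

```python
VALID_TIME_RANGES = {"day", "month", "year"}

def build_search_args(query, categories=None, engines=None, language=None,
                      pageno=1, time_range=None, safesearch=0):
    """Assemble a web_search arguments dict, validating documented ranges."""
    if time_range is not None and time_range not in VALID_TIME_RANGES:
        raise ValueError(f"time_range must be one of {VALID_TIME_RANGES}")
    if safesearch not in (0, 1, 2):
        raise ValueError("safesearch must be 0 (none), 1 (moderate) or 2 (strict)")
    if pageno < 1:
        raise ValueError("pageno starts at 1")
    args = {"query": query, "pageno": pageno, "safesearch": safesearch}
    if categories:
        # Categories and engines are passed comma-separated, e.g. "news,it"
        args["categories"] = ",".join(categories)
    if engines:
        args["engines"] = ",".join(engines)
    if language:
        args["language"] = language
    if time_range:
        args["time_range"] = time_range
    return args
```

For example, build_search_args("rust async", categories=["it"], time_range="year") produces arguments restricted to IT results from the last year.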
Programmatic Discovery
Use the web_search_info tool to dynamically retrieve the list of enabled categories and engines from your instance.
Windows Troubleshooting
localhost not reachable while Docker container is running
Symptom: http://localhost:<port>/ returns connection refused or hits the wrong service,
but curl from inside the container works fine.
Root cause: WSL2 port relay ghost
WSL2 automatically forwards ports from the Linux VM to the Windows host using wslrelay.exe.
When a process inside WSL listens on a port, WSL creates a relay bound to [::1]:<port>
(IPv6 loopback) on the Windows side.
When that WSL process stops, wslrelay.exe often does not release the port. The relay
entry stays alive as a zombie listener on [::1]:<port>.
Later, when Docker maps a container to the same host port, it binds correctly to
0.0.0.0:<port> — but [::1]:<port> is already taken by the stale relay.
On Windows, localhost resolves to ::1 (IPv6) first. So browser and curl requests to
localhost:<port> hit the dead wslrelay.exe entry instead of the Docker container,
resulting in a connection error or unexpected response.
Connecting via the explicit IPv4 address 127.0.0.1:<port> bypasses the relay and reaches
Docker correctly.
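You can inspect this resolution order directly. A small sketch using Python's standard library; getaddrinfo returns candidate addresses in the order the resolver will try them:

```python
import socket

def localhost_candidates(port: int) -> list[str]:
    """Return the addresses 'localhost:<port>' resolves to, in resolver order."""
    infos = socket.getaddrinfo("localhost", port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP
    return [sockaddr[0] for family, type_, proto, canon, sockaddr in infos]

# If '::1' appears before '127.0.0.1', clients connecting to localhost will hit
# the IPv6 loopback (and any stale wslrelay listener there) first.
print(localhost_candidates(32123))
```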
How to diagnose:
# Check what is listening on the port
netstat -ano | findstr :<port>
# Identify the processes
Get-Process -Id <pid1>,<pid2> | Select-Object Id,Name

If you see two entries for the same port — one owned by com.docker.backend and another by wslrelay — this is the problem.
Workarounds:
Option | Command | Notes
Use IPv4 directly | http://127.0.0.1:<port>/ | Immediate, no restart needed
Restart WSL | wsl --shutdown | Kills all stale relays; WSL restarts on next use
Remap Docker port | Change host port in docker-compose.yml | Avoids the conflict entirely
Permanent fix:
After wsl --shutdown, restart the Docker container. The relay will no longer exist and
localhost:<port> will work normally until the same port is reused inside WSL again.
Prevention:
If you regularly run services on the same port both in WSL and in Docker, prefer one of:
Always use Docker for that service, never WSL directly
Use different ports for WSL dev and Docker prod instances
Add an explicit 127.0.0.1:<port>:<port> binding in docker-compose.yml to force IPv4
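Applied to the compose example earlier in this document, the explicit IPv4 binding only changes the ports mapping:

```yaml
services:
  searxng-mcp:
    ports:
      # Bind only to the Windows IPv4 loopback so a stale [::1] relay
      # can never shadow the container
      - "127.0.0.1:32123:32123"
```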
Related
WSL GitHub issue tracker: search for "wslrelay port leak"
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/aicrafted/searxng-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.