# websearch-mcp
An MCP server that provides web search and page fetching tools for AI agents. Uses SearXNG for search, Crawl4AI for content extraction, and any OpenAI-compatible LLM for server-side synthesis.
## Prerequisites
- Python 3.12+
- SearXNG instance with JSON format enabled (`search.formats: [json]` in `settings.yml`)
- OpenAI-compatible LLM endpoint (OpenAI, Ollama, vLLM, LiteLLM, etc.)
## Installation
```shell
# Run directly from GitHub
uvx --from "git+https://github.com/<org>/websearch-mcp" websearch-mcp

# Or clone and install locally
git clone https://github.com/<org>/websearch-mcp
cd websearch-mcp
uv sync
uv run websearch-mcp
```

## Tools
### web_search
Search the web via SearXNG, fetch top result pages, and synthesize with LLM.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| | string | Yes | Search query |
| | int | No | Max results (default: 10) |
| | string[] | No | Only include these domains |
| | string[] | No | Exclude these domains |
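The domain filters can be applied after SearXNG returns its JSON results. A minimal sketch of that filtering step (the function name and result-dict shape are illustrative, not the server's actual implementation):

```python
from urllib.parse import urlparse

def filter_results(results, include_domains=None, exclude_domains=None):
    """Keep only results whose hostname matches the include list
    (if given) and does not match the exclude list."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).hostname or ""
        # A domain matches if it is an exact match or a subdomain.
        def matches(domain):
            return host == domain or host.endswith("." + domain)
        if include_domains and not any(matches(d) for d in include_domains):
            continue
        if exclude_domains and any(matches(d) for d in exclude_domains):
            continue
        kept.append(r)
    return kept

results = [
    {"url": "https://news.spacex.com/starship"},
    {"url": "https://blog.example.com/post"},
]
filtered = filter_results(results, include_domains=["spacex.com"])
```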
### webfetch
Fetch a single URL, extract content, and process with LLM.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| | string | Yes | URL to fetch |
| | string | No | Custom instruction for LLM processing |
### image-description
Describe an image using a vision language model (VLM). Accepts either base64-encoded image data or an absolute filesystem path to an image file.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| | string | Yes | Base64-encoded image data or absolute filesystem path |
Returns a JSON object with description, success status, and optional error message.
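Because the tool accepts either form of input, the server has to distinguish a filesystem path from raw base64 before building the vision request. A sketch of that dispatch, assuming the common data-URL packaging used by OpenAI-style vision APIs (the helper name and the hardcoded PNG MIME type are illustrative, not the server's documented internals):

```python
import base64
import os

def to_image_payload(image: str) -> str:
    """Return a data URL suitable for an OpenAI-style vision message.

    Absolute paths are read from disk and base64-encoded; anything
    else is treated as already-base64-encoded image data.
    (MIME detection is simplified here: PNG is assumed.)
    """
    if os.path.isabs(image):
        with open(image, "rb") as f:
            data = base64.b64encode(f.read()).decode("ascii")
    else:
        data = image
    return f"data:image/png;base64,{data}"
```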
## Environment Variables
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `SEARXNG_URL` | Yes | — | Base URL of SearXNG instance |
| `LLM_BASE_URL` | Yes | — | OpenAI-compatible endpoint base URL |
| `LLM_API_KEY` | Yes | — | API key for the LLM endpoint |
| `LLM_MODEL` | Yes | — | Model name for chat completions |
| | No | | Cache TTL in seconds (0 to disable) |
| | No | | Max cache entries before LRU eviction |
| | No | | Per-page fetch timeout in seconds |
| | No | | LLM request timeout in seconds |
| | No | | Max content size in bytes (5MB) |
| | No | | Default result count for web_search |
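The cache settings above describe a TTL-plus-LRU-eviction policy. A minimal sketch of such a cache (an illustration of the policy, not the server's actual code; the injectable `clock` is only there to make the behavior easy to demonstrate):

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """Entries expire after `ttl` seconds; once `max_entries` is
    reached, the least recently used entry is evicted."""

    def __init__(self, ttl: float, max_entries: int, clock=time.monotonic):
        self.ttl = ttl
        self.max_entries = max_entries
        self.clock = clock
        self._data: OrderedDict = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        expires_at, value = item
        if self.clock() >= expires_at:
            del self._data[key]          # expired: drop and miss
            return None
        self._data.move_to_end(key)      # mark as recently used
        return value

    def put(self, key, value):
        if self.ttl <= 0:
            return                       # TTL of 0 disables caching
        self._data[key] = (self.clock() + self.ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used
```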
### VLM Configuration (for image-description tool)
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `VLM_BASE_URL` | No | | OpenAI-compatible endpoint for VLM |
| `VLM_API_KEY` | No | | API key for VLM endpoint |
| `VLM_MODEL` | No | | Model name for image description |
| | No | | Max image size in bytes (10MB) |
## Agent Configuration
### Claude Desktop (stdio)
```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3"
      }
    }
  }
}
```

### Generic MCP Config (stdio)
```json
{
  "command": "uvx",
  "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
  "env": {
    "SEARXNG_URL": "http://localhost:8888",
    "LLM_BASE_URL": "https://api.openai.com/v1",
    "LLM_API_KEY": "sk-...",
    "LLM_MODEL": "gpt-4o-mini"
  }
}
```

### HTTP Transport
```shell
websearch-mcp --transport http --port 3000
```

```json
{
  "url": "http://localhost:3000/mcp"
}
```

## Development
```shell
uv sync
uv run pytest tests/ -v
```

## Example Usage
### image-description tool
With base64-encoded image:
```python
# Using base64 encoded image data
image_b64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
result = await image_description(image_b64)
# Returns: {"description": "A small white square", "success": true, "error": null}
```

With filesystem path:
```python
# Using absolute filesystem path
result = await image_description("/path/to/image.png")
# Returns: {"description": "A detailed description of the image", "success": true, "error": null}
```

With Ollama (using llava or other VLM):
```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3",
        "VLM_BASE_URL": "http://localhost:11434/v1",
        "VLM_API_KEY": "ollama",
        "VLM_MODEL": "llava"
      }
    }
  }
}
```
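When the image file is not reachable from the server by absolute path, the caller has to produce the base64 argument itself. One way to do that with the standard library only (the helper name is illustrative):

```python
import base64
from pathlib import Path

def encode_image(path: str) -> str:
    """Read an image file and return its contents as base64 text,
    ready to pass to the image-description tool."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")
```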