
websearch-mcp

An MCP server that provides web search and page fetching tools for AI agents. Uses SearXNG for search, Crawl4AI for content extraction, and any OpenAI-compatible LLM for server-side synthesis.

Prerequisites

  • Python 3.12+

  • SearXNG instance with JSON format enabled (search.formats: [json] in settings.yml)

  • OpenAI-compatible LLM endpoint (OpenAI, Ollama, vLLM, LiteLLM, etc.)
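The JSON-format requirement corresponds to a snippet like the following in SearXNG's settings.yml (a minimal sketch; other keys in your settings file are unaffected):

```yaml
search:
  formats:
    - html
    - json
```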

Installation

```shell
# Run directly from GitHub
uvx --from "git+https://github.com/<org>/websearch-mcp" websearch-mcp

# Or clone and install locally
git clone https://github.com/<org>/websearch-mcp
cd websearch-mcp
uv sync
uv run websearch-mcp
```

Tools

web_search

Search the web via SearXNG, fetch the top result pages, and synthesize an answer with the LLM.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | Search query |
| max_results | int | No | Max results (default: 10) |
| allowed_domains | string[] | No | Only include these domains |
| blocked_domains | string[] | No | Exclude these domains |
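As an illustration, an MCP client might invoke web_search with a tools/call payload like this (the query and domain values are examples, not defaults):

```json
{
  "name": "web_search",
  "arguments": {
    "query": "current LTS release of Node.js",
    "max_results": 5,
    "allowed_domains": ["nodejs.org"]
  }
}
```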

webfetch

Fetch a single URL, extract its content, and process it with the LLM.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | URL to fetch |
| prompt | string | No | Custom instruction for LLM processing |
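A webfetch call might look like this (URL and prompt are illustrative):

```json
{
  "name": "webfetch",
  "arguments": {
    "url": "https://example.com/article",
    "prompt": "Summarize the key points in three bullets"
  }
}
```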

image-description

Describe an image using a vision language model (VLM). Accepts either base64-encoded image data or an absolute filesystem path to an image file.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| image | string | Yes | Base64-encoded image data or absolute filesystem path |

Returns a JSON object with description, success status, and optional error message.

Environment Variables

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| SEARXNG_URL | Yes | | Base URL of SearXNG instance |
| LLM_BASE_URL | Yes | | OpenAI-compatible endpoint base URL |
| LLM_API_KEY | Yes | | API key for the LLM endpoint |
| LLM_MODEL | Yes | | Model name for chat completions |
| CACHE_TTL_SECONDS | No | 900 | Cache TTL in seconds (0 to disable) |
| CACHE_MAX_ENTRIES | No | 1000 | Max cache entries before LRU eviction |
| FETCH_TIMEOUT | No | 30 | Per-page fetch timeout in seconds |
| LLM_TIMEOUT | No | 60 | LLM request timeout in seconds |
| MAX_CONTENT_SIZE | No | 5242880 | Max content size in bytes (5 MB) |
| DEFAULT_MAX_RESULTS | No | 10 | Default result count for web_search |
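For local development, the required variables can be exported before starting the server. The values below mirror the Ollama example later in this README and are illustrative; point them at your own services:

```shell
# Illustrative values; adjust to your SearXNG and LLM deployments
export SEARXNG_URL="http://localhost:8888"
export LLM_BASE_URL="http://localhost:11434/v1"
export LLM_API_KEY="ollama"
export LLM_MODEL="llama3"
```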

VLM Configuration (for image-description tool)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| VLM_BASE_URL | No | LLM_BASE_URL | OpenAI-compatible endpoint for VLM |
| VLM_API_KEY | No | LLM_API_KEY | API key for VLM endpoint |
| VLM_MODEL | No | LLM_MODEL | Model name for image description |
| MAX_IMAGE_SIZE | No | 10485760 | Max image size in bytes (10 MB) |

Agent Configuration

Claude Desktop (stdio)

```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3"
      }
    }
  }
}
```

Generic MCP Config (stdio)

```json
{
  "command": "uvx",
  "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
  "env": {
    "SEARXNG_URL": "http://localhost:8888",
    "LLM_BASE_URL": "https://api.openai.com/v1",
    "LLM_API_KEY": "sk-...",
    "LLM_MODEL": "gpt-4o-mini"
  }
}
```

HTTP Transport

```shell
websearch-mcp --transport http --port 3000
```

```json
{
  "url": "http://localhost:3000/mcp"
}
```
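Over HTTP, MCP messages are JSON-RPC 2.0 requests posted to the endpoint above. An initialize request body looks roughly like this (protocolVersion and clientInfo values are illustrative; consult the MCP specification for the version your client speaks):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```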

Development

```shell
uv sync
uv run pytest tests/ -v
```

Example Usage

image-description tool

With base64-encoded image:

```python
# Using base64-encoded image data
image_b64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
result = await image_description(image_b64)
# Returns: {"description": "A small white square", "success": true, "error": null}
```
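To produce the base64 string from an image on disk, a small helper like this works (illustrative, not part of the server):

```python
import base64

def encode_image(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```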

With filesystem path:

```python
# Using an absolute filesystem path
result = await image_description("/path/to/image.png")
# Returns: {"description": "A detailed description of the image", "success": true, "error": null}
```

With Ollama (using llava or another VLM):

```json
{
  "mcpServers": {
    "websearch": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<org>/websearch-mcp", "websearch-mcp"],
      "env": {
        "SEARXNG_URL": "http://localhost:8888",
        "LLM_BASE_URL": "http://localhost:11434/v1",
        "LLM_API_KEY": "ollama",
        "LLM_MODEL": "llama3",
        "VLM_BASE_URL": "http://localhost:11434/v1",
        "VLM_API_KEY": "ollama",
        "VLM_MODEL": "llava"
      }
    }
  }
}
```