Provides web search capabilities using Google Search with pagination, time filters, and site-specific filtering, plus AI Mode for AI-generated answers with sources.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@MCP WebSearch Server find recent articles about AI advancements in healthcare".

That's it! The server will respond to your query, and you can continue using it as needed.
# MCP WebSearch Server
MCP Server for web search and advanced web scraping. Uses SearchAPI.io for Google Search and AI Mode.
## Quick Start
### Option 1: Stdio (npx)
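A minimal client config sketch; `mcp-websearch-server` is an assumed package name here, so substitute the name actually published for this repo:

```json
{
  "mcpServers": {
    "websearch": {
      "command": "npx",
      "args": ["-y", "mcp-websearch-server"],
      "env": {
        "SEARCHAPI_KEY": "your-searchapi-key"
      }
    }
  }
}
```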
### Option 2: HTTP Server (Streamable HTTP)
Run as an HTTP server:
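A sketch, assuming an npm script named `start:http` and port 3000 (check `package.json` for the actual entry point):

```bash
# Assumed script name and port; adjust to the repository's package.json
SEARCHAPI_KEY=your-searchapi-key npm run start:http
# The server then accepts Streamable HTTP requests, e.g. at http://localhost:3000/mcp
```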
Then configure your MCP client with the API key in the URL or in a header:
#### Option A: API key in URL query parameter
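For example (the `api_key` parameter name is an assumption; check the server's docs):

```json
{
  "mcpServers": {
    "websearch": {
      "url": "http://localhost:3000/mcp?api_key=your-searchapi-key"
    }
  }
}
```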
#### Option B: API key in header
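For example (the `X-API-Key` header name and the `headers` field are assumptions; the exact config shape varies by client):

```json
{
  "mcpServers": {
    "websearch": {
      "url": "http://localhost:3000/mcp",
      "headers": {
        "X-API-Key": "your-searchapi-key"
      }
    }
  }
}
```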
#### Option C: Server-side default key
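In this mode the key lives only on the server; clients connect without sending one. A sketch, assuming the same `start:http` script as above:

```bash
# The server falls back to this default key when a request carries none
export SEARCHAPI_KEY=your-searchapi-key
npm run start:http
```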
The server implements the MCP Streamable HTTP transport (spec 2025-03-26) with:

- Session management via `Mcp-Session-Id` headers
- SSE streaming for server-initiated messages
- Resumability support via `Last-Event-ID`
- Per-user API key support via query param or header
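For orientation, a session under that spec proceeds roughly as follows: the client POSTs `initialize`, the server assigns a session ID in the response, and the client echoes it (plus `Last-Event-ID` when resuming an SSE stream) on later requests. An illustrative trace, with made-up values and trimmed bodies:

```http
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream

{"jsonrpc":"2.0","id":1,"method":"initialize","params":{ ... }}

HTTP/1.1 200 OK
Mcp-Session-Id: 3f2a9c1e
Content-Type: text/event-stream

GET /mcp HTTP/1.1
Mcp-Session-Id: 3f2a9c1e
Last-Event-ID: 42
Accept: text/event-stream
```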
Get your API key at [searchapi.io](https://www.searchapi.io).
## Tools

| Tool | Description |
|------|-------------|
| `web_search` | Google Search with pagination, time filter, site filter |
| `ai_search` | Google AI Mode - get AI-generated answers with sources |
| `web_scrape` | Advanced scraper with multiple extract modes (text/markdown/structured) |
| `get_links` | Extract links from a webpage with optional filter |
| `scrape_multiple` | Scrape up to 5 URLs at once |
## Parameters
### web_search

- `query` - Search query
- `num_results` - Number of results (1-20, default 10)
- `page` - Page number for pagination (1-10)
- `time_period` - Filter: last_hour, last_day, last_week, last_month, last_year
- `site` - Limit to specific site (e.g., "github.com")
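As a concrete illustration, an MCP `tools/call` request for this tool could look like the following (values are examples only):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "AI advancements in healthcare",
      "num_results": 5,
      "time_period": "last_month"
    }
  }
}
```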
### ai_search

- `query` - Question or topic
- `image_url` - Optional image URL for visual questions
- `location` - Location for local queries
### web_scrape

- `url` - URL to scrape
- `selector` - CSS selector (optional)
- `extract_mode` - text, markdown, or structured
- `include_links` - Include links in output
- `max_length` - Max content length (1000-50000)
### get_links

- `url` - URL to extract links from
- `filter` - Text filter for URLs/anchors
### scrape_multiple

- `urls` - Array of URLs (max 5)
- `max_per_page` - Max content per page (500-5000)
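For the batch tool, `arguments` carry an array rather than a single URL; the `name`/`arguments` portion of a `tools/call` request might look like this (illustrative values):

```json
{
  "name": "scrape_multiple",
  "arguments": {
    "urls": ["https://example.com/docs", "https://example.com/blog"],
    "max_per_page": 2000
  }
}
```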
## Deployment

### PM2 (Production)

1. Edit `ecosystem.config.js` and set your `SEARCHAPI_KEY`
2. Run: `npm run pm2:start`
3. Check logs: `npm run pm2:logs`
Or manually:
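The underlying PM2 commands, if you prefer to run them directly:

```bash
pm2 start ecosystem.config.js   # starts the server with the key from the config file
pm2 logs                        # tail stdout/stderr
pm2 restart all                 # restart after config changes
```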
### Docker
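No Dockerfile specifics are given here, so the following is a sketch assuming an image built from the repo root and the server listening on port 3000:

```bash
docker build -t mcp-websearch .
docker run -d -p 3000:3000 -e SEARCHAPI_KEY=your-searchapi-key mcp-websearch
```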
## Roadmap

- Add Serper.dev provider support
- Add SerpAPI provider support
- Provider auto-fallback
## License
MIT