Web Search MCP Server for use with Local LLMs
A TypeScript MCP (Model Context Protocol) server that provides comprehensive web search capabilities using direct connections (no API keys required) with multiple tools for different use cases.
Features
Multi-Engine Web Search: Prioritises Bing > Brave > DuckDuckGo for optimal reliability and performance
Full Page Content Extraction: Fetches and extracts complete page content from search results
Multiple Search Tools: Three specialised tools for different use cases
Smart Request Strategy: Switches between Playwright browsers and fast axios requests to ensure results are returned
Concurrent Processing: Extracts content from multiple pages simultaneously
How It Works
The server provides three specialised tools for different web search needs:
1. full-web-search (Main Tool)
When a comprehensive search is requested, the server uses an optimised search strategy:
Browser-based Bing Search - Primary method using a dedicated Chromium instance
Browser-based Brave Search - Secondary option using a dedicated Firefox instance
Axios DuckDuckGo Search - Final fallback using traditional HTTP
Dedicated browser isolation: Each search engine gets its own browser instance with automatic cleanup
Content extraction: Tries axios first, then falls back to browser with human behavior simulation
Concurrent processing: Extracts content from multiple pages simultaneously with timeout protection
HTTP/2 error recovery: Automatically falls back to HTTP/1.1 when protocol errors occur
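The fallback order described above can be pictured with a short sketch (illustrative only - the function and type names here are placeholders, not the project's actual code):

```typescript
// Sketch of the multi-engine fallback strategy (names are illustrative).
type SearchResult = { title: string; url: string; snippet: string };
type SearchEngine = (query: string) => Promise<SearchResult[]>;

// Engines are tried in priority order (Bing, Brave, DuckDuckGo); the first
// one that returns any results wins, otherwise the next engine is tried.
async function searchWithFallback(
  query: string,
  engines: SearchEngine[]
): Promise<SearchResult[]> {
  for (const engine of engines) {
    try {
      const results = await engine(query);
      if (results.length > 0) {
        return results;
      }
    } catch {
      // Engine failed or was blocked - fall through to the next one.
    }
  }
  return []; // Every engine failed or returned nothing.
}
```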
2. get-web-search-summaries (Lightweight Alternative)
For quick search results without full content extraction:
Performs the same optimised multi-engine search as full-web-search
Returns only the search result snippets/descriptions
Does not follow links to extract full page content
3. get-single-web-page-content (Utility Tool)
For extracting content from a specific webpage:
Takes a single URL as input
Follows the URL and extracts the main page content
Removes navigation, ads, and other non-content elements
Compatibility
This MCP server has been developed and tested with LM Studio and LibreChat. It has not been tested with other MCP clients.
Model Compatibility
Important: Prioritise using more recent models designated for tool use.
Older models (even those with tool use specified) may not work or may work erratically. This seems to be the case with Llama and Deepseek. Qwen3 and Gemma 3 currently have the best results.
✅ Works well with: Qwen3
✅ Works well with: Gemma 3
✅ Works with: Llama 3.2
✅ Works with: Recent Llama 3.1 (e.g. Llama 3.1 Swallow 8B)
✅ Works with: Recent Deepseek R1 (e.g. the 0528 release)
⚠️ May have issues with: Some versions of Llama and Deepseek R1
❌ May not work with: Older versions of Llama and Deepseek R1
Installation (Recommended)
Requirements:
Node.js 18.0.0 or higher
npm 8.0.0 or higher
Download the latest release zip file from the Releases page
Extract the zip file to a location on your system (e.g., ~/mcp-servers/web-search-mcp/)
Open a terminal in the extracted folder and run:
npm install
npx playwright install
npm run build
This will create a node_modules folder with all required dependencies, install Playwright browsers, and build the project.
Note: You must run npm install in the root of the extracted folder (not in dist/).
Configure your mcp.json to point to the extracted dist/index.js file:
Example paths:
macOS/Linux:
~/mcp-servers/web-search-mcp/dist/index.js
Windows:
C:\\mcp-servers\\web-search-mcp\\dist\\index.js
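For example, an mcp.json entry for this server might look like the following (a sketch - the server name is arbitrary, the path should point at your extracted copy, and your MCP client's exact schema may differ):

```json
{
  "mcpServers": {
    "web-search": {
      "command": "node",
      "args": ["/home/user/mcp-servers/web-search-mcp/dist/index.js"]
    }
  }
}
```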
In LibreChat, you can include the MCP server in the librechat.yaml. If you are running LibreChat in Docker, you must first mount your local directory in docker-compose.override.yml.
in docker-compose.override.yml:
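A minimal override might look like this (assuming the server was extracted to ./mcp-servers/web-search-mcp next to your LibreChat files and that your LibreChat service is named api - adjust to your setup):

```yaml
services:
  api:
    volumes:
      # Mount the extracted MCP server into the LibreChat container
      - ./mcp-servers/web-search-mcp:/app/mcp-servers/web-search-mcp
```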
in librechat.yaml:
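A corresponding librechat.yaml entry might look like this (the path must match the container mount above; the server name is arbitrary):

```yaml
mcpServers:
  web-search:
    type: stdio
    command: node
    args:
      - /app/mcp-servers/web-search-mcp/dist/index.js
```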
Troubleshooting:
If npm install fails, try updating Node.js to version 18+ and npm to version 8+
If npm run build fails, ensure you have the latest Node.js version installed
For older Node.js versions, you may need to use an older release of this project
Content Length Issues: If you experience odd behavior due to content length limits, try setting "MAX_CONTENT_LENGTH": "10000", or another value, in your mcp.json environment variables:
Environment Variables
The server supports several environment variables for configuration:
MAX_CONTENT_LENGTH: Maximum content length in characters (default: 500000)
DEFAULT_TIMEOUT: Default timeout for requests in milliseconds (default: 6000)
MAX_BROWSERS: Maximum number of browser instances to maintain (default: 3)
BROWSER_TYPES: Comma-separated list of browser types to use (default: 'chromium,firefox'; options: chromium, firefox, webkit)
BROWSER_FALLBACK_THRESHOLD: Number of axios failures before using browser fallback (default: 3)
Search Quality and Engine Selection
ENABLE_RELEVANCE_CHECKING: Enable/disable search result quality validation (default: true)
RELEVANCE_THRESHOLD: Minimum quality score for search results (0.0-1.0, default: 0.3)
FORCE_MULTI_ENGINE_SEARCH: Try all search engines and return the best results (default: false)
DEBUG_BROWSER_LIFECYCLE: Enable detailed browser lifecycle logging for debugging (default: false)
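If you run the server manually rather than through an MCP client, these can be passed as ordinary environment variables (the values below are illustrative, not recommendations); MCP clients typically pass them via an env block instead, as shown earlier:

```sh
DEFAULT_TIMEOUT=6000 \
MAX_BROWSERS=2 \
BROWSER_TYPES=chromium,firefox \
RELEVANCE_THRESHOLD=0.5 \
node dist/index.js
```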
Troubleshooting
Slow Response Times
Optimised timeouts: Default timeout reduced to 6 seconds with concurrent processing for faster results
Concurrent extraction: Content is now extracted from multiple pages simultaneously
Reduce timeouts further: Set DEFAULT_TIMEOUT=4000 for even faster responses (may reduce success rate)
Use fewer browsers: Set MAX_BROWSERS=1 to reduce memory usage
Search Failures
Check browser installation: Run npx playwright install to ensure browsers are available
Try headless mode: Ensure BROWSER_HEADLESS=true (the default) for server environments
Network restrictions: Some networks block browser automation - try a different network or a VPN
HTTP/2 issues: The server automatically handles HTTP/2 protocol errors with fallback to HTTP/1.1
Search Quality Issues
Enable quality checking: Set ENABLE_RELEVANCE_CHECKING=true (enabled by default)
Adjust quality threshold: Set RELEVANCE_THRESHOLD=0.5 for stricter quality requirements
Force multi-engine search: Set FORCE_MULTI_ENGINE_SEARCH=true to try all engines and return the best results
Memory Usage
Automatic cleanup: Browsers are automatically cleaned up after each operation to prevent memory leaks
Limit browsers: Reduce MAX_BROWSERS (default: 3)
EventEmitter warnings: Fixed - browsers are properly closed to prevent listener accumulation
Development
MCP Tools
This server provides three specialised tools for different web search needs:
1. full-web-search (Main Tool)
The most comprehensive web search tool that:
Takes a search query and optional number of results (1-10, default 5)
Performs a web search (tries Bing, then Brave, then DuckDuckGo if needed)
Fetches full page content from each result URL with concurrent processing
Returns structured data with search results and extracted content
Enhanced reliability: HTTP/2 error recovery, reduced timeouts, and better error handling
Example Usage:
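A tool call of roughly this shape (the parameter names here are illustrative - see API.md for the exact input schema):

```json
{
  "name": "full-web-search",
  "arguments": {
    "query": "latest advances in small language models",
    "limit": 3
  }
}
```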
2. get-web-search-summaries (Lightweight Alternative)
A lightweight alternative for quick search results:
Takes a search query and optional number of results (1-10, default 5)
Performs the same optimised multi-engine search as full-web-search
Returns only search result snippets/descriptions (no content extraction)
Faster and more efficient for quick research
Example Usage:
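Roughly (again, parameter names are illustrative - see API.md for the exact schema):

```json
{
  "name": "get-web-search-summaries",
  "arguments": {
    "query": "open source MCP servers",
    "limit": 5
  }
}
```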
3. get-single-web-page-content (Utility Tool)
A utility tool for extracting content from a specific webpage:
Takes a single URL as input
Follows the URL and extracts the main page content
Removes navigation, ads, and other non-content elements
Useful for getting detailed content from a known webpage
Example Usage:
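Roughly (the exact schema is documented in API.md):

```json
{
  "name": "get-single-web-page-content",
  "arguments": {
    "url": "https://example.com/some-article"
  }
}
```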
Standalone Usage
You can also run the server directly:
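For example, from the extracted and built project folder:

```sh
node dist/index.js
```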
Documentation
See API.md for complete technical details.
License
MIT License - see LICENSE for details.
Feedback
This is an open source project and we welcome feedback! If you encounter any issues or have suggestions for improvements, please:
Open an issue on GitHub
Submit a pull request