WebSurfer MCP
A powerful Model Context Protocol (MCP) server that enables Large Language Models (LLMs) to fetch and extract readable text content from web pages. This tool provides a secure, efficient, and feature-rich way for AI assistants to access web content through a standardized interface.
Features
Secure URL Validation: Blocks dangerous schemes, private IPs, and localhost domains
Smart Content Extraction: Extracts clean, readable text from HTML pages using trafilatura and BeautifulSoup
Rate Limiting: Built-in rate limiting to prevent abuse (60 requests/minute)
Content Type Filtering: Only processes supported content types (HTML, plain text, XML)
Size Limits: Configurable content size limits (default: 10MB)
Timeout Management: Configurable request timeouts with validation
Comprehensive Error Handling: Detailed error messages for various failure scenarios
Full Test Coverage: 45+ unit tests covering all functionality
Architecture
The project consists of several key components:
Core Components
MCPURLSearchServer: Main MCP server implementation
TextExtractor: Handles web content fetching and text extraction
URLValidator: Validates and sanitizes URLs for security
Config: Centralized configuration management
Key Features
Async/Await: Built with modern Python async patterns for high performance
Resource Management: Proper cleanup of network connections and resources
Context Managers: Safe resource handling with automatic cleanup
Logging: Comprehensive logging for debugging and monitoring
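As a rough illustration of how these components fit together, the sketch below fetches a page and extracts its readable text in one async call using aiohttp and trafilatura; the function name and flow are illustrative assumptions, not the project's actual API.

```python
# Illustrative sketch only; the real TextExtractor/URLValidator APIs may differ.
import asyncio

import aiohttp
import trafilatura


async def fetch_readable_text(url: str, timeout_s: float = 10.0) -> str:
    """Fetch a page and return its readable text (hypothetical helper)."""
    timeout = aiohttp.ClientTimeout(total=timeout_s)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as resp:
            resp.raise_for_status()
            html = await resp.text()
    # trafilatura.extract returns the main readable text, or None on failure
    return trafilatura.extract(html) or ""


if __name__ == "__main__":
    print(asyncio.run(fetch_readable_text("https://example.com")))
```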
Installation
Prerequisites
Python 3.12 or higher
uv package manager (recommended)
Quick Start
Clone the repository:
git clone https://github.com/crybo-rybo/websurfer-mcp
cd websurfer-mcp
Install dependencies:
uv sync
Verify installation:
uv run python -c "import mcp_url_search_server; print('Installation successful!')"
Usage
Starting the MCP Server
The server communicates via stdio (standard input/output) and can be integrated with any MCP-compatible client.
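A minimal sketch of connecting over stdio with the MCP Python SDK is shown below; the launch command and module entry point are assumptions and should be adjusted to how you actually start the server.

```python
# Sketch of an MCP stdio client; the server launch command below is an assumption.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(
        command="uv",
        args=["run", "python", "-m", "mcp_url_search_server"],  # hypothetical entry point
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```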
Testing URL Search Functionality
Test the URL search functionality directly:
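As a sketch, a client with an initialized session (see the example above) could invoke the fetch tool along these lines; the tool name and argument keys are assumptions and should be checked against the list_tools() output.

```python
# Assumes `session` is an initialized ClientSession; tool and argument names are assumptions.
result = await session.call_tool("fetch_url", {"url": "https://example.com", "timeout": 10})
for block in result.content:
    if block.type == "text":
        print(block.text[:500])  # first 500 characters of extracted text
```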
Example Test Output
Configuration
The server can be configured using environment variables:
Request timeout: default request timeout in seconds
Maximum timeout: maximum allowed timeout in seconds
User agent: user agent string sent with requests
Maximum content size: maximum content size in bytes (default: 10MB)
Example Configuration
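Purely as an illustration of the pattern, environment variables can be exported before starting the server; the variable names and values below are hypothetical placeholders, not the server's actual settings:

export WEBSURFER_REQUEST_TIMEOUT=15              # hypothetical name and value
export WEBSURFER_MAX_TIMEOUT=60                  # hypothetical name and value
export WEBSURFER_USER_AGENT="WebSurferMCP/1.0"   # hypothetical name and value
export WEBSURFER_MAX_CONTENT_SIZE=10485760       # hypothetical name; 10MB in bytes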
Testing
Running All Tests
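Assuming the suite uses pytest (a reasonable guess given the uv setup; the exact invocation may differ), the full suite can typically be run with:

uv run pytest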
Running Specific Test Files
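A single file can be targeted the same way; the file name below is a hypothetical example:

uv run pytest tests/test_url_validator.py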
Test Results
All 45 tests should pass successfully.
Development
Project Structure
Security Features
URL Validation
Scheme Blocking: Blocks file://, javascript:, and ftp:// schemes
Private IP Protection: Blocks access to private IP ranges (10.x.x.x, 192.168.x.x, etc.)
Localhost Protection: Blocks localhost and local domain access
URL Length Limits: Prevents extremely long URLs
Format Validation: Ensures proper URL structure
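The actual checks live in URLValidator; a minimal standard-library sketch of the rules above might look like this (the length limit and exact rules are illustrative):

```python
# Minimal sketch of the URL checks described above; URLValidator's real logic may differ.
import ipaddress
from urllib.parse import urlparse

MAX_URL_LENGTH = 2048  # illustrative limit


def is_url_allowed(url: str) -> bool:
    if len(url) > MAX_URL_LENGTH:
        return False
    parsed = urlparse(url)
    # Allow only http/https, which also excludes file://, javascript:, and ftp://
    if parsed.scheme not in {"http", "https"}:
        return False
    host = (parsed.hostname or "").lower()
    if host == "localhost" or host.endswith(".local"):
        return False
    try:
        ip = ipaddress.ip_address(host)
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    except ValueError:
        pass  # not an IP literal; only hostname-level checks in this sketch
    return True
```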
Content Safety
Content Type Filtering: Only processes supported text-based content types
Size Limits: Configurable maximum content size (default: 10MB)
Rate Limiting: Prevents abuse with configurable limits
Timeout Protection: Configurable request timeouts
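A simple sliding-window limiter in the spirit of the 60 requests/minute default could be sketched as follows; the server's actual implementation may differ.

```python
# Sliding-window rate limiter sketch; not the server's actual implementation.
import time
from collections import deque


class SlidingWindowRateLimiter:
    def __init__(self, max_requests: int = 60, window_s: float = 60.0) -> None:
        self.max_requests = max_requests
        self.window_s = window_s
        self._timestamps: deque[float] = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Evict timestamps that have fallen outside the window
        while self._timestamps and now - self._timestamps[0] > self.window_s:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_requests:
            return False
        self._timestamps.append(now)
        return True
```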
Performance
Async Processing: Non-blocking I/O for high concurrency
Connection Pooling: Efficient HTTP connection reuse
DNS Caching: Reduces DNS lookup overhead
Resource Cleanup: Automatic cleanup prevents memory leaks
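With aiohttp, connection pooling and DNS caching are typically configured on the connector; the limits below are illustrative examples, not the server's actual settings.

```python
# Illustrative aiohttp connector setup; the limits shown are examples only.
import asyncio

import aiohttp


async def main() -> None:
    connector = aiohttp.TCPConnector(
        limit=100,          # total connection pool size
        limit_per_host=10,  # connections per host
        ttl_dns_cache=300,  # cache DNS lookups for 5 minutes
    )
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://example.com") as resp:
            print(resp.status)


asyncio.run(main())
```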
Acknowledgments
Built with the Model Context Protocol (MCP)
Uses aiohttp for async HTTP requests
Leverages trafilatura for content extraction
Powered by BeautifulSoup for HTML parsing
Happy web surfing with your AI assistant!