# šŸ” Presearch MCP Server [![Presearch](https://img.shields.io/badge/Presearch-Decentralized%20Search-blue?logo=presearch)](https://presearch.io/) [![MCP](https://img.shields.io/badge/MCP-Model%20Context%20Protocol-green)](https://modelcontextprotocol.io/) [![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![npm version](https://badge.fury.io/js/presearch-mcp-server.svg)](https://badge.fury.io/js/presearch-mcp-server) [![Node.js](https://img.shields.io/badge/node-%3E%3D20.0.0-brightgreen)](https://nodejs.org/) [![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-blue)](https://modelcontextprotocol.io) [![Powered by Presearch](https://img.shields.io/badge/Powered%20by-Presearch-blue)](https://presearch.io) > **Privacy-first search integration for AI assistants** - A professional Model Context Protocol (MCP) server that seamlessly integrates Presearch's decentralized search API with AI applications. ## 🌟 Overview Presearch MCP Server provides a robust, privacy-focused bridge between AI assistants and the Presearch decentralized search engine. Built with enterprise-grade architecture, it offers intelligent caching, multi-format exports, and comprehensive web scraping capabilities while maintaining user privacy and data security. ### šŸ”— Quick Links - **Repository**: [GitHub](https://github.com/NosytLabs/presearch-search-api-mcp) - **Smithery**: [Smithery.ai](https://smithery.ai/server/@NosytLabs/presearch-search-api-mcp) - **Presearch API**: [Developer Portal](https://presearch.io/searchapi) - **Issues**: [Support](https://github.com/NosytLabs/presearch-search-api-mcp/issues) ## ✨ Key Features ### šŸ›”ļø Privacy & Security - **Decentralized Search**: Leverages Presearch's distributed network for private, uncensored search results - **No Tracking**: Zero user data collection or behavioral tracking - **Secure Authentication**: Bearer token authentication with configurable API endpoints ### šŸ”§ AI Integration - **MCP Protocol Compliance**: Full Model Context Protocol implementation for seamless AI assistant integration - **Dual Transport Support**: Both HTTP and STDIO transport modes - **Smithery.ai Ready**: Pre-configured for instant deployment on Smithery platform ### šŸš€ Performance & Reliability - **Intelligent Caching**: Redis-compatible caching with configurable TTL and key limits - **Rate Limiting**: Built-in request throttling and retry mechanisms - **Error Handling**: Comprehensive error handling with detailed logging - **Health Monitoring**: Built-in health checks and performance metrics ### šŸ“Š Data Export & Analysis - **Multi-Format Support**: JSON, CSV, Markdown, HTML, and PDF export formats - **Content Scraping**: Intelligent web content extraction with metadata preservation - **Batch Processing**: Bulk search and scraping operations - **Quality Scoring**: Advanced result ranking and quality assessment ## šŸš€ Quick Start ### Prerequisites - Node.js ≄20.0.0 - Presearch API key ([Get yours here](https://presearch.io/searchapi)) - Docker (optional, for containerized deployment) ### Installation ```bash # Clone the repository git clone https://github.com/NosytLabs/presearch-search-api-mcp.git cd presearch-search-api-mcp # Install dependencies npm install # Configure environment cp .env.example .env ``` ### Configuration Edit `.env` file with your settings: ```env # Required: Your Presearch API key PRESEARCH_API_KEY=your_presearch_api_key_here # Optional: API configuration 
### Start the Server

**HTTP Mode** (for remote connections):

```bash
npm start
```

**STDIO Mode** (for local AI assistants):

```bash
npm run start:stdio
```

**Development Mode** (with auto-reload):

```bash
npm run dev
```

**Docker Mode** (for containerized deployment):

```bash
docker build -t presearch-mcp-server .
docker run -p 3000:3000 -v ./.env:/app/.env presearch-mcp-server
```

## 🔌 MCP Configuration

### HTTP Transport Configuration

Add to your MCP client configuration:

```json
{
  "mcpServers": {
    "presearch": {
      "transport": "http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}
```

### STDIO Transport Configuration

```json
{
  "mcpServers": {
    "presearch": {
      "transport": "stdio",
      "command": "node",
      "args": ["src/index.js", "--stdio"],
      "env": {
        "PRESEARCH_API_KEY": "your-api-key",
        "NODE_ENV": "production"
      }
    }
  }
}
```

## 🛠️ Available Tools

### Search Operations

- **`presearch_ai_search`** - Perform intelligent web searches with advanced filtering
- **`presearch_search_and_scrape`** - Search and automatically scrape top results
- **`export_search_results`** - Export search results in multiple formats

### Content Extraction

- **`scrape_url_content`** - Extract content from specified URLs
- **`export_site_content`** - Export scraped content with formatting
- **`content_analysis`** - Analyze content quality and relevance

### System Management

- **`presearch_health_check`** - Verify API connectivity and authentication
- **`cache_stats`** - View cache performance metrics
- **`cache_clear`** - Clear cached data

## 📖 Usage Examples

### Basic Search

```json
{
  "tool": "presearch_ai_search",
  "params": {
    "query": "decentralized finance trends 2024",
    "count": 10,
    "safesearch": "moderate",
    "freshness": "week"
  }
}
```

### Advanced Search with Scraping

```json
{
  "tool": "presearch_search_and_scrape",
  "params": {
    "query": "blockchain technology adoption",
    "count": 5,
    "scrape_count": 3,
    "lang": "en-US",
    "file_output": true,
    "export_format": "markdown"
  }
}
```

### Content Scraping

```json
{
  "tool": "scrape_url_content",
  "params": {
    "urls": [
      "https://presearch.io",
      "https://example.com/article"
    ],
    "include_metadata": true,
    "export_format": "json"
  }
}
```

### Exporting Search Results

```json
{
  "tool": "export_search_results",
  "params": {
    "query": "latest advancements in AI",
    "count": 20,
    "export_format": "csv",
    "file_path": "/path/to/save/results.csv"
  }
}
```

## 🔐 Presearch API Authentication

### Obtaining API Keys

1. Visit the [Presearch Developer Portal](https://presearch.io/searchapi)
2. Create a developer account
3. Generate your API key (Bearer token)
4. Add it to your `.env` file: `PRESEARCH_API_KEY=your_key_here`

### Authentication Headers

The server automatically handles authentication using the `Authorization: Bearer <token>` header format as specified in the Presearch API documentation.

### API Endpoints

- **Base URL**: `https://na-us-1.presearch.com/v1/search`
- **Authentication**: Bearer token in Authorization header
- **Rate Limits**: Configurable per your Presearch plan
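For reference, here is a minimal sketch of the kind of request the server makes on your behalf, using the built-in `fetch` in Node.js 20+. The base URL and Bearer header come from the documentation above; the `q` query-parameter name is an assumption, so confirm it against the official Presearch API reference before relying on it.

```js
// Hedged sketch: direct call to the Presearch search API with a Bearer token.
// The endpoint and Authorization header format are documented above; the "q"
// parameter name is an assumption to verify against the Presearch API docs.
const apiKey = process.env.PRESEARCH_API_KEY;

const url = new URL("https://na-us-1.presearch.com/v1/search");
url.searchParams.set("q", "decentralized search"); // assumed parameter name

const response = await fetch(url, {
  headers: { Authorization: `Bearer ${apiKey}` },
});

if (!response.ok) {
  throw new Error(`Presearch API returned ${response.status}`);
}
console.log(await response.json());
```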
## 🐳 Docker Deployment

### Quick Docker Setup

```bash
# Build the image
docker build -t presearch-mcp .

# Run with environment variables
docker run -p 3000:3000 -e PRESEARCH_API_KEY=your_key_here presearch-mcp
```

### Docker Compose

```yaml
version: '3.8'
services:
  presearch-mcp:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PRESEARCH_API_KEY=your_key_here
      - NODE_ENV=production
    volumes:
      - ./.env:/app/.env
```

## 🔧 Development

### Project Structure

```
presearch-search-api-mcp/
├── src/
│   ├── index.js      # Main server entry
│   ├── tools/        # MCP tool implementations
│   ├── utils/        # Utility functions
│   └── config/       # Configuration files
├── tests/            # Test files
├── docs/             # Documentation
└── exports/          # Generated exports
```

### Available Scripts

- `npm start` - Start HTTP server
- `npm run start:stdio` - Start STDIO server
- `npm run dev` - Development mode with auto-reload
- `npm run lint` - Run ESLint
- `npm run format` - Format code with Prettier

### Testing

```bash
# Run tests
npm test

# Run with coverage
npm run test:coverage
```

## 📊 Performance & Monitoring

### Cache Metrics

- Hit/miss ratios
- Response time tracking
- Memory usage monitoring

### Health Checks

- API connectivity verification
- Authentication validation
- System resource monitoring

## 🔒 Security Features

### Data Protection

- No user data persistence
- Encrypted API communications
- Secure token handling

### Rate Limiting

- Configurable request limits
- Automatic retry mechanisms
- DDoS protection

## 📝 License

MIT License - see the [LICENSE](LICENSE) file for details.

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

## 🐛 Support

- **Issues**: [GitHub Issues](https://github.com/NosytLabs/presearch-search-api-mcp/issues)
- **Documentation**: [Wiki](https://github.com/NosytLabs/presearch-search-api-mcp/wiki)
- **Discussions**: [GitHub Discussions](https://github.com/NosytLabs/presearch-search-api-mcp/discussions)

## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=NosytLabs/presearch-search-api-mcp&type=Date)](https://star-history.com/#NosytLabs/presearch-search-api-mcp&Date)

---

**Built with ❤️ for the privacy-conscious AI community**
