
🔍 Presearch MCP Server


Privacy-first search integration for AI assistants: a professional Model Context Protocol (MCP) server that integrates Presearch's decentralized search API with AI applications.

🌟 Overview

Presearch MCP Server provides a robust, privacy-focused bridge between AI assistants and the Presearch decentralized search engine. Built with enterprise-grade architecture, it offers intelligent caching, multi-format exports, and comprehensive web scraping capabilities while maintaining user privacy and data security.


✨ Key Features

🛡️ Privacy & Security

  • Decentralized Search: Leverages Presearch's distributed network for private, uncensored search results

  • No Tracking: Zero user data collection or behavioral tracking

  • Secure Authentication: Bearer token authentication with configurable API endpoints

🔧 AI Integration

  • MCP Protocol Compliance: Full Model Context Protocol implementation for seamless AI assistant integration

  • Dual Transport Support: Both HTTP and STDIO transport modes

  • Smithery.ai Ready: Pre-configured for instant deployment on Smithery platform

🚀 Performance & Reliability

  • Intelligent Caching: Redis-compatible caching with configurable TTL and key limits

  • Rate Limiting: Built-in request throttling and retry mechanisms

  • Error Handling: Comprehensive error handling with detailed logging

  • Health Monitoring: Built-in health checks and performance metrics
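The caching behavior described above (a TTL plus a key cap) can be modeled with a small in-memory store. This is an illustrative sketch only, not the server's actual implementation; the `ttlMs`/`maxKeys` names mirror the `CACHE_TTL` and `CACHE_MAX_KEYS` settings:

```javascript
// Minimal TTL cache with a maximum key count (illustrative model of the
// server's caching, not its real code).
class TTLCache {
  constructor({ ttlMs = 3600 * 1000, maxKeys = 1000 } = {}) {
    this.ttlMs = ttlMs;
    this.maxKeys = maxKeys;
    this.store = new Map(); // key -> { value, expiresAt }
  }

  set(key, value) {
    // Evict the oldest entry once the key limit is reached.
    if (!this.store.has(key) && this.store.size >= this.maxKeys) {
      const oldest = this.store.keys().next().value;
      this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired
      return undefined;
    }
    return entry.value;
  }
}
```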

📊 Data Export & Analysis

  • Multi-Format Support: JSON, CSV, Markdown, HTML, and PDF export formats

  • Content Scraping: Intelligent web content extraction with metadata preservation

  • Batch Processing: Bulk search and scraping operations

  • Quality Scoring: Advanced result ranking and quality assessment

🚀 Quick Start

Prerequisites

  • Node.js ≥ 20.0.0

  • Presearch API key (Get yours here)

  • Docker (optional, for containerized deployment)

Installation

```bash
# Clone the repository
git clone https://github.com/NosytLabs/presearch-search-api-mcp.git
cd presearch-search-api-mcp

# Install dependencies
npm install

# Configure environment
cp .env.example .env
```

Configuration

Edit the .env file with your settings:

```bash
# Required: Your Presearch API key
PRESEARCH_API_KEY=your_presearch_api_key_here

# Optional: API configuration
PRESEARCH_BASE_URL=https://na-us-1.presearch.com

# Optional: Server configuration
PORT=3000
NODE_ENV=production

# Optional: Cache settings
CACHE_ENABLED=true
CACHE_TTL=3600
CACHE_MAX_KEYS=1000

# Optional: Logging
LOG_LEVEL=info
```
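At startup the server presumably reads these variables and applies defaults for the optional ones. A hedged sketch of that pattern follows; the variable names come from the .env file above, but the `loadConfig` helper itself is hypothetical:

```javascript
// Hypothetical config loader: validates the required key and applies
// defaults matching the optional settings documented above.
function loadConfig(env = process.env) {
  if (!env.PRESEARCH_API_KEY) {
    throw new Error('PRESEARCH_API_KEY is required');
  }
  return {
    apiKey: env.PRESEARCH_API_KEY,
    baseUrl: env.PRESEARCH_BASE_URL || 'https://na-us-1.presearch.com',
    port: Number(env.PORT || 3000),
    cache: {
      enabled: (env.CACHE_ENABLED || 'true') === 'true',
      ttl: Number(env.CACHE_TTL || 3600),
      maxKeys: Number(env.CACHE_MAX_KEYS || 1000),
    },
    logLevel: env.LOG_LEVEL || 'info',
  };
}
```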

Start the Server

HTTP Mode (for remote connections):

npm start

STDIO Mode (for local AI assistants):

npm run start:stdio

Development Mode (with auto-reload):

npm run dev

Docker Mode (for containerized deployment):

```bash
docker build -t presearch-mcp-server .
docker run -p 3000:3000 -v ./.env:/app/.env presearch-mcp-server
```

🔌 MCP Configuration

HTTP Transport Configuration

Add to your MCP client configuration:

```json
{
  "mcpServers": {
    "presearch": {
      "transport": "http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}
```

STDIO Transport Configuration

```json
{
  "mcpServers": {
    "presearch": {
      "transport": "stdio",
      "command": "node",
      "args": ["src/index.js", "--stdio"],
      "env": {
        "PRESEARCH_API_KEY": "your-api-key",
        "NODE_ENV": "production"
      }
    }
  }
}
```

🛠️ Available Tools

Search Operations

  • presearch_ai_search - Perform intelligent web searches with advanced filtering

  • presearch_search_and_scrape - Search and automatically scrape top results

  • export_search_results - Export search results in multiple formats

Content Extraction

  • scrape_url_content - Extract content from specified URLs

  • export_site_content - Export scraped content with formatting

  • content_analysis - Analyze content quality and relevance

System Management

  • presearch_health_check - Verify API connectivity and authentication

  • cache_stats - View cache performance metrics

  • cache_clear - Clear cached data

📖 Usage Examples

Basic Search

```json
{
  "tool": "presearch_ai_search",
  "params": {
    "query": "decentralized finance trends 2024",
    "count": 10,
    "safesearch": "moderate",
    "freshness": "week"
  }
}
```

Advanced Search with Scraping

```json
{
  "tool": "presearch_search_and_scrape",
  "params": {
    "query": "blockchain technology adoption",
    "count": 5,
    "scrape_count": 3,
    "lang": "en-US",
    "file_output": true,
    "export_format": "markdown"
  }
}
```

Content Scraping

```json
{
  "tool": "scrape_url_content",
  "params": {
    "urls": [
      "https://presearch.io",
      "https://example.com/article"
    ],
    "include_metadata": true,
    "export_format": "json"
  }
}
```

Exporting Search Results

```json
{
  "tool": "export_search_results",
  "params": {
    "query": "latest advancements in AI",
    "count": 20,
    "export_format": "csv",
    "file_path": "/path/to/save/results.csv"
  }
}
```
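When a tool call like the examples above travels over the HTTP transport, an MCP client wraps it in JSON-RPC 2.0 framing with the `tools/call` method. The sketch below shows that envelope for illustration; your MCP client normally constructs it for you, and `buildToolCall` is a hypothetical helper, not part of this server:

```javascript
// Build a JSON-RPC 2.0 "tools/call" request as used by MCP.
// Tool name and arguments match the Basic Search example above.
function buildToolCall(id, tool, args) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name: tool, arguments: args },
  };
}

const request = buildToolCall(1, 'presearch_ai_search', {
  query: 'decentralized finance trends 2024',
  count: 10,
});
```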

šŸ” Presearch API Authentication

Obtaining API Keys

  1. Visit Presearch Developer Portal

  2. Create a developer account

  3. Generate your API key (Bearer token)

  4. Add to your .env file: PRESEARCH_API_KEY=your_key_here

Authentication Headers

The server automatically handles authentication using the `Authorization: Bearer <token>` header format, as specified in the Presearch API documentation.
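That header handling amounts to attaching the key as a Bearer token on every outbound API request. A minimal sketch, assuming a hypothetical `authHeaders` helper (the header format is from the Presearch docs; the helper itself is illustrative):

```javascript
// Build the request headers the server attaches to Presearch API calls.
function authHeaders(apiKey) {
  if (!apiKey) throw new Error('Missing Presearch API key');
  return {
    Authorization: `Bearer ${apiKey}`, // Bearer token per the Presearch API docs
    Accept: 'application/json',
  };
}

// Usage sketch against the search endpoint listed under "API Endpoints":
// fetch('https://na-us-1.presearch.com/v1/search?q=test', { headers: authHeaders(key) })
```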

API Endpoints

  • Search Endpoint: https://na-us-1.presearch.com/v1/search

  • Authentication: Bearer token in Authorization header

  • Rate Limits: Configurable per your Presearch plan

🐳 Docker Deployment

Quick Docker Setup

```bash
# Build the image
docker build -t presearch-mcp .

# Run with environment variables
docker run -p 3000:3000 -e PRESEARCH_API_KEY=your_key_here presearch-mcp
```

Docker Compose

```yaml
version: '3.8'
services:
  presearch-mcp:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PRESEARCH_API_KEY=your_key_here
      - NODE_ENV=production
    volumes:
      - ./.env:/app/.env
```

🔧 Development

Project Structure

```
presearch-search-api-mcp/
├── src/
│   ├── index.js       # Main server entry
│   ├── tools/         # MCP tool implementations
│   ├── utils/         # Utility functions
│   └── config/        # Configuration files
├── tests/             # Test files
├── docs/              # Documentation
└── exports/           # Generated exports
```

Available Scripts

  • npm start - Start HTTP server

  • npm run start:stdio - Start STDIO server

  • npm run dev - Development mode with auto-reload

  • npm run lint - Run ESLint

  • npm run format - Format code with Prettier

Testing

```bash
# Run tests
npm test

# Run with coverage
npm run test:coverage
```

📊 Performance & Monitoring

Cache Metrics

  • Hit/miss ratios

  • Response time tracking

  • Memory usage monitoring

Health Checks

  • API connectivity verification

  • Authentication validation

  • System resource monitoring

🔒 Security Features

Data Protection

  • No user data persistence

  • Encrypted API communications

  • Secure token handling

Rate Limiting

  • Configurable request limits

  • Automatic retry mechanisms

  • DDoS protection
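The "automatic retry mechanisms" above can be modeled with a standard retry-with-exponential-backoff wrapper. This is an illustrative sketch, not the server's actual retry code; `withRetry` and its parameters are hypothetical:

```javascript
// Retry a failing async operation with exponential backoff
// (200ms, 400ms, 800ms, ... by default). Illustrative only.
async function withRetry(fn, { retries = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      const delay = baseDelayMs * 2 ** attempt; // exponential backoff
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts exhausted
}
```

A wrapper like this is typically placed around the outbound API call so transient failures (rate-limit responses, network hiccups) are absorbed before surfacing an error to the MCP client.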

šŸ“ License

MIT License - see LICENSE file for details.

šŸ¤ Contributing

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Add tests

  5. Submit a pull request

šŸ› Support

🌟 Star History



Built with ā¤ļø for the privacy-conscious AI community
