# Perplexica MCP Server
A Model Context Protocol (MCP) server that provides search functionality using Perplexica's AI-powered search engine.
## Features
- Search Tool: AI-powered web search with multiple focus modes
- Multiple Transport Support: stdio, SSE, and Streamable HTTP transports
- FastMCP Integration: Built using FastMCP for robust MCP protocol compliance
- Unified Architecture: Single server implementation supporting all transport modes
- Production Ready: Docker support with security best practices
## Installation

Python 3.7+ is required.
### From PyPI (Recommended)
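Assuming the package is published under the name `perplexica-mcp` (the name is inferred from this README, so verify it on PyPI), installation looks like:

```shell
# install into the current environment
pip install perplexica-mcp

# or run it on demand without installing, via uvx
uvx perplexica-mcp
```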
### From Source
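A typical from-source setup with `uv` (the repository URL below is a placeholder; substitute the real one):

```shell
# clone the repository (placeholder URL)
git clone https://github.com/<owner>/perplexica-mcp.git
cd perplexica-mcp

# install dependencies into a local virtual environment
uv sync
```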
## MCP Client Configuration
To use this server with MCP clients, you need to configure the client to connect to the Perplexica MCP server. Below are configuration examples for popular MCP clients.
### Claude Desktop

#### Stdio Transport (Recommended)
Add the following to your Claude Desktop configuration file:
Location: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows)
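A minimal sketch of the stdio entry, assuming the server is launched via `uvx` and that your Perplexica instance runs locally on its default port (both are assumptions to adjust):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```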
Alternative (from source):
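For a source checkout, the same entry can point `uv` at the project directory (the path and argument order are assumptions):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/perplexica-mcp", "perplexica-mcp"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```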
#### SSE Transport
For SSE transport, first start the server:
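For example (the flag names are assumptions; run the server with `--help` to confirm the exact CLI):

```shell
# start the server in SSE mode on port 3001 (flag names are assumptions)
uvx perplexica-mcp --transport sse --host 0.0.0.0 --port 3001
```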
Then configure Claude Desktop:
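Claude Desktop itself speaks stdio, so SSE servers are commonly reached through a stdio-to-remote bridge such as `mcp-remote` (this bridging approach is an assumption about your client setup):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:3001/sse"]
    }
  }
}
```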
### Cursor IDE
Add to your Cursor MCP configuration:
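Cursor reads MCP servers from `~/.cursor/mcp.json` (a project-level `.cursor/mcp.json` also works); a sketch mirroring the Claude Desktop entry, with the same assumptions about package name and backend URL:

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```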
Alternative (from source):
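As with Claude Desktop, a source checkout can be launched through `uv` (path and arguments are assumptions):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/perplexica-mcp", "perplexica-mcp"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```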
### Generic MCP Client Configuration
For any MCP client supporting stdio transport:
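Any stdio-capable client needs just a command, arguments, and environment (a generic sketch; the exact field names vary by client):

```json
{
  "command": "uvx",
  "args": ["perplexica-mcp"],
  "env": {
    "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
  }
}
```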
For HTTP/SSE transport clients:
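Network clients can use the endpoints documented under Transport Details (host and port assume the defaults used elsewhere in this README):

```
Streamable HTTP: http://localhost:3001/mcp/
SSE:             http://localhost:3001/sse
```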
### Configuration Notes

- Path Configuration: Replace `/path/to/perplexica-mcp/` with the actual path to your installation
- Perplexica URL: Ensure `PERPLEXICA_BACKEND_URL` points to your running Perplexica instance
- Transport Selection:
  - Use stdio for most MCP clients (Claude Desktop, Cursor)
  - Use SSE for web-based clients or real-time applications
  - Use HTTP for REST API integrations
- Dependencies: Ensure `uvx` is installed and available in your PATH (or `uv` for source installations)
## Troubleshooting

- Server not starting: Check that `uvx` (or `uv` for source) is installed and the path is correct
- Connection refused: Verify Perplexica is running and accessible at the configured URL
- Permission errors: Ensure the MCP client has permission to execute the server command
- Environment variables: Check that `PERPLEXICA_BACKEND_URL` is properly set
## Server Configuration
Create a `.env` file in the project root with your Perplexica configuration:
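For example (the variable name comes from this README; the URL assumes a local Perplexica instance on its default port):

```shell
# .env
PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search
```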
## Usage
The server supports three transport modes:
1. Stdio Transport
2. SSE Transport
3. Streamable HTTP Transport
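Assuming a `--transport`-style CLI (the exact flag names are assumptions; run the server with `--help` to confirm), the three modes might be started as:

```shell
# 1. stdio (default; used by most MCP clients)
uvx perplexica-mcp --transport stdio

# 2. SSE on port 3001
uvx perplexica-mcp --transport sse --host 0.0.0.0 --port 3001

# 3. Streamable HTTP on port 3001
uvx perplexica-mcp --transport http --host 0.0.0.0 --port 3001
```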
## Docker Deployment
The server includes Docker support with multiple transport configurations for containerized deployments.
### Prerequisites
- Docker and Docker Compose installed
- External Docker network named `backend` (for integration with Perplexica)
### Create External Network
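The network name `backend` is the one the prerequisites call for:

```shell
docker network create backend
```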
### Build and Run
#### Option 1: HTTP Transport (Streamable HTTP)
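Assuming a transport-specific compose file (the filename is an assumption; check the repository):

```shell
docker compose -f docker-compose.http.yml up -d   # container: perplexica-mcp-http, port 3001
```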
#### Option 2: SSE Transport (Server-Sent Events)
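Likewise for SSE (the filename is an assumption):

```shell
docker compose -f docker-compose.sse.yml up -d    # container: perplexica-mcp-sse, port 3001
```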
### Environment Configuration
Both Docker configurations support environment variables:
Or set environment variables directly in the compose file:
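A sketch of an inline override (service name and URL assume the HTTP container and a Perplexica service reachable as `perplexica` on the shared network):

```yaml
services:
  perplexica-mcp-http:
    environment:
      - PERPLEXICA_BACKEND_URL=http://perplexica:3000/api/search
```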
### Container Details
| Transport | Container Name | Port | Endpoint | Health Check |
|---|---|---|---|---|
| HTTP | perplexica-mcp-http | 3001 | /mcp/ | MCP initialize request |
| SSE | perplexica-mcp-sse | 3001 | /sse | SSE endpoint check |
### Health Monitoring
Both containers include health checks:
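The current health status can be read with standard Docker tooling (container names come from the Container Details table):

```shell
docker inspect --format '{{.State.Health.Status}}' perplexica-mcp-http
docker inspect --format '{{.State.Health.Status}}' perplexica-mcp-sse
```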
### Integration with Perplexica
The Docker setup assumes Perplexica is running in the same Docker network:
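On the shared network, the MCP containers can reach Perplexica by its service name (the hostname `perplexica` and port are assumptions about your Perplexica deployment):

```shell
PERPLEXICA_BACKEND_URL=http://perplexica:3000/api/search
```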
### Production Considerations

- Both containers use `restart: unless-stopped` for reliability
- Health checks ensure service availability
- External network allows integration with existing Perplexica deployments
- Security best practices implemented in the Dockerfile
## Available Tools
### search
Performs AI-powered web search using Perplexica.
Parameters:

- `query` (string, required): Search query
- `focus_mode` (string, required): One of `webSearch`, `academicSearch`, `writingAssistant`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch`
- `chat_model` (string, optional): Chat model configuration
- `embedding_model` (string, optional): Embedding model configuration
- `optimization_mode` (string, optional): `speed` or `balanced`
- `history` (array, optional): Conversation history
- `system_instructions` (string, optional): Custom instructions
- `stream` (boolean, optional): Whether to stream responses
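The parameters above map onto a Perplexica-style search request roughly as follows. This is a sketch: the camelCase field names and optional-field handling are assumptions about Perplexica's API, while the parameter list mirrors the tool's documented parameters.

```python
def build_search_payload(query, focus_mode, optimization_mode="balanced",
                         history=None, system_instructions=None, stream=False,
                         chat_model=None, embedding_model=None):
    """Map the MCP tool parameters onto a Perplexica-style request body.

    Field names are assumptions; only the parameter names come from this README.
    """
    payload = {
        "query": query,
        "focusMode": focus_mode,
        "optimizationMode": optimization_mode,
        "history": history or [],
        "stream": stream,
    }
    # optional model configuration and instructions are sent only when provided
    if chat_model is not None:
        payload["chatModel"] = chat_model
    if embedding_model is not None:
        payload["embeddingModel"] = embedding_model
    if system_instructions is not None:
        payload["systemInstructions"] = system_instructions
    return payload


payload = build_search_payload("history of transformers", "webSearch")
```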
## Testing
Run the comprehensive test suite to verify all transports:
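For example (the script name is hypothetical; use whatever test entry point the repository ships):

```shell
uv run python test_transports.py
```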
This will test:
- ✓ Stdio transport with MCP protocol handshake
- ✓ HTTP transport with Streamable HTTP compliance
- ✓ SSE transport endpoint accessibility
## Transport Details
### Stdio Transport
- Uses FastMCP's built-in stdio server
- Supports full MCP protocol including initialization and tool listing
- Ideal for MCP client integration
### SSE Transport
- Server-Sent Events for real-time communication
- Endpoint: `http://host:port/sse`
- Includes periodic ping messages for connection health
### Streamable HTTP Transport
- Compliant with MCP Streamable HTTP specification
- Endpoint: `http://host:port/mcp`
- Returns a 307 redirect to `/mcp/` as per the protocol
- Uses StreamableHTTPSessionManager for proper session handling
## Development
The server is built using:
- FastMCP: Modern MCP server framework with built-in transport support
- Uvicorn: ASGI server for SSE and HTTP transports
- httpx: HTTP client for Perplexica API communication
- python-dotenv: Environment variable management
## Architecture
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## Support
For issues and questions:
- Check the troubleshooting section
- Review the Perplexica documentation
- Open an issue on GitHub