The Perplexica MCP Server provides AI-powered web search capabilities through the Model Context Protocol (MCP). You can perform specialized searches using various focus modes, including webSearch, academicSearch, writingAssistant, wolframAlphaSearch, youtubeSearch, and redditSearch. The server integrates with MCP clients such as Claude Desktop and Cursor IDE via multiple transport protocols (stdio, SSE, HTTP) and supports flexible AI model configuration with providers like OpenAI, Anthropic, and Ollama. You can customize search behavior with optimization modes (speed/balanced), streaming responses, conversation history, and system instructions. The server is easily deployable through Docker with production-ready configurations and health monitoring, and can be installed from PyPI or from source.
Perplexica MCP Server
A Model Context Protocol (MCP) server that provides search functionality using Perplexica's AI-powered search engine.
Features
Search Tool: AI-powered web search with multiple focus modes
Multiple Transport Support: stdio, SSE, and Streamable HTTP transports
FastMCP Integration: Built using FastMCP for robust MCP protocol compliance
Unified Architecture: Single server implementation supporting all transport modes
Production Ready: Docker support with security best practices
Development Environment
For Claude Code Users
Important: If you are using Claude Code for development, this project requires the use of the container-use
MCP server for all development operations. All file operations, code changes, and shell commands must be executed within container-use environments.
Working with Container-Use (Claude Code Only)
When contributing to this project using Claude Code, you must:
Use Container-Use Only: All file operations, code editing, and shell commands must be performed using container-use environments
View Your Work: After making changes, inform others how to access your work:
Use container-use log <env_id> to view the development log
Use container-use checkout <env_id> to check out your environment
No Local Operations: Do not perform file operations directly on the local filesystem
Example Development Workflow (Claude Code)
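The workflow itself is driven through the container-use MCP tools, so only the review commands below run locally. A minimal sketch, assuming container-use reports an environment ID for your session:

```bash
# Inspect the commit history recorded inside the environment
container-use log <env_id>

# Check the environment's work out locally for review
container-use checkout <env_id>
```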
This ensures consistency, reproducibility, and proper version control for all development activities when using Claude Code.
For Other Development Environments
If you are not using Claude Code, you can develop normally using your preferred tools and IDE. The container-use requirement does not apply to regular development workflows.
Installation
From PyPI (Recommended)
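Assuming the package is published under the name perplexica-mcp (the name used for the project elsewhere in this README):

```bash
pip install perplexica-mcp
# or, with uv installed, run it on demand without a permanent install
uvx perplexica-mcp
```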
From Source
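A typical from-source setup with uv (the repository URL is a placeholder):

```bash
git clone https://github.com/<owner>/perplexica-mcp.git
cd perplexica-mcp
uv sync   # create a virtual environment and install dependencies
```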
MCP Client Configuration
To use this server with MCP clients, you need to configure the client to connect to the Perplexica MCP server. Below are configuration examples for popular MCP clients.
Claude Desktop
Stdio Transport (Recommended)
Add the following to your Claude Desktop configuration file:
Location: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
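A sketch of a stdio entry, assuming the package exposes a perplexica-mcp command runnable via uvx and that the transport is selected by a positional argument:

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp", "stdio"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```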
Alternative (from source):
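From a source checkout the same entry can go through uv instead (the path and script name are placeholders):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/perplexica-mcp/", "perplexica-mcp", "stdio"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```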
SSE Transport
For SSE transport, first start the server:
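For example (the CLI flags here are assumptions; check the command's --help for the real options):

```bash
uvx perplexica-mcp sse --host 0.0.0.0 --port 3001
```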
Then configure Claude Desktop:
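Field names vary by client; this sketch assumes a URL-style server entry pointing at the SSE endpoint:

```json
{
  "mcpServers": {
    "perplexica-sse": {
      "url": "http://localhost:3001/sse"
    }
  }
}
```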
Cursor IDE
Add to your Cursor MCP configuration:
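Cursor reads the same mcpServers structure (typically in .cursor/mcp.json or ~/.cursor/mcp.json); a stdio sketch with the same assumptions as above:

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp", "stdio"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search"
      }
    }
  }
}
```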
Alternative (from source):
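As with Claude Desktop, a from-source variant can route through uv (path and script name are placeholders):

```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/perplexica-mcp/", "perplexica-mcp", "stdio"]
    }
  }
}
```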
Generic MCP Client Configuration
For any MCP client supporting stdio transport:
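The client only needs the command, its arguments, and the environment; conceptually (exact invocation assumed, as above):

```json
{
  "command": "uvx",
  "args": ["perplexica-mcp", "stdio"],
  "env": { "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search" }
}
```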
For HTTP/SSE transport clients:
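Point the client at the endpoints described under Transport Details below. A quick connectivity check with curl (port 3001 matches the Docker setup):

```bash
# Streamable HTTP: send an MCP initialize request (note the trailing slash after the 307 redirect)
curl -s -X POST http://localhost:3001/mcp/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl","version":"0.0"}}}'

# SSE: open the event stream (stays open; Ctrl+C to stop)
curl -N http://localhost:3001/sse
```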
Configuration Notes
Path Configuration: Replace /path/to/perplexica-mcp/ with the actual path to your installation
Perplexica URL: Ensure PERPLEXICA_BACKEND_URL points to your running Perplexica instance
Transport Selection:
Use stdio for most MCP clients (Claude Desktop, Cursor)
Use SSE for web-based clients or real-time applications
Use HTTP for REST API integrations
Dependencies: Ensure uvx is installed and available in your PATH (or uv for source installations)
Troubleshooting
Server not starting: Check that uvx (or uv for source) is installed and that the path is correct
Connection refused: Verify Perplexica is running and accessible at the configured URL
Permission errors: Ensure the MCP client has permission to execute the server command
Environment variables: Check that PERPLEXICA_BACKEND_URL is properly set
Server Configuration
Create a .env file in the project root with your Perplexica configuration:
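At minimum it needs the backend URL; adjust it to wherever your Perplexica instance is reachable:

```
PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search
```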
Environment Variables
PERPLEXICA_BACKEND_URL: URL to the Perplexica search API (for example, http://localhost:3000/api/search)
Chat model provider: default provider for chat models (for example, OpenAI, Anthropic, or Ollama); no default
Chat model name: default chat model for the chosen provider; no default
Embedding model provider: default provider for embedding models; no default
Embedding model name: default embedding model for the chosen provider; no default
Note: The model environment variables are optional. If not set, you'll need to specify models in each search request. When set, they provide convenient defaults that can still be overridden per request.
Usage
The server supports three transport modes:
1. Stdio Transport
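A sketch of the invocation (transport selection via a positional argument is an assumption; check the command's --help):

```bash
uvx perplexica-mcp stdio
```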
2. SSE Transport
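Assumed invocation; port 3001 matches the Docker setup:

```bash
uvx perplexica-mcp sse --host 0.0.0.0 --port 3001
```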
3. Streamable HTTP Transport
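Assumed invocation:

```bash
uvx perplexica-mcp http --host 0.0.0.0 --port 3001
```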
Docker Deployment
The server includes Docker support with multiple transport configurations for containerized deployments.
Prerequisites
Docker and Docker Compose installed
External Docker network named backend (for integration with Perplexica)
Create External Network
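The name matches the network referenced by the compose files:

```bash
docker network create backend
```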
Build and Run
Option 1: HTTP Transport (Streamable HTTP)
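Assuming the HTTP variant is described by the default compose file:

```bash
docker compose up -d --build
```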
Option 2: SSE Transport (Server-Sent Events)
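The SSE variant typically lives in its own compose file; the file name below is a placeholder for whichever one ships in the repository:

```bash
docker compose -f docker-compose.sse.yml up -d --build
```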
Environment Configuration
Both Docker configurations support environment variables, which can come from the .env file described above or be set directly in the compose file:
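A sketch of a compose-level override (the service name and the perplexica host name are placeholders):

```yaml
services:
  perplexica-mcp:
    environment:
      - PERPLEXICA_BACKEND_URL=http://perplexica:3000/api/search
```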
Container Details
| Transport | Container Name | Port | Endpoint | Health Check |
| --- | --- | --- | --- | --- |
| HTTP | defined in the compose file | 3001 | /mcp | MCP initialize request |
| SSE | defined in the compose file | 3001 | /sse | SSE endpoint check |
Health Monitoring
Both containers include health checks:
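To see what Docker reports, query the health status directly (substitute the container name shown by docker ps):

```bash
docker inspect --format '{{.State.Health.Status}}' <container-name>
```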
Integration with Perplexica
The Docker setup assumes Perplexica is running in the same Docker network:
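In practice this means the backend URL references Perplexica by its container or service name rather than localhost; perplexica below is a placeholder for that name, and the port and path should match your Perplexica deployment:

```
PERPLEXICA_BACKEND_URL=http://perplexica:3000/api/search
```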
Production Considerations
Both containers use restart: unless-stopped for reliability
Health checks ensure service availability
External network allows integration with existing Perplexica deployments
Security best practices implemented in the Dockerfile
Available Tools
search
Performs AI-powered web search using Perplexica.
Parameters:
query (string, required): Search query
focus_mode (string, required): One of 'webSearch', 'academicSearch', 'writingAssistant', 'wolframAlphaSearch', 'youtubeSearch', 'redditSearch'
chat_model (string, optional): Chat model configuration
embedding_model (string, optional): Embedding model configuration
optimization_mode (string, optional): 'speed' or 'balanced'
history (array, optional): Conversation history
system_instructions (string, optional): Custom instructions
stream (boolean, optional): Whether to stream responses
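For example, a client might invoke the tool with arguments like the following (values are illustrative):

```json
{
  "name": "search",
  "arguments": {
    "query": "What is the Model Context Protocol?",
    "focus_mode": "webSearch",
    "optimization_mode": "balanced",
    "system_instructions": "Answer concisely and cite sources.",
    "stream": false
  }
}
```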
Testing
Run the comprehensive test suite to verify all transports:
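The exact entry point ships with the repository; from a source checkout the invocation is typically along these lines (the script name is a placeholder):

```bash
uv run python test_transports.py
```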
This will test:
✓ Stdio transport with MCP protocol handshake
✓ HTTP transport with Streamable HTTP compliance
✓ SSE transport endpoint accessibility
Transport Details
Stdio Transport
Uses FastMCP's built-in stdio server
Supports full MCP protocol including initialization and tool listing
Ideal for MCP client integration
SSE Transport
Server-Sent Events for real-time communication
Endpoint: http://host:port/sse
Includes periodic ping messages for connection health
Streamable HTTP Transport
Compliant with MCP Streamable HTTP specification
Endpoint: http://host:port/mcp
Returns a 307 redirect to /mcp/ as per the protocol
Uses StreamableHTTPSessionManager for proper session handling
Development
The server is built using:
FastMCP: Modern MCP server framework with built-in transport support
Uvicorn: ASGI server for SSE and HTTP transports
httpx: HTTP client for Perplexica API communication
python-dotenv: Environment variable management
Architecture
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contributing
Fork the repository
Create a feature branch (using container-use environments if using Claude Code)
Make your changes (within container-use environment if using Claude Code)
Add tests if applicable
Submit a pull request
If using Claude Code, provide access to your work via container-use log <env_id> and container-use checkout <env_id>
Support
For issues and questions:
Check the troubleshooting section
Review the Perplexica documentation
Open an issue on GitHub