MCP Perplexica
MCP server proxy for Perplexica search API.
This server allows LLMs to perform web searches through Perplexica using the Model Context Protocol (MCP).
Features
Web search through Perplexica
Multiple focus modes (web, academic, YouTube, Reddit, etc.)
Configurable optimization modes (speed, balanced, quality)
Customizable model configuration
Source citations in responses
Multiple transport modes (stdio, SSE, Streamable HTTP)
Prerequisites
Python 3.11+
UV package manager
Running Perplexica instance
Installation
Clone the repository:
git clone https://github.com/Kaiohz/mcp-perplexica.git
cd mcp-perplexica
Install dependencies with UV:
uv sync
Create your environment file:
cp .env.example .env
Edit .env with your configuration:
# Perplexica API
PERPLEXICA_URL=http://localhost:3000
# Transport: stdio (default), sse, or streamable-http
TRANSPORT=stdio
HOST=127.0.0.1
PORT=8000
# Model configuration
DEFAULT_CHAT_MODEL_PROVIDER_ID=your-provider-id
DEFAULT_CHAT_MODEL_KEY=anthropic/claude-sonnet-4.5
DEFAULT_EMBEDDING_MODEL_PROVIDER_ID=your-provider-id
DEFAULT_EMBEDDING_MODEL_KEY=openai/text-embedding-3-small
Usage
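The environment file maps one-to-one onto a settings object. Below is a stdlib-only illustration of that idea; the repo's actual config.py uses Pydantic Settings (see Architecture), so this class and its defaults are a sketch, not the real implementation:

```python
import os
from dataclasses import dataclass
from typing import Mapping


@dataclass(frozen=True)
class Settings:
    """Illustrative settings object mirroring the .env keys above."""
    perplexica_url: str
    transport: str
    host: str
    port: int

    @classmethod
    def from_env(cls, env: Mapping[str, str] = os.environ) -> "Settings":
        # Fall back to the defaults shown in the .env example above.
        return cls(
            perplexica_url=env.get("PERPLEXICA_URL", "http://localhost:3000"),
            transport=env.get("TRANSPORT", "stdio"),
            host=env.get("HOST", "127.0.0.1"),
            port=int(env.get("PORT", "8000")),
        )
```

Passing a mapping instead of reading `os.environ` directly keeps the parsing testable without mutating process state.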
Transport Modes
The server supports three transport modes:
| Transport | Description | Use Case |
|---|---|---|
| stdio | Standard input/output | CLI tools, Claude Desktop |
| sse | Server-Sent Events over HTTP | Web clients |
| streamable-http | Streamable HTTP (recommended for production) | Production deployments |
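The TRANSPORT variable could be dispatched on at startup along these lines. This is a hypothetical sketch: `server` stands in for a FastMCP-style instance, and the exact `run()` signature is an assumption, not the repo's verified API.

```python
def run_with_transport(server, transport: str, host: str, port: int) -> None:
    """Select a transport mode based on the TRANSPORT setting (illustrative)."""
    if transport == "stdio":
        # Local process over stdin/stdout, e.g. for Claude Desktop.
        server.run(transport="stdio")
    elif transport in ("sse", "streamable-http"):
        # HTTP-based transports need a listening address.
        server.run(transport=transport, host=host, port=port)
    else:
        raise ValueError(f"unsupported transport: {transport!r}")
```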
Running with Docker Compose
The easiest way to run both Perplexica and MCP Perplexica together:
# Copy and configure environment files
cp .env.example .env
cp .env.perplexica.example .env.perplexica
# Edit .env with your MCP Perplexica settings
# Edit .env.perplexica with your Perplexica settings
# Start services
docker compose up -d
This starts:
Perplexica on http://localhost:3000
MCP Perplexica connected to Perplexica
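After `docker compose up -d`, you may want to confirm Perplexica is answering before pointing clients at it. A stdlib-only readiness probe is sketched below; which endpoint responds first is an assumption, so any non-5xx answer is treated as "up":

```python
import urllib.request
from urllib.error import HTTPError


def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with anything below a 5xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except HTTPError as exc:
        # A 4xx still proves the server is listening.
        return exc.code < 500
    except OSError:
        # Connection refused, DNS failure, or timeout.
        return False
```

Usage: `is_up("http://localhost:3000")` in a retry loop until it returns True.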
Running the MCP Server (without Docker)
Stdio mode (default)
uv run python src/main.py
SSE mode
TRANSPORT=sse PORT=8000 uv run python src/main.py
Streamable HTTP mode
TRANSPORT=streamable-http PORT=8000 uv run python src/main.py
Claude Desktop Configuration
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"perplexica": {
"command": "uv",
"args": ["run", "--directory", "/path/to/mcp-perplexica", "python", "-m", "main"],
"env": {
"PERPLEXICA_URL": "http://localhost:3000",
"TRANSPORT": "stdio",
"DEFAULT_CHAT_MODEL_PROVIDER_ID": "your-provider-id",
"DEFAULT_CHAT_MODEL_KEY": "anthropic/claude-sonnet-4.5",
"DEFAULT_EMBEDDING_MODEL_PROVIDER_ID": "your-provider-id",
"DEFAULT_EMBEDDING_MODEL_KEY": "openai/text-embedding-3-small"
}
}
}
}
Claude Code Configuration
For HTTP-based transports, you can add the server to Claude Code:
# Start the server with streamable-http transport
TRANSPORT=streamable-http PORT=8000 uv run python -m main
# Add to Claude Code
claude mcp add --transport http perplexica http://localhost:8000/mcp
Available Tools
search
Perform a web search using Perplexica.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| | string | Yes | The search query |
| | string | No | Search focus (web, academic, YouTube, Reddit, etc.) |
| | string | No | Optimization mode (speed, balanced, quality) |
| | string | No | Custom instructions for the AI response |
| | string | No | Override the default chat model provider |
| | string | No | Override the default chat model |
| | string | No | Override the default embedding provider |
| | string | No | Override the default embedding model |
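A small helper for assembling arguments to the `search` tool is sketched below. The real parameter names were lost from the table above, so the keys here are guesses; the allowed focus and optimization values come from the Features section:

```python
# Per the Features section; the focus list is not exhaustive.
FOCUS_MODES = {"web", "academic", "youtube", "reddit"}
OPTIMIZATION_MODES = {"speed", "balanced", "quality"}


def build_search_args(query: str, focus: str = "web",
                      optimization: str = "balanced") -> dict:
    """Validate and assemble hypothetical arguments for the search tool."""
    if focus not in FOCUS_MODES:
        raise ValueError(f"unknown focus mode: {focus!r}")
    if optimization not in OPTIMIZATION_MODES:
        raise ValueError(f"unknown optimization mode: {optimization!r}")
    return {"query": query, "focus": focus, "optimization": optimization}
```

Check the tool's actual schema (e.g. via your MCP client's tool listing) for the authoritative parameter names.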
Example:
Search for "latest developments in AI" using academic focus
Development
Install dev dependencies
uv sync --dev
Run tests
uv run pytest
Run linter
uv run ruff check .
uv run ruff format .
uv run black src/
Architecture
This project follows hexagonal architecture:
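In hexagonal terms, the domain layer defines ports (ABC interfaces) and the infrastructure layer supplies adapters that implement them. A minimal illustrative sketch, with class and field names invented for the example rather than copied from the repo:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class SearchResult:
    """Domain entity (cf. domain/entities.py)."""
    answer: str
    sources: list = field(default_factory=list)


class SearchPort(ABC):
    """Domain port (cf. domain/ports.py): what the business core needs."""
    @abstractmethod
    def search(self, query: str) -> SearchResult: ...


class InMemoryAdapter(SearchPort):
    """Test double standing in for the HTTP adapter in infrastructure/perplexica/."""
    def search(self, query: str) -> SearchResult:
        return SearchResult(answer=f"stub answer for {query!r}")
```

The payoff of this split: use cases depend only on `SearchPort`, so the Perplexica HTTP client can be swapped for a test double like `InMemoryAdapter` without touching business logic.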
src/
├── main.py              # MCP server entry point
├── config.py            # Pydantic Settings
├── dependencies.py      # Dependency injection
├── domain/              # Business core (pure Python)
│   ├── entities.py      # Dataclasses
│   └── ports.py         # ABC interfaces
├── application/         # Use cases
│   ├── requests.py      # Pydantic DTOs
│   └── use_cases.py     # Business logic
└── infrastructure/      # External adapters
    └── perplexica/
        └── adapter.py   # HTTP client
License
MIT