# Perplexity MCP Server

A FastMCP server that integrates with Perplexity's API to provide web search and grounded AI answers.
## Features

### Three-Tier Research Workflow

1. **`search`** - Ground yourself first
   - Find relevant sources before asking questions
   - Returns URLs, titles, and snippets
   - Use this when you don't know about a topic
2. **`ask`** - Get AI answers (default)
   - AI-synthesized answers with web grounding
   - Uses the `sonar` model (fast and cost-effective)
   - Includes citations and optional images/related questions
3. **`ask_more`** - Dig deeper
   - More comprehensive analysis for complex questions
   - Uses the `sonar-pro` model (more capable but pricier)
   - Use when `ask` doesn't provide sufficient depth
## Prerequisites

- Python 3.10 or higher
- uv (recommended) or pip
## Local Setup

### 1. Install Dependencies

Using uv (recommended):

```bash
uv pip install -e .
```

Or using pip:

```bash
pip install -e .
```

### 2. Configure API Key

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` and add your Perplexity API key:

```
PERPLEXITY_API_KEY=your_api_key_here
```

### 3. Run the Server

Test the server locally:

```bash
uv run fastmcp run server.py
```

Or with the fastmcp CLI:

```bash
fastmcp run server.py
```

### 4. Install in Claude Desktop

Install the server for use with Claude Desktop:

```bash
fastmcp install claude-desktop server.py
```

Or manually add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "perplexity": {
      "command": "uv",
      "args": ["run", "fastmcp", "run", "/absolute/path/to/server.py"],
      "env": {
        "PERPLEXITY_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

## Cloud Deployment (FastMCP Cloud)
Deploy to [fastmcp.cloud](https://fastmcp.cloud) for easy hosting:

### 1. Push to GitHub

```bash
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/yourusername/perplexity-mcp.git
git push -u origin main
```

### 2. Deploy on FastMCP Cloud

1. Visit [fastmcp.cloud](https://fastmcp.cloud)
2. Sign in with GitHub
3. Create a new project and connect your repo
4. Configure:
   - Entrypoint: `server.py`
   - Environment Variables: add `PERPLEXITY_API_KEY`
5. Deploy!

Your server will be available at `https://your-project-name.fastmcp.app/mcp`.
FastMCP Cloud automatically:

- ✅ Detects dependencies from `pyproject.toml`
- ✅ Deploys on every push to `main`
- ✅ Creates preview deployments for PRs
- ✅ Handles HTTP transport and authentication
## Tool Usage Guide

### Research Workflow Example

```python
# 1. Don't know about a topic? → Use search()
search("latest AI research papers on transformers")

# 2. Found sources? → Use ask() to understand
ask("What are the key innovations in transformer models?")

# 3. Need more depth? → Use ask_more()
ask_more("Explain the mathematical foundations of attention mechanisms in transformers")
```

### Tool Parameters
#### `search(query, max_results=10, recency=None, domain_filter=None)`

- `query`: Search query string
- `max_results`: Number of results (default: 10)
- `recency`: Filter by time - `"day"`, `"week"`, `"month"`, or `"year"`
- `domain_filter`: Include/exclude domains
  - Include: `["wikipedia.org", "github.com"]`
  - Exclude: `["-reddit.com", "-pinterest.com"]`
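The include/exclude convention above can be sketched as a small parsing helper. This is a hypothetical illustration of the filter syntax, not the server's actual code:

```python
def split_domain_filter(domains):
    """Split a domain_filter list into include and exclude lists.

    Entries prefixed with "-" are exclusions, mirroring the syntax
    documented above. (Illustrative helper only.)
    """
    include = [d for d in domains if not d.startswith("-")]
    exclude = [d[1:] for d in domains if d.startswith("-")]
    return include, exclude

# Mixed filters are allowed in a single list:
split_domain_filter(["wikipedia.org", "-reddit.com", "-pinterest.com"])
```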
#### `ask(query, reasoning_effort="medium", ...)`

- `query`: Question to ask
- `reasoning_effort`: `"low"`, `"medium"` (default), or `"high"`
- `search_mode`: `"web"` (default), `"academic"`, or `"sec"`
- `recency`: Time filter
- `domain_filter`: Domain filter
- `return_images`: Include images (default: False)
- `return_related_questions`: Include follow-up questions (default: False)

#### `ask_more(query, reasoning_effort="medium", ...)`

Same parameters as `ask()`, but uses the more powerful `sonar-pro` model.
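Functionally, the difference between the two tools reduces to which model the request uses; a hypothetical sketch of that mapping:

```python
# Tool-to-model mapping, per the feature list above (illustrative only).
MODELS = {"ask": "sonar", "ask_more": "sonar-pro"}

def model_for(tool: str) -> str:
    """Return the Perplexity model a tool would request."""
    return MODELS[tool]
```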
## Cost Optimization

1. Start with `search`: a free/cheap way to find sources
2. Default to `ask`: uses `sonar` (cost-effective)
3. Escalate to `ask_more` only when needed (more expensive)
## Development

### Project Structure

```
perplexity-mcp/
├── server.py          # Main FastMCP server
├── pyproject.toml     # Dependencies
├── .env.example       # Environment template
└── README.md          # This file
```

### Inspect the Server

See what FastMCP Cloud will see:

```bash
fastmcp inspect server.py
```

## API Reference
This server uses two Perplexity API endpoints:

- Search API (`/search`) - Returns ranked search results
- Chat Completions API (`/chat/completions`) - Returns AI-generated answers

Supported models:

- `sonar` - Fast, cost-effective
- `sonar-pro` - More comprehensive
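As a rough illustration of a Chat Completions call, here is a stdlib-only sketch. The payload follows the OpenAI-style schema that Perplexity's endpoint accepts; the server's actual request code may differ:

```python
import json
import os
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(query: str, model: str = "sonar") -> dict:
    """Build an OpenAI-style chat payload for the given query and model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": query}],
    }

def ask(query: str, model: str = "sonar") -> str:
    """Send a question and return the answer text (requires a valid key)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(query, model)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```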
## Troubleshooting

### API Key Issues

If you get authentication errors:

1. Verify your API key at https://www.perplexity.ai/settings/api
2. Check that `PERPLEXITY_API_KEY` is set correctly
3. Make sure there are no extra spaces or quotes

### Timeout Errors

If requests time out:

- The default timeout is 30s for search and 60s for chat
- Complex questions may take longer
- Consider using `reasoning_effort="low"` for faster responses
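The per-endpoint defaults above can be expressed as a tiny lookup (hypothetical sketch, not the server's actual code):

```python
# Default request timeouts in seconds, per the values documented above.
TIMEOUTS = {"search": 30, "chat": 60}

def timeout_for(endpoint: str) -> int:
    """Return the default timeout for an endpoint, falling back to 60s."""
    return TIMEOUTS.get(endpoint, 60)
```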
### Local Testing

Test individual tools:

```bash
uv run fastmcp dev server.py
```

This opens an interactive development interface.
## License

MIT

## Contributing

Contributions welcome! Please open an issue or PR.