# mcp-perplexity-search
A Model Context Protocol (MCP) server for integrating Perplexity's AI API with LLMs. This server provides advanced chat completion capabilities with specialized prompt templates for various use cases.
<a href="https://glama.ai/mcp/servers/zlqdizpsr9"><img width="380" height="200" src="https://glama.ai/mcp/servers/zlqdizpsr9/badge" /></a>

## Features
- 🤖 Advanced chat completion using Perplexity's AI models
- 📝 Predefined prompt templates for common scenarios:
  - Technical documentation generation
  - Security best practices analysis
  - Code review and improvements
  - API documentation in structured format
- 🎯 Custom template support for specialized use cases
- 📊 Multiple output formats (text, markdown, JSON)
- 🔍 Optional source URL inclusion in responses
- ⚙️ Configurable model parameters (temperature, max tokens)
- 🚀 Support for various Perplexity models including Sonar and LLaMA
## Configuration
This server requires configuration through your MCP client. Here are examples for different environments:
### Cline Configuration
Add this to your Cline MCP settings:
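A minimal sketch, assuming the server is published to npm as `mcp-perplexity-search` and launched via `npx` (adjust the command and args to match your installation):

```json
{
  "mcpServers": {
    "perplexity-search": {
      "command": "npx",
      "args": ["-y", "mcp-perplexity-search"],
      "env": {
        "PERPLEXITY_API_KEY": "your-api-key-here"
      }
    }
  }
}
```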
### Claude Desktop with WSL Configuration
For WSL environments, add this to your Claude Desktop configuration:
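A sketch of the same configuration launched through WSL, assuming the server is started from a bash shell via `wsl.exe` (paths and shell may differ on your system):

```json
{
  "mcpServers": {
    "perplexity-search": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "PERPLEXITY_API_KEY=your-api-key-here npx -y mcp-perplexity-search"
      ]
    }
  }
}
```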
### Environment Variables
The server requires the following environment variable:
- `PERPLEXITY_API_KEY`: Your Perplexity API key (required)
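When running the server directly for local testing (outside an MCP client), the key can be exported in your shell first; the value below is a placeholder:

```bash
# Replace with your actual Perplexity API key
export PERPLEXITY_API_KEY="your-api-key-here"
```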
## API
The server implements a single MCP tool with configurable parameters:
### chat_completion
Generate chat completions using the Perplexity API with support for specialized prompt templates.
Parameters:
- `messages` (array, required): Array of message objects with:
  - `role` (string): 'system', 'user', or 'assistant'
  - `content` (string): The message content
- `prompt_template` (string, optional): Predefined template to use:
  - `technical_docs`: Technical documentation with code examples
  - `security_practices`: Security implementation guidelines
  - `code_review`: Code analysis and improvements
  - `api_docs`: API documentation in JSON format
- `custom_template` (object, optional): Custom prompt template with:
  - `system` (string): System message for assistant behaviour
  - `format` (string): Output format preference
  - `include_sources` (boolean): Whether to include sources
- `format` (string, optional): 'text', 'markdown', or 'json' (default: 'text')
- `include_sources` (boolean, optional): Include source URLs (default: false)
- `model` (string, optional): Perplexity model to use (default: 'sonar')
- `temperature` (number, optional): Output randomness (0-1, default: 0.7)
- `max_tokens` (number, optional): Maximum response length (default: 1024)
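As an illustration, a client could invoke the tool with arguments like the following. Only the tool arguments are shown (the request envelope depends on your MCP client), and the message content is a placeholder:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Review this function for potential issues: function add(a, b) { return a + b }"
    }
  ],
  "prompt_template": "code_review",
  "format": "markdown",
  "include_sources": true,
  "temperature": 0.2
}
```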
## Development
### Setup
- Clone the repository
- Install dependencies
- Build the project
- Run in development mode
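A typical command sequence for these steps, assuming standard npm scripts (`build`, `dev`); check `package.json` for the actual script names and the repository page for the clone URL:

```bash
# Clone the repository (substitute the actual repository URL)
git clone <repository-url>
cd mcp-perplexity-search

# Install dependencies
npm install

# Build the project
npm run build

# Run in development mode
npm run dev
```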
### Publishing
The project uses changesets for version management. To publish:
- Create a changeset
- Version the package
- Publish to npm
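With the standard changesets CLI, these steps map to the following commands:

```bash
# 1. Create a changeset describing your changes
npx changeset

# 2. Apply pending changesets and bump the package version
npx changeset version

# 3. Publish the new version to npm
npx changeset publish
```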
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT License - see the LICENSE file for details.
## Acknowledgments
- Built on the Model Context Protocol
- Powered by Perplexity Sonar