## Server Configuration
Describes the environment variables required to run the server; a minimal startup check is sketched after the table.
| Name | Required | Description | Default |
|---|---|---|---|
| PERPLEXITY_API_KEY | Yes | Your Perplexity API key from the settings page (https://www.perplexity.ai/settings/api) | |
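As a rough sketch only (this is not the server's actual startup code), the required variable can be validated before the server is launched; the check below is an illustration under that assumption:

```python
import os
import sys

# Hypothetical pre-flight check: the server requires PERPLEXITY_API_KEY in its
# environment and cannot reach the Perplexity API without it.
api_key = os.environ.get("PERPLEXITY_API_KEY")
if not api_key:
    sys.exit(
        "PERPLEXITY_API_KEY is not set. Create a key at "
        "https://www.perplexity.ai/settings/api and export it before starting "
        "the server, e.g. export PERPLEXITY_API_KEY=pplx-..."
    )
print("PERPLEXITY_API_KEY found; the server can be started.")
```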
## Tools
Functions exposed to the LLM so it can take actions; a client-side call sketch follows the table.
| Name | Description |
|---|---|
| perplexity_small | Quick and reliable queries using Perplexity's sonar-pro model.<br>Best for: fast factual questions, basic research, immediate answers.<br>Uses default parameters for optimal speed and cost-effectiveness.<br>Args:<br>query: the question or prompt to send to Perplexity<br>messages: optional conversation context (list of {"role": "user/assistant", "content": "..."})<br>Returns: dictionary with content and citations |
| perplexity_medium | Enhanced reasoning with moderate search depth using sonar-reasoning-pro.<br>Best for: complex questions requiring analysis, moderate research depth, technical explanations with citations.<br>Uses medium reasoning effort and search context size.<br>Args:<br>query: the question or prompt to send to Perplexity<br>messages: optional conversation context (list of {"role": "user/assistant", "content": "..."})<br>Returns: dictionary with content and citations |
| perplexity_large | Comprehensive research with maximum depth using sonar-deep-research.<br>Best for: deep research tasks, comprehensive analysis, complex multi-step reasoning, academic research, detailed technical investigations.<br>Uses high reasoning effort and search context size.<br>WARNING: this tool may take significantly longer (potentially 10-30 minutes) and may time out on very complex queries.<br>Args:<br>query: the question or prompt to send to Perplexity<br>messages: optional conversation context (list of {"role": "user/assistant", "content": "..."})<br>Returns: dictionary with content and citations |
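A minimal client-side sketch of invoking one of these tools, assuming the server is started over stdio with the official mcp Python SDK; the launch command perplexity-mcp and the placeholder key are assumptions for illustration, not part of this server's documented interface:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the server; substitute however you run it.
    server = StdioServerParameters(
        command="perplexity-mcp",
        env={"PERPLEXITY_API_KEY": "pplx-..."},  # see Server Configuration above
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # perplexity_small takes a query plus optional conversation context.
            result = await session.call_tool(
                "perplexity_small",
                arguments={
                    "query": "What is the Model Context Protocol?",
                    "messages": [
                        {"role": "user", "content": "Keep answers short."},
                    ],
                },
            )
            # The tool returns a dictionary with content and citations.
            print(result.content)


asyncio.run(main())
```

For perplexity_large, note the warning above: calls can run for tens of minutes, so whatever timeout the client applies to tool calls will likely need to be raised well beyond typical defaults.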
## Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| No prompts | |
## Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| No resources | |