
Perplexity MCP Server

by Rohit-Seelam

perplexity_large

Perform comprehensive research and detailed analysis using deep reasoning for complex queries, technical investigations, and academic topics.

Instructions

Comprehensive research with maximum depth using sonar-deep-research.

Best for: Deep research tasks, comprehensive analysis, complex multi-step reasoning,
academic research, detailed technical investigations.
Uses high reasoning effort and search context size.

WARNING: This tool may take significantly longer (potentially 10-30 minutes) 
and may timeout on very complex queries.

Args:
    query: The question or prompt to send to Perplexity
    messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})

Returns:
    Dictionary with content and citations
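
As a sketch, a call that carries prior conversation context might build its arguments like this (the query and context text below are illustrative, not part of the server's API surface):

```python
# Hypothetical arguments for the perplexity_large tool.
tool_args = {
    "query": "Summarize the trade-offs you just described as a table.",
    "messages": [
        {"role": "user", "content": "Compare QUIC and TCP for mobile clients."},
        {"role": "assistant", "content": "QUIC avoids head-of-line blocking..."},
    ],
}

# Every context entry must carry a "role" of "user" or "assistant".
assert all(m["role"] in ("user", "assistant") for m in tool_args["messages"])
```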

Input Schema

Name      Required  Description
query     Yes       The question or prompt to send to Perplexity
messages  No        Optional conversation context (default: none)

Output Schema

Name    Required  Description
result  Yes       Dictionary with content and citations

Implementation Reference

  • The primary handler function for the 'perplexity_large' tool. Decorated with @mcp.tool() for automatic registration in FastMCP. Prepares messages, retrieves large config, calls PerplexityClient.chat_completion, formats response, and handles errors. Includes comprehensive docstring serving as schema description.
    @mcp.tool()
    def perplexity_large(query: str, messages: List[Dict[str, str]] = None) -> Dict[str, Any]:
        """
        Comprehensive research with maximum depth using sonar-deep-research.
        
        Best for: Deep research tasks, comprehensive analysis, complex multi-step reasoning,
        academic research, detailed technical investigations.
        Uses high reasoning effort and search context size.
        
        WARNING: This tool may take significantly longer (potentially 10-30 minutes) 
        and may timeout on very complex queries.
        
        Args:
            query: The question or prompt to send to Perplexity
            messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})
        
        Returns:
            Dictionary with content and citations
        """
        try:
            client = get_perplexity_client()
            
            # Prepare messages (copy so repeated calls never mutate the caller's list)
            messages = list(messages) if messages else []
            
            # Add the current query
            messages.append({"role": "user", "content": query})
            
            # Get tool configuration
            config = TOOL_CONFIGS["large"]
            
            # Log warning about potential timeout
            logger.warning("Starting deep research query - this may take 10-30 minutes")
            
            # Make API request
            response = client.chat_completion(messages=messages, **config)
            
            # Format and return response
            return client.format_response(response)
            
        except Exception as e:
            logger.exception("Error in perplexity_large")
            return {
                "error": "tool_error", 
                "message": f"Failed to process query: {str(e)}"
            }
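
The message-preparation step above can be exercised in isolation; `prepare_messages` is a hypothetical helper written for this sketch, not a function in the server:

```python
from typing import Dict, List, Optional

def prepare_messages(query: str,
                     messages: Optional[List[Dict[str, str]]] = None) -> List[Dict[str, str]]:
    # Copy the context so repeated calls never mutate the caller's list,
    # then append the current query as the final user turn.
    msgs = list(messages) if messages else []
    msgs.append({"role": "user", "content": query})
    return msgs

history = [{"role": "user", "content": "hi"}]
prepared = prepare_messages("follow-up", history)
assert prepared[-1] == {"role": "user", "content": "follow-up"}
assert len(history) == 1  # original context is untouched
```
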
  • Global TOOL_CONFIGS dictionary defining parameters for all tools, specifically TOOL_CONFIGS['large'] configures the sonar-deep-research model with high reasoning effort and search context size for the perplexity_large tool.
    TOOL_CONFIGS = {
        "small": {
            "model": "sonar-pro"
        },
        "medium": {
            "model": "sonar-reasoning-pro",
            "reasoning_effort": "medium",
            "web_search_options": {
                "search_context_size": "medium"
            }
        },
        "large": {
            "model": "sonar-deep-research",
            "reasoning_effort": "high", 
            "web_search_options": {
                "search_context_size": "high"
            }
        }
    }
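
How a `TOOL_CONFIGS` entry expands into `chat_completion` keyword arguments can be sketched with a local copy of the large config (the payload shape below mirrors the code, but is an illustration, not the API's full request schema):

```python
# Local copy of TOOL_CONFIGS["large"] so the sketch is self-contained.
large_config = {
    "model": "sonar-deep-research",
    "reasoning_effort": "high",
    "web_search_options": {"search_context_size": "high"},
}

messages = [{"role": "user", "content": "example"}]

# chat_completion(messages=messages, **large_config) merges the config
# into the request payload via keyword unpacking:
payload = {"messages": messages, **large_config}

assert payload["model"] == "sonar-deep-research"
assert payload["web_search_options"]["search_context_size"] == "high"
```
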
  • PerplexityClient methods chat_completion (invoked with large config) and format_response (used to process the API response for the tool). Handles HTTP requests to Perplexity API, error handling, logging, token usage, and response formatting including removal of <think> blocks specific to reasoning models like large.
    def chat_completion(self, messages: List[Dict[str, str]], model: str, **kwargs) -> Dict[str, Any]:
        """
        Send a chat completion request to Perplexity API.
        
        Args:
            messages: List of message objects with role and content
            model: Model name (e.g., "sonar-pro", "sonar-reasoning-pro", "sonar-deep-research")
            **kwargs: Additional parameters for the API request
            
        Returns:
            Dict containing the API response
        """
        try:
            # Prepare request payload
            payload = {
                "model": model,
                "messages": messages,
                **kwargs
            }
            
            # Log request details to stderr
            logger.info(f"Making request to Perplexity API with model: {model}")
            logger.debug(f"Request payload: {payload}")
            
            # Make the API request
            with httpx.Client(timeout=self.timeout) as client:
                response = client.post(
                    f"{self.base_url}/chat/completions",
                    headers=self.headers,
                    json=payload
                )
                
                # Check for HTTP errors
                response.raise_for_status()
                
                # Parse response
                result = response.json()
                
                # Log response details to stderr
                logger.info("Received response from Perplexity API")
                if "usage" in result:
                    usage = result["usage"]
                    logger.info(f"Token usage - Prompt: {usage.get('prompt_tokens', 0)}, "
                              f"Completion: {usage.get('completion_tokens', 0)}, "
                              f"Total: {usage.get('total_tokens', 0)}")
                
                return result
                
        except httpx.HTTPStatusError as e:
            logger.error(f"HTTP error from Perplexity API: {e.response.status_code}")
            logger.error(f"Response content: {e.response.text}")
            return {
                "error": f"HTTP {e.response.status_code}",
                "message": f"API request failed: {e.response.text}"
            }
            
        except httpx.TimeoutException:
            logger.error(f"Request timeout after {self.timeout} seconds")
            return {
                "error": "timeout",
                "message": f"Request timed out after {self.timeout} seconds"
            }
            
        except Exception as e:
            logger.exception("Unexpected error in chat_completion")
            return {
                "error": "unexpected_error",
                "message": f"Unexpected error: {str(e)}"
            }
    
    def format_response(self, api_response: Dict[str, Any]) -> Dict[str, Any]:
        """
        Format API response for MCP tool return.
        
        Args:
            api_response: Raw API response from Perplexity
            
        Returns:
            Formatted response for MCP tool with only content and citations
        """
        # Handle error responses
        if "error" in api_response:
            return api_response
        
        try:
            # Extract main content
            content = ""
            if "choices" in api_response and api_response["choices"]:
                content = api_response["choices"][0]["message"]["content"]
                
                # Remove <think>...</think> sections for reasoning models
                # This removes the thinking tokens that appear in medium/large responses
                content = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL)
                
                # Clean up any extra whitespace left after removing think tags
                content = re.sub(r'\n\s*\n\s*\n', '\n\n', content)
                content = content.strip()
            
            # Format response - only include content and citations
            formatted = {
                "content": content,
                "citations": api_response.get("citations", [])
            }
            
            return formatted
            
        except Exception as e:
            logger.exception("Error formatting API response")
            return {
                "error": "format_error",
                "message": f"Failed to format response: {str(e)}"
            }
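
The `<think>`-stripping and whitespace cleanup in `format_response` can be exercised on a synthetic response (the text is made up):

```python
import re

raw = "<think>chain of thought goes here</think>\n\n\n\nFinal answer with citations."

# Drop the reasoning block, then collapse the run of blank lines it leaves behind.
content = re.sub(r'<think>.*?</think>', '', raw, flags=re.DOTALL)
content = re.sub(r'\n\s*\n\s*\n', '\n\n', content)
content = content.strip()

assert content == "Final answer with citations."
```
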
  • Utility function to lazily initialize and retrieve the shared PerplexityClient instance used by all perplexity_* tools including perplexity_large.
    def get_perplexity_client() -> PerplexityClient:
        """Get or create the Perplexity client instance."""
        global perplexity_client
        if perplexity_client is None:
            perplexity_client = PerplexityClient()
        return perplexity_client
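
The lazy-singleton pattern used by `get_perplexity_client` can be sketched with a stand-in class (`StubClient` replaces `PerplexityClient` so the snippet runs without an API key):

```python
class StubClient:
    """Stand-in for PerplexityClient; construction is deferred until first use."""
    pass

_client = None

def get_client() -> StubClient:
    global _client
    if _client is None:
        _client = StubClient()  # built once, on the first call
    return _client

# Every caller receives the same shared instance.
assert get_client() is get_client()
```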


MCP directory API

All information about MCP servers is available via our MCP directory API:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Rohit-Seelam/Perplexity_MCP'
