
Perplexity MCP Server

by Rohit-Seelam

perplexity_large

Perform comprehensive research and detailed analysis using deep reasoning for complex queries, technical investigations, and academic topics.

Instructions

Comprehensive research with maximum depth using sonar-deep-research.

Best for: Deep research tasks, comprehensive analysis, complex multi-step reasoning,
academic research, detailed technical investigations.
Uses high reasoning effort and search context size.

WARNING: This tool may take significantly longer (potentially 10-30 minutes) 
and may timeout on very complex queries.

Args:
    query: The question or prompt to send to Perplexity
    messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})

Returns:
    Dictionary with content and citations
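Based on the Returns line above, a successful call yields a plain dictionary with two keys; the values below are illustrative, not real API output:

```python
# Illustrative shape of a successful perplexity_large result
# (field values are made up; only the keys come from the docs above).
sample_result = {
    "content": "Summary of findings ...",
    "citations": [
        "https://example.com/source-1",
        "https://example.com/source-2",
    ],
}

# On failure the tool returns an error dictionary instead:
sample_error = {
    "error": "tool_error",
    "message": "Failed to process query: ...",
}
```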

Input Schema

Name     | Required | Description | Default
-------- | -------- | ----------- | -------
query    | Yes      |             |
messages | No       |             |

Output Schema

Name   | Required | Description | Default
------ | -------- | ----------- | -------
result | Yes      |             |

Implementation Reference

  • The primary handler function for the 'perplexity_large' tool. Decorated with @mcp.tool() for automatic registration in FastMCP. Prepares messages, retrieves large config, calls PerplexityClient.chat_completion, formats response, and handles errors. Includes comprehensive docstring serving as schema description.
    @mcp.tool()
    def perplexity_large(query: str, messages: List[Dict[str, str]] = None) -> Dict[str, Any]:
        """
        Comprehensive research with maximum depth using sonar-deep-research.
        
        Best for: Deep research tasks, comprehensive analysis, complex multi-step reasoning,
        academic research, detailed technical investigations.
        Uses high reasoning effort and search context size.
        
        WARNING: This tool may take significantly longer (potentially 10-30 minutes) 
        and may timeout on very complex queries.
        
        Args:
            query: The question or prompt to send to Perplexity
            messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})
        
        Returns:
            Dictionary with content and citations
        """
        try:
            client = get_perplexity_client()
            
            # Prepare messages
            # Copy to avoid mutating the caller's list
            messages = list(messages) if messages else []
            
            # Add the current query
            messages.append({"role": "user", "content": query})
            
            # Get tool configuration
            config = TOOL_CONFIGS["large"]
            
            # Log warning about potential timeout
            logger.warning("Starting deep research query - this may take 10-30 minutes")
            
            # Make API request
            response = client.chat_completion(messages=messages, **config)
            
            # Format and return response
            return client.format_response(response)
            
        except Exception as e:
            logger.exception("Error in perplexity_large")
            return {
                "error": "tool_error", 
                "message": f"Failed to process query: {str(e)}"
            }
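The optional messages argument carries prior turns in the {"role", "content"} shape the docstring describes. A minimal sketch of building that context before calling the tool (the conversation content is invented):

```python
# Hypothetical prior conversation, in the shape the tool expects.
conversation = [
    {"role": "user", "content": "What is retrieval-augmented generation?"},
    {"role": "assistant", "content": "RAG combines retrieval with generation ..."},
]

# The handler appends the new query as a final user turn:
query = "How does it compare to fine-tuning?"
messages = list(conversation)  # copy so the caller's list is untouched
messages.append({"role": "user", "content": query})
```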
  • Global TOOL_CONFIGS dictionary defining parameters for all tools, specifically TOOL_CONFIGS['large'] configures the sonar-deep-research model with high reasoning effort and search context size for the perplexity_large tool.
    TOOL_CONFIGS = {
        "small": {
            "model": "sonar-pro"
        },
        "medium": {
            "model": "sonar-reasoning-pro",
            "reasoning_effort": "medium",
            "web_search_options": {
                "search_context_size": "medium"
            }
        },
        "large": {
            "model": "sonar-deep-research",
            "reasoning_effort": "high", 
            "web_search_options": {
                "search_context_size": "high"
            }
        }
    }
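Each TOOL_CONFIGS entry is splatted into chat_completion as keyword arguments. A sketch of how the "large" config ends up in the request payload (the config is copied from above; build_payload is a simplified stand-in for the payload construction inside chat_completion):

```python
# Copy of the "large" entry from TOOL_CONFIGS above.
large_config = {
    "model": "sonar-deep-research",
    "reasoning_effort": "high",
    "web_search_options": {"search_context_size": "high"},
}

def build_payload(messages, **kwargs):
    # Simplified mirror of the payload construction in chat_completion:
    # model and any extra parameters merge in via **kwargs.
    return {"messages": messages, **kwargs}

payload = build_payload([{"role": "user", "content": "hi"}], **large_config)
```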
  • PerplexityClient methods chat_completion (invoked with the large config) and format_response (used to process the API response for the tool). Handles HTTP requests to the Perplexity API, error handling, logging, token usage, and response formatting, including removal of <think> blocks emitted by reasoning models such as large.
    def chat_completion(self, messages: List[Dict[str, str]], model: str, **kwargs) -> Dict[str, Any]:
        """
        Send a chat completion request to Perplexity API.
        
        Args:
            messages: List of message objects with role and content
            model: Model name (e.g., "sonar-pro", "sonar-reasoning-pro", "sonar-deep-research")
            **kwargs: Additional parameters for the API request
            
        Returns:
            Dict containing the API response
        """
        try:
            # Prepare request payload
            payload = {
                "model": model,
                "messages": messages,
                **kwargs
            }
            
            # Log request details to stderr
            logger.info(f"Making request to Perplexity API with model: {model}")
            logger.debug(f"Request payload: {payload}")
            
            # Make the API request
            with httpx.Client(timeout=self.timeout) as client:
                response = client.post(
                    f"{self.base_url}/chat/completions",
                    headers=self.headers,
                    json=payload
                )
                
                # Check for HTTP errors
                response.raise_for_status()
                
                # Parse response
                result = response.json()
                
                # Log response details to stderr
                logger.info("Received response from Perplexity API")
                if "usage" in result:
                    usage = result["usage"]
                    logger.info(f"Token usage - Prompt: {usage.get('prompt_tokens', 0)}, "
                              f"Completion: {usage.get('completion_tokens', 0)}, "
                              f"Total: {usage.get('total_tokens', 0)}")
                
                return result
                
        except httpx.HTTPStatusError as e:
            logger.error(f"HTTP error from Perplexity API: {e.response.status_code}")
            logger.error(f"Response content: {e.response.text}")
            return {
                "error": f"HTTP {e.response.status_code}",
                "message": f"API request failed: {e.response.text}"
            }
            
        except httpx.TimeoutException:
            logger.error(f"Request timeout after {self.timeout} seconds")
            return {
                "error": "timeout",
                "message": f"Request timed out after {self.timeout} seconds"
            }
            
        except Exception as e:
            logger.exception("Unexpected error in chat_completion")
            return {
                "error": "unexpected_error",
                "message": f"Unexpected error: {str(e)}"
            }
    
    def format_response(self, api_response: Dict[str, Any]) -> Dict[str, Any]:
        """
        Format API response for MCP tool return.
        
        Args:
            api_response: Raw API response from Perplexity
            
        Returns:
            Formatted response for MCP tool with only content and citations
        """
        # Handle error responses
        if "error" in api_response:
            return api_response
        
        try:
            # Extract main content
            content = ""
            if "choices" in api_response and api_response["choices"]:
                content = api_response["choices"][0]["message"]["content"]
                
                # Remove <think>...</think> sections for reasoning models
                # This removes the thinking tokens that appear in medium/large responses
                content = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL)
                
                # Clean up any extra whitespace left after removing think tags
                content = re.sub(r'\n\s*\n\s*\n', '\n\n', content)
                content = content.strip()
            
            # Format response - only include content and citations
            formatted = {
                "content": content,
                "citations": api_response.get("citations", [])
            }
            
            return formatted
            
        except Exception as e:
            logger.exception("Error formatting API response")
            return {
                "error": "format_error",
                "message": f"Failed to format response: {str(e)}"
            }
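The <think>-stripping in format_response can be checked in isolation. The substitutions below are the same two applied above; the sample reasoning-model output is invented:

```python
import re

# Invented reasoning-model output containing a <think> block.
raw = (
    "<think>\nLet me reason about this...\n</think>\n\n\n"
    "Final answer: 42 is the value.\n"
)

# Strip the thinking tokens, then collapse the leftover blank lines.
content = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
content = re.sub(r"\n\s*\n\s*\n", "\n\n", content)
content = content.strip()
# → "Final answer: 42 is the value."
```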
  • Utility function to lazily initialize and retrieve the shared PerplexityClient instance used by all perplexity_* tools including perplexity_large.
    def get_perplexity_client() -> PerplexityClient:
        """Get or create the Perplexity client instance."""
        global perplexity_client
        if perplexity_client is None:
            perplexity_client = PerplexityClient()
        return perplexity_client
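The lazy-initialization pattern above can be sketched generically (DummyClient is a stand-in for PerplexityClient, whose real constructor reads the API key):

```python
class DummyClient:
    """Stand-in for PerplexityClient; the real one needs an API key."""
    pass

_client = None

def get_client():
    # Create the client on first use, then reuse the same instance.
    global _client
    if _client is None:
        _client = DummyClient()
    return _client

a = get_client()
b = get_client()  # same object as a
```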
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool 'may take significantly longer (potentially 10-30 minutes) and may timeout on very complex queries,' uses 'high reasoning effort and search context size,' and returns 'Dictionary with content and citations.' This covers execution time, resource usage, and output format without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidelines, behavioral warnings, and parameter explanations. Every sentence adds value: the first defines the tool, the second specifies use cases, the third details behavioral traits, and the last sections document parameters and returns. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deep research with potential timeouts), no annotations, and an output schema present (which handles return values), the description is complete. It covers purpose, usage, behavioral transparency (including critical timeout warnings), and parameter semantics, providing all necessary context for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful context for both parameters: 'query: The question or prompt to send to Perplexity' and 'messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})'. This clarifies the purpose and format of each parameter beyond the bare schema, though it doesn't provide exhaustive details like message structure constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Comprehensive research with maximum depth using sonar-deep-research' and specifies it's for 'Deep research tasks, comprehensive analysis, complex multi-step reasoning, academic research, detailed technical investigations.' This provides a specific verb (research) with clear scope and distinguishes it from sibling tools (perplexity_medium, perplexity_small) by emphasizing maximum depth and comprehensive analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Best for: Deep research tasks, comprehensive analysis, complex multi-step reasoning, academic research, detailed technical investigations' and includes a WARNING about longer execution times. This provides clear guidance on when to use this tool versus alternatives (implied to be the other perplexity tools for less intensive tasks) and when not to use it (time-sensitive queries).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
