Perplexity MCP Server

by Rohit-Seelam

perplexity_medium

Analyze complex questions with moderate research depth, providing technical explanations and citations for informed decision-making.

Instructions

Enhanced reasoning with moderate search depth using sonar-reasoning-pro.

Best for: Complex questions requiring analysis, moderate research depth, technical explanations with citations.
Uses medium reasoning effort and search context size.

Args:
    query: The question or prompt to send to Perplexity
    messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})

Returns:
    Dictionary with content and citations
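
A hedged illustration of what the two arguments might look like in practice; the query text and prior turns below are invented for the example:

    query = "How does HTTP/3 differ from HTTP/2 on high-latency links?"
    messages = [
        {"role": "user", "content": "What transport does HTTP/3 use?"},
        {"role": "assistant", "content": "HTTP/3 runs over QUIC, which is built on UDP."},
    ]
    # The tool appends {"role": "user", "content": query} to messages before calling the API.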

Input Schema

Name      Required  Description  Default
query     Yes       -            -
messages  No        -            -

Output Schema

Name    Required  Description  Default
result  Yes       -            -
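
A hedged sketch of the dictionary a successful call returns, based on format_response in the implementation reference below; the content text and citation URLs are placeholders, and the schema appears to wrap this value in a required "result" field:

    {
        "content": "HTTP/3 replaces TCP with QUIC, which reduces head-of-line blocking ...",
        "citations": [
            "https://example.com/source-1",
            "https://example.com/source-2"
        ]
    }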

Implementation Reference

  • The main handler function for the 'perplexity_medium' tool. It prepares messages, fetches the medium configuration from TOOL_CONFIGS, calls the PerplexityClient's chat_completion, formats the response, and handles errors.
    @mcp.tool()
    def perplexity_medium(query: str, messages: List[Dict[str, str]] = None) -> Dict[str, Any]:
        """
        Enhanced reasoning with moderate search depth using sonar-reasoning-pro.
        
        Best for: Complex questions requiring analysis, moderate research depth, 
        technical explanations with citations.
        Uses medium reasoning effort and search context size.
        
        Args:
            query: The question or prompt to send to Perplexity
            messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})
        
        Returns:
            Dictionary with content and citations
        """
        try:
            client = get_perplexity_client()
            
            # Prepare messages
            if messages is None:
                messages = []
            
            # Add the current query
            messages.append({"role": "user", "content": query})
            
            # Get tool configuration
            config = TOOL_CONFIGS["medium"]
            
            # Make API request
            response = client.chat_completion(messages=messages, **config)
            
            # Format and return response
            return client.format_response(response)
            
        except Exception as e:
            logger.exception("Error in perplexity_medium")
            return {
                "error": "tool_error",
                "message": f"Failed to process query: {str(e)}"
            }
  • Schema/configuration defining parameters for the perplexity_medium tool, including model, reasoning effort, and web search options.
    "medium": {
        "model": "sonar-reasoning-pro",
        "reasoning_effort": "medium",
        "web_search_options": {
            "search_context_size": "medium"
        }
    },
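
For illustration, combining this configuration with a single user message, the payload that chat_completion posts to /chat/completions would look roughly like this; the message content is a placeholder:

    {
        "model": "sonar-reasoning-pro",
        "messages": [
            {"role": "user", "content": "How does HTTP/3 differ from HTTP/2?"}
        ],
        "reasoning_effort": "medium",
        "web_search_options": {"search_context_size": "medium"}
    }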
  • The PerplexityClient class providing chat_completion and format_response methods, which perform the actual API interaction and response processing used by perplexity_medium.
    class PerplexityClient:
        """Client for interacting with Perplexity API."""
        
        def __init__(self):
            """Initialize the Perplexity client."""
            self.api_key = get_api_key()
            self.base_url = PERPLEXITY_BASE_URL
            self.timeout = PERPLEXITY_TIMEOUT
            self.headers = {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            }
        
        def chat_completion(self, messages: List[Dict[str, str]], model: str, **kwargs) -> Dict[str, Any]:
            """
            Send a chat completion request to Perplexity API.
            
            Args:
                messages: List of message objects with role and content
                model: Model name (e.g., "sonar-pro", "sonar-reasoning-pro", "sonar-deep-research")
                **kwargs: Additional parameters for the API request
                
            Returns:
                Dict containing the API response
            """
            try:
                # Prepare request payload
                payload = {
                    "model": model,
                    "messages": messages,
                    **kwargs
                }
                
                # Log request details to stderr
                logger.info(f"Making request to Perplexity API with model: {model}")
                logger.debug(f"Request payload: {payload}")
                
                # Make the API request
                with httpx.Client(timeout=self.timeout) as client:
                    response = client.post(
                        f"{self.base_url}/chat/completions",
                        headers=self.headers,
                        json=payload
                    )
                    
                    # Check for HTTP errors
                    response.raise_for_status()
                    
                    # Parse response
                    result = response.json()
                    
                    # Log response details to stderr
                    logger.info(f"Received response from Perplexity API")
                    if "usage" in result:
                        usage = result["usage"]
                        logger.info(f"Token usage - Prompt: {usage.get('prompt_tokens', 0)}, "
                                  f"Completion: {usage.get('completion_tokens', 0)}, "
                                  f"Total: {usage.get('total_tokens', 0)}")
                    
                    return result
                    
            except httpx.HTTPStatusError as e:
                logger.error(f"HTTP error from Perplexity API: {e.response.status_code}")
                logger.error(f"Response content: {e.response.text}")
                return {
                    "error": f"HTTP {e.response.status_code}",
                    "message": f"API request failed: {e.response.text}"
                }
                
            except httpx.TimeoutException:
                logger.error(f"Request timeout after {self.timeout} seconds")
                return {
                    "error": "timeout",
                    "message": f"Request timed out after {self.timeout} seconds"
                }
                
            except Exception as e:
                logger.exception("Unexpected error in chat_completion")
                return {
                    "error": "unexpected_error",
                    "message": f"Unexpected error: {str(e)}"
                }
        
        def format_response(self, api_response: Dict[str, Any]) -> Dict[str, Any]:
            """
            Format API response for MCP tool return.
            
            Args:
                api_response: Raw API response from Perplexity
                
            Returns:
                Formatted response for MCP tool with only content and citations
            """
            # Handle error responses
            if "error" in api_response:
                return api_response
            
            try:
                # Extract main content
                content = ""
                if "choices" in api_response and api_response["choices"]:
                    content = api_response["choices"][0]["message"]["content"]
                    
                    # Remove <think>...</think> sections for reasoning models
                    # This removes the thinking tokens that appear in medium/large responses
                    content = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL)
                    
                    # Clean up any extra whitespace left after removing think tags
                    content = re.sub(r'\n\s*\n\s*\n', '\n\n', content)
                    content = content.strip()
                
                # Format response - only include content and citations
                formatted = {
                    "content": content,
                    "citations": api_response.get("citations", [])
                }
                
                return formatted
                
            except Exception as e:
                logger.exception("Error formatting API response")
                return {
                    "error": "format_error",
                    "message": f"Failed to format response: {str(e)}"
                }
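
A hedged sketch of using the client on its own, assuming the API key read by get_api_key is configured; the question text and parameter values mirror the medium configuration and are illustrative only:

    client = PerplexityClient()
    raw = client.chat_completion(
        messages=[{"role": "user", "content": "Compare QUIC and TCP for mobile clients."}],
        model="sonar-reasoning-pro",
        reasoning_effort="medium",
        web_search_options={"search_context_size": "medium"},
    )
    formatted = client.format_response(raw)  # {"content": "...", "citations": [...]}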
  • Helper function to lazily initialize and retrieve the shared PerplexityClient instance used by all perplexity tools including medium.
    def get_perplexity_client() -> PerplexityClient:
        """Get or create the Perplexity client instance."""
        global perplexity_client
        if perplexity_client is None:
            perplexity_client = PerplexityClient()
        return perplexity_client
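    This implies a module-level singleton that is not shown in the listing; presumably something like:

    # Module-level cache used by get_perplexity_client (assumed; not shown above)
    perplexity_client = None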
  • server.py:22-23 (registration)
    Initialization of the FastMCP server instance where tools like perplexity_medium are registered via decorators.
    mcp = FastMCP("Perplexity MCP")
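
A hedged sketch of how such a FastMCP server is typically wired up and started; the import path and entry point below are assumptions based on the standard MCP Python SDK, not lines shown in this listing:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("Perplexity MCP")

    # Tools such as perplexity_medium are registered by decorating them with @mcp.tool().

    if __name__ == "__main__":
        mcp.run()  # assumed entry point; FastMCP serves over stdio by default
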
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's reasoning approach ('enhanced reasoning'), search characteristics ('moderate search depth'), and mentions it returns citations. However, it doesn't disclose important behavioral aspects like rate limits, authentication requirements, error conditions, or what happens with the optional messages parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently organized with clear sections: purpose statement, 'Best for' guidelines, and parameter explanations. Every sentence adds value, and the information is front-loaded with the most important details first. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (returns dictionary with content and citations), the description doesn't need to explain return values in detail. It covers the tool's purpose, usage guidelines, and parameter semantics adequately. However, for a reasoning/search tool with no annotations, it could provide more behavioral context about limitations, performance characteristics, or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the lack of parameter documentation in the schema. It provides clear explanations for both parameters: 'query: The question or prompt to send to Perplexity' and 'messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})'. This adds substantial value beyond the bare schema, though it could provide more detail about message format expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'enhanced reasoning with moderate search depth using sonar-reasoning-pro' and specifies it's for 'complex questions requiring analysis, moderate research depth, technical explanations with citations.' This provides a specific verb ('reasoning') and resource ('search'), though it doesn't explicitly differentiate from siblings beyond mentioning 'moderate' depth.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes an explicit 'Best for:' section that lists specific use cases (complex questions requiring analysis, moderate research depth, technical explanations with citations). While it doesn't explicitly say when NOT to use it or name alternatives, the context of having sibling tools (perplexity_large, perplexity_small) combined with the 'moderate' qualifier provides clear guidance on when this specific tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
