
Perplexity MCP Server

by Rohit-Seelam

perplexity_small

Query Perplexity's sonar-pro model for fast factual answers and basic research with optimal speed and cost-effectiveness.

Instructions

Quick and reliable queries using Perplexity's sonar-pro model.

Best for: Fast factual questions, basic research, immediate answers.
Uses default parameters for optimal speed and cost-effectiveness.

Args:
    query: The question or prompt to send to Perplexity
    messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})

Returns:
    Dictionary with content and citations
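
For orientation, a successful result might be shaped like the sketch below. Only the "content" and "citations" keys are documented; the values and URLs here are placeholders, not real output.

    # Hypothetical successful return value (shape only; values are placeholders)
    {
        "content": "Answer text produced by sonar-pro...",
        "citations": [
            "https://example.com/source-1",
            "https://example.com/source-2",
        ],
    }

On failure the handler instead returns {"error": "tool_error", "message": "..."}, as shown in the implementation below.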

Input Schema

Name       Required   Description   Default
query      Yes        -             -
messages   No         -             -
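
The schema itself carries no parameter descriptions or defaults, so as a hedged example, a call's arguments following the message format from the docstring might look like:

    # Hypothetical arguments; "messages" supplies optional prior turns
    {
        "query": "And when was it first released?",
        "messages": [
            {"role": "user", "content": "What is Perplexity's sonar-pro model?"},
            {"role": "assistant", "content": "sonar-pro is Perplexity's search-grounded chat model..."},
        ],
    }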

Output Schema

Name       Required   Description   Default
result     Yes        -             -

Implementation Reference

  • server.py:36-76 (handler)
    The main handler for the 'perplexity_small' tool. Registered via the @mcp.tool() decorator, it appends the incoming query to the (optional) conversation messages, calls PerplexityClient.chat_completion with the TOOL_CONFIGS['small'] settings, formats the response, and returns a structured error dictionary if anything fails.
    @mcp.tool()
    def perplexity_small(query: str, messages: Optional[List[Dict[str, str]]] = None) -> Dict[str, Any]:
        """
        Quick and reliable queries using Perplexity's sonar-pro model.
        
        Best for: Fast factual questions, basic research, immediate answers.
        Uses default parameters for optimal speed and cost-effectiveness.
        
        Args:
            query: The question or prompt to send to Perplexity
            messages: Optional conversation context (list of {"role": "user/assistant", "content": "..."})
        
        Returns:
            Dictionary with content and citations
        """
        try:
            client = get_perplexity_client()
            
            # Prepare messages (copy so the caller's list is not mutated)
            if messages is None:
                messages = []
            else:
                messages = list(messages)
            
            # Add the current query
            messages.append({"role": "user", "content": query})
            
            # Get tool configuration
            config = TOOL_CONFIGS["small"]
            
            # Make API request
            response = client.chat_completion(messages=messages, **config)
            
            # Format and return response
            return client.format_response(response)
            
        except Exception as e:
            logger.exception("Error in perplexity_small")
            return {
                "error": "tool_error",
                "message": f"Failed to process query: {str(e)}"
            }
  • TOOL_CONFIGS['small'] configuration specifying the 'sonar-pro' model used by the perplexity_small tool.
    "small": {
        "model": "sonar-pro"
    },
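    Because the handler splats this mapping into the API call, the 'small' configuration reduces to a plain sonar-pro request; schematically:
    config = TOOL_CONFIGS["small"]                                   # {"model": "sonar-pro"}
    response = client.chat_completion(messages=messages, **config)
    # ...which is the same call as:
    response = client.chat_completion(messages=messages, model="sonar-pro")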
  • Helper function to get or lazily initialize the shared PerplexityClient instance used by the tool.
    def get_perplexity_client() -> PerplexityClient:
        """Get or create the Perplexity client instance."""
        global perplexity_client
        if perplexity_client is None:
            perplexity_client = PerplexityClient()
        return perplexity_client
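    PerplexityClient itself is defined elsewhere in the repository and is not shown on this page. Purely as a hedged sketch, assuming it wraps Perplexity's OpenAI-compatible chat completions endpoint and reads a PERPLEXITY_API_KEY environment variable (both assumptions), it might look roughly like:
    import os
    from typing import Any, Dict, List
    
    import requests
    
    class PerplexityClient:
        """Sketch only; the real implementation lives in the repository."""
    
        API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
    
        def __init__(self) -> None:
            # Assumption: the API key is supplied via the environment
            self.api_key = os.environ["PERPLEXITY_API_KEY"]
    
        def chat_completion(self, messages: List[Dict[str, str]], **config: Any) -> Dict[str, Any]:
            # config carries per-tool settings such as model="sonar-pro"
            resp = requests.post(
                self.API_URL,
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={"messages": messages, **config},
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()
    
        def format_response(self, response: Dict[str, Any]) -> Dict[str, Any]:
            # Reduce the raw API payload to the documented content/citations shape
            return {
                "content": response["choices"][0]["message"]["content"],
                "citations": response.get("citations", []),
            }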
  • server.py:36-36 (registration)
    The @mcp.tool() decorator registers the perplexity_small function as an MCP tool.
    @mcp.tool()
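    The decorator implies a FastMCP-style server. As a minimal sketch of the registration pattern (the server name and entry point are assumptions, not taken from the repository):
    from mcp.server.fastmcp import FastMCP
    
    mcp = FastMCP("perplexity")  # hypothetical server name
    
    @mcp.tool()
    def perplexity_small(query: str, messages: list | None = None) -> dict:
        ...  # body as shown in the handler above
    
    if __name__ == "__main__":
        mcp.run()
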
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits like 'quick and reliable,' 'fast factual questions,' and 'optimal speed and cost-effectiveness,' which adds context about performance and constraints. However, it lacks details on rate limits, error handling, or authentication needs, leaving some gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It front-loads the purpose and usage guidelines, followed by clear sections for Args and Returns. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's modest complexity (a query tool with two parameters), the lack of annotations, and an output schema indicating it returns a dictionary with content and citations, the description is mostly complete. It covers purpose, usage, parameters, and returns, but would be fully comprehensive with behavioral details such as rate limits or error cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning beyond the schema by explaining 'query' as 'The question or prompt to send to Perplexity' and 'messages' as 'Optional conversation context (list of {"role": "user/assistant", "content": "..."})'. This provides clear semantics for both parameters, effectively compensating for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Quick and reliable queries using Perplexity's sonar-pro model.' It specifies the action (queries) and resource (Perplexity's model), but doesn't explicitly differentiate it from its sibling tools (perplexity_large, perplexity_medium) beyond the 'small' in the name and the claim of 'optimal speed and cost-effectiveness.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Best for: Fast factual questions, basic research, immediate answers.' It also mentions 'Uses default parameters for optimal speed and cost-effectiveness,' which helps distinguish it from alternatives. This clearly indicates when to use this tool versus potential siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
