
NasCoder Perplexity MCP Ultra-Pro

perplexity_models

Browse and select from available Perplexity AI models with current descriptions to choose the right model for your task.

Instructions

List available Perplexity models with descriptions (2025 correct models)

Input Schema


No arguments
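Because the schema declares no parameters, an agent invokes the tool with an empty arguments object. As a hedged sketch (the `id` value is arbitrary), the JSON-RPC `tools/call` request an MCP client would send looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "perplexity_models",
    "arguments": {}
  }
}
```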

Implementation Reference

  • Handler implementation for the 'perplexity_models' tool. Returns an object containing the available Perplexity models dictionary, a note about the models, and version information.
    case "perplexity_models":
      return {
        content: [{
          type: "object",
          data: {
            models: nascoderMCP.models,
            note: "These are the CORRECT 2025 Perplexity API models. Previous versions had incorrect model names.",
            version: "2.0.0"
          }
        }]
      };
  • index.js:692-700 (registration)
    Registration of the 'perplexity_models' tool in the TOOLS array. Defines the tool name, description, and empty input schema (no parameters required).
    {
      name: "perplexity_models",
      description: "List available Perplexity models with descriptions (2025 correct models)",
      inputSchema: {
        type: "object",
        properties: {},
        required: []
      }
    }
  • Helper data structure defining all available Perplexity models with their descriptions, directly referenced and returned by the tool handler.
    this.models = {
      // Search Models (with web search)
      'sonar-pro': 'Advanced search offering with grounding, supporting complex queries and follow-ups (200k context)',
      'sonar': 'Lightweight, cost-effective search model with grounding (128k context)',
      
      // Research Models (deep analysis)
      'sonar-deep-research': 'Expert-level research model conducting exhaustive searches and generating comprehensive reports (128k context)',
      
      // Reasoning Models (complex problem solving)
      'sonar-reasoning-pro': 'Premier reasoning offering powered by DeepSeek R1 with Chain of Thought (CoT) (128k context)',
      'sonar-reasoning': 'Fast, real-time reasoning model designed for quick problem-solving with search (128k context)',
      
      // Offline Models (no web search)
      'r1-1776': 'A version of DeepSeek R1 post-trained for uncensored, unbiased, and factual information (128k context)'
    };
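The dictionary above is the tool's entire payload. As an illustrative sketch (not part of the server's code), an agent that has received this dictionary could shortlist models by capability keyword; the `modelsFor` helper below is hypothetical and the keywords are assumptions inferred from the descriptions:

```javascript
// The models dictionary as returned by the 'perplexity_models' tool.
const models = {
  'sonar-pro': 'Advanced search offering with grounding, supporting complex queries and follow-ups (200k context)',
  'sonar': 'Lightweight, cost-effective search model with grounding (128k context)',
  'sonar-deep-research': 'Expert-level research model conducting exhaustive searches and generating comprehensive reports (128k context)',
  'sonar-reasoning-pro': 'Premier reasoning offering powered by DeepSeek R1 with Chain of Thought (CoT) (128k context)',
  'sonar-reasoning': 'Fast, real-time reasoning model designed for quick problem-solving with search (128k context)',
  'r1-1776': 'A version of DeepSeek R1 post-trained for uncensored, unbiased, and factual information (128k context)'
};

// Return the names of models whose description mentions a capability keyword.
function modelsFor(keyword) {
  return Object.entries(models)
    .filter(([, desc]) => desc.toLowerCase().includes(keyword.toLowerCase()))
    .map(([name]) => name);
}

console.log(modelsFor('reasoning')); // → [ 'sonar-reasoning-pro', 'sonar-reasoning' ]
```

This is one way an agent could act on the response before handing a model name to a query tool such as perplexity_ask_pro.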
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosure. It mentions 'List' and 'descriptions', implying a read-only operation, but doesn't disclose behavioral traits such as whether it requires authentication, is rate-limited, returns structured data, or handles errors. The '(2025 correct models)' note hints at up-to-date information but gives no operational detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('List available Perplexity models with descriptions') and adds a clarifying note ('2025 correct models'). Every word earns its place; there is no redundancy or waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is adequate but has gaps. It covers the basic purpose but lacks behavioral context (e.g., read-only nature, response format) and usage guidelines. Without annotations or output schema, more detail on what the list includes would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, with schema description coverage at 100%. The description doesn't need to add parameter semantics, as there are none to document. Baseline for 0 parameters is 4, as the description appropriately focuses on the tool's purpose without unnecessary parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List available Perplexity models with descriptions'. It specifies the verb ('List'), resource ('Perplexity models'), and scope ('with descriptions'), though it doesn't explicitly differentiate from sibling tools like perplexity_analytics or perplexity_ask_pro. The '(2025 correct models)' adds temporal accuracy but doesn't enhance core purpose clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., perplexity_ask_pro for querying models) or suggest scenarios where listing models is appropriate, such as before selecting one for a task. Usage is implied by the purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
