Glama
DaInfernalCoder

MCP-researcher Server

reason

Analyzes complex queries using reasoning models to provide detailed explanations, comparisons, and step-by-step problem-solving solutions.

Instructions

Handles complex, multi-step tasks using Perplexity's Sonar Reasoning Pro model. Best for explanations, comparisons, and problem-solving.

Input Schema

query (string, required) — The complex query or task to reason about. IMPORTANT: Be extremely specific and include all relevant details: exact error messages, logs, and stack traces if applicable; exact terminology, function names, API names, and version numbers; relevant code snippets showing the problem or context; platform, OS, framework versions, and environment details; any attempted solutions or workarounds; context about what you're trying to achieve; and relevant data structures, configurations, or inputs. The more specific details you include, the more accurate and helpful the answer will be. If you don't have enough specific information, prompt the user to provide it before using this tool.

force_model (boolean, optional; default: false) — Force using this model even if the query seems simple/research-oriented.

Implementation Reference

  • Handler logic for the 'reason' tool: sets the Perplexity model to 'sonar-reasoning-pro' and constructs a detailed multi-step reasoning prompt based on the input query.
    case "reason": {
      model = "sonar-reasoning-pro";
      prompt = `You are answering a query that contains specific details like error messages, logs, code snippets, exact terminology, version numbers, and context. Carefully analyze all provided details to give the most accurate and helpful answer.

Query: ${query}

Provide a detailed explanation and analysis that:
1. Addresses the specific details provided (errors, logs, code, versions, etc.)
2. Includes step-by-step reasoning based on the actual context
3. Identifies key considerations relevant to the specific situation
4. Provides relevant examples matching the described scenario
5. Offers practical implications based on the exact details provided
6. Suggests potential alternatives or solutions tailored to the specific context`;
      break;
    }
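The handler above only selects the model and builds the prompt; the snippet does not show how the request is sent. As a sketch, Perplexity exposes an OpenAI-compatible chat-completions endpoint, so the forwarding step might look like the following. The URL, payload shape, and function name here follow Perplexity's public API documentation and are assumptions, not taken from this server's source; `fetchFn` is injectable so the sketch can be exercised without network access.

```typescript
// Sketch: forwarding a (model, prompt) pair to Perplexity's
// OpenAI-compatible chat-completions endpoint. NOT from this server's
// source; endpoint and payload shape are assumptions.
type FetchLike = (url: string, init: unknown) => Promise<{
  ok: boolean;
  status: number;
  json(): Promise<any>;
}>;

async function callPerplexity(
  model: string,
  prompt: string,
  apiKey: string,
  fetchFn: FetchLike = fetch as unknown as FetchLike,
): Promise<string> {
  const res = await fetchFn("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // per-request auth: a Perplexity API key
    },
    body: JSON.stringify({
      model, // e.g. "sonar-reasoning-pro"
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Perplexity API error: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses carry the answer text here.
  return data.choices[0].message.content as string;
}
```

Injecting the transport also makes latency and rate-limit handling easy to wrap around a single call site.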
  • Input schema for the 'reason' tool, defining the 'query' parameter as required string and optional 'force_model' boolean.
    inputSchema: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "The complex query or task to reason about. IMPORTANT: Be extremely specific and include all relevant details:\n- Include exact error messages, logs, and stack traces if applicable\n- Provide exact terminology, function names, API names, version numbers\n- Include relevant code snippets showing the problem or context\n- Specify platform, OS, framework versions, and environment details\n- Mention any attempted solutions or workarounds\n- Provide context about what you're trying to achieve\n- Include relevant data structures, configurations, or inputs\n\nThe more specific details you include, the more accurate and helpful the answer will be.\nIf you don't have enough specific information, prompt the user to provide it before using this tool."
        },
        force_model: {
          type: "boolean",
          description: "Optional: Force using this model even if query seems simple/research-oriented",
          default: false
        }
      },
      required: ["query"]
    }
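The schema marks `query` as required and gives `force_model` a default of `false`. A hypothetical validation helper enforcing those rules before dispatch might look like this; the function and type names are illustrative and do not appear in the server's source.

```typescript
// Hypothetical argument validation for the 'reason' tool, mirroring the
// JSON Schema above: 'query' is a required non-empty string, 'force_model'
// is an optional boolean defaulting to false.
interface ReasonArgs {
  query: string;
  force_model: boolean;
}

function parseReasonArgs(raw: unknown): ReasonArgs {
  const obj = (raw ?? {}) as Record<string, unknown>;
  if (typeof obj.query !== "string" || obj.query.length === 0) {
    throw new Error("'query' is required and must be a non-empty string");
  }
  // Apply the schema's default when force_model is absent or mistyped.
  const force_model =
    typeof obj.force_model === "boolean" ? obj.force_model : false;
  return { query: obj.query, force_model };
}
```

In practice an MCP server would surface such a failure as a JSON-RPC invalid-params error rather than a thrown exception, but the checks are the same.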
  • src/index.ts:282-300 (registration)
    Registration of the 'reason' tool in the MCP server's listTools response, including name, description, and input schema.
    {
      name: "reason",
      description: "Handles complex, multi-step tasks using Perplexity's Sonar Reasoning Pro model. Best for explanations, comparisons, and problem-solving.",
      inputSchema: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description: "The complex query or task to reason about. IMPORTANT: Be extremely specific and include all relevant details:\n- Include exact error messages, logs, and stack traces if applicable\n- Provide exact terminology, function names, API names, version numbers\n- Include relevant code snippets showing the problem or context\n- Specify platform, OS, framework versions, and environment details\n- Mention any attempted solutions or workarounds\n- Provide context about what you're trying to achieve\n- Include relevant data structures, configurations, or inputs\n\nThe more specific details you include, the more accurate and helpful the answer will be.\nIf you don't have enough specific information, prompt the user to provide it before using this tool."
          },
          force_model: {
            type: "boolean",
            description: "Optional: Force using this model even if query seems simple/research-oriented",
            default: false
          }
        },
        required: ["query"]
      }
    },
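Once registered, an MCP client invokes the tool through the protocol's standard tools/call method. A representative JSON-RPC request (the query text below is purely illustrative, written to match the specificity the schema asks for) would look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "reason",
    "arguments": {
      "query": "TypeError: Cannot read properties of undefined (reading 'map') in React 18.2 when rendering a list fetched from /api/items; the fetch resolves after first render. Code: items.map(i => <li key={i.id}>{i.name}</li>). How do I fix this?",
      "force_model": false
    }
  }
}
```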
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions using 'Perplexity's Sonar Reasoning Pro model' and hints at complexity handling, but lacks details on performance traits (e.g., latency, rate limits), error handling, or output format. It adds some context but falls short of fully describing behavioral aspects for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of two sentences that directly state the tool's purpose and best-use cases. Every sentence earns its place by providing essential information without waste, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations and no output schema, the description is incomplete. It covers purpose and usage well but lacks details on behavioral traits, output format, or error handling. For a tool with 2 parameters and no structured metadata, the description should do more to compensate, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description does not add specific parameter semantics beyond what's in the schema, but since coverage is high, the baseline is 3. It earns a 4 because the description implicitly reinforces the importance of the 'query' parameter by emphasizing complex tasks, adding slight contextual value without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'handles complex, multi-step tasks' using a specific reasoning model, with examples of use cases (explanations, comparisons, problem-solving). It distinguishes from 'deep_research' and 'search' by emphasizing reasoning over research or simple search, though not explicitly naming alternatives. The purpose is specific but could be more explicit about sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Best for explanations, comparisons, and problem-solving'), implying it's suited for complex reasoning tasks. However, it does not explicitly state when not to use it or name alternatives like 'deep_research' or 'search', missing explicit exclusions or comparisons. The guidance is strong but not fully comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/DaInfernalCoder/perplexity-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.