
Perplexity Agent MCP

by Jercik

answer

Research complex technical questions, compare options, and provide evidence-based recommendations with implementation steps for architecture decisions, migrations, and debugging.

Instructions

Research a question, compare options, and recommend a path (backed by sources). Use for library choices, architecture trade-offs, migrations, complex debugging, and performance decisions. Returns a concise recommendation, a brief why, and short how-to steps.

Input Schema

Name      Required  Description                         Default
question  Yes       The decision or problem to answer   —
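
For illustration, an MCP client could invoke this tool as follows. This is a minimal sketch assuming the @modelcontextprotocol/sdk client API; the client name and server launch command are placeholders, not documented by this server:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Connect to the server over stdio; the launch command is a placeholder.
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(
      new StdioClientTransport({ command: "npx", args: ["perplexity-agent-mcp"] }),
    );

    // Call the 'answer' tool with its single required parameter.
    const result = await client.callTool({
      name: "answer",
      arguments: { question: "Should I use Zod or Valibot?" },
    });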

Implementation Reference

  • The handler function for the 'answer' tool. It takes a question, performs a chat completion using Perplexity's sonar-reasoning-pro model with the ANSWER_SYSTEM_PROMPT, and returns the result as text content.
    async ({ question }) => {
      const result = await performChatCompletion(
        [{ role: "user", content: question }],
        {
          model: "sonar-reasoning-pro",
          system: ANSWER_SYSTEM_PROMPT,
          searchContextSize: "high",
        },
      );
      return { content: [{ type: "text", text: result }] };
    },
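
    The performChatCompletion helper is not shown in this reference. A plausible sketch, assuming Perplexity's OpenAI-compatible chat completions endpoint and an API key in PERPLEXITY_API_KEY; all names and shapes here are illustrative, not the package's actual implementation:

    type Message = { role: "system" | "user" | "assistant"; content: string };

    interface CompletionOptions {
      model: string;
      system: string;
      searchContextSize: "low" | "medium" | "high";
    }

    async function performChatCompletion(
      messages: Message[],
      options: CompletionOptions,
    ): Promise<string> {
      // Prepend the system prompt, then forward the conversation to Perplexity.
      const response = await fetch("https://api.perplexity.ai/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: options.model,
          messages: [{ role: "system", content: options.system }, ...messages],
          web_search_options: { search_context_size: options.searchContextSize },
        }),
      });
      if (!response.ok) {
        throw new Error(`Perplexity API request failed: ${response.status}`);
      }
      const data = await response.json();
      return data.choices[0].message.content;
    }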
  • Tool schema definition including description and inputSchema with 'question' parameter using Zod validation.
        {
          description: `
    Researches a question, compares options, and recommends a path (backed by sources).
    Use for library choices, architecture trade-offs, migrations, complex debugging, and performance decisions.
    Returns a concise recommendation, a brief why, and short how-to steps.
    Examples: "Should I use Zod or Valibot?", "How to optimize React bundle size?", "Best auth approach for Node.js microservices?"
    One question per call—split combined requests into separate queries.
    `.trim(),
          inputSchema: {
            question: z.string().describe("The decision or problem to answer"),
          },
        },
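
    The inputSchema above is a Zod raw shape; the SDK presumably wraps it in z.object() and validates arguments before the handler runs. An equivalent standalone check, for illustration only:

    import { z } from "zod";

    const schema = z.object({
      question: z.string().describe("The decision or problem to answer"),
    });

    // Passes validation:
    schema.parse({ question: "Should I use Zod or Valibot?" });
    // schema.parse({}) would throw a ZodError because 'question' is required.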
  • The server.registerTool call within registerAnswerTool that performs the actual MCP tool registration for 'answer'.
      server.registerTool(
        "answer",
        {
          description: `
    Researches a question, compares options, and recommends a path (backed by sources).
    Use for library choices, architecture trade-offs, migrations, complex debugging, and performance decisions.
    Returns a concise recommendation, a brief why, and short how-to steps.
    Examples: "Should I use Zod or Valibot?", "How to optimize React bundle size?", "Best auth approach for Node.js microservices?"
    One question per call—split combined requests into separate queries.
    `.trim(),
          inputSchema: {
            question: z.string().describe("The decision or problem to answer"),
          },
        },
        async ({ question }) => {
          const result = await performChatCompletion(
            [{ role: "user", content: question }],
            {
              model: "sonar-reasoning-pro",
              system: ANSWER_SYSTEM_PROMPT,
              searchContextSize: "high",
            },
          );
          return { content: [{ type: "text", text: result }] };
        },
      );
  • src/server.ts:17-18 (registration)
    Invocation of registerAnswerTool during MCP server creation to register the 'answer' tool.
    registerLookupTool(server);
    registerAnswerTool(server);
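
    For context, a minimal bootstrap sketch around these two calls, assuming the MCP TypeScript SDK's stdio transport (server name and version are placeholders):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    // registerLookupTool and registerAnswerTool come from the project's own
    // tool modules; their import paths are not shown in this reference.

    // Create the server, register both tools, then serve over stdio.
    const server = new McpServer({ name: "perplexity-agent-mcp", version: "1.0.0" });
    registerLookupTool(server);
    registerAnswerTool(server);
    await server.connect(new StdioServerTransport());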
  • The system prompt used in the 'answer' tool handler for guiding the AI in technical decision making.
    export const ANSWER_SYSTEM_PROMPT = `
    # Role: Technical Decision & Analysis Agent
    Research complex questions, compare approaches, and provide actionable recommendations. Optimized for:
    - Architecture decisions and design patterns
    - Library/framework selection and migration paths
    - Performance optimization strategies
    - Debugging complex issues across systems
    - Best practices and trade-off analysis
    
    # Instructions
    - Start with a brief analysis plan (3-5 conceptual steps) to structure your research
    - Search multiple sources to compare different approaches
    - Analyze real-world usage patterns in popular repositories
    - Weigh trade-offs based on the user's specific constraints
    - Provide a decisive recommendation with clear justification
    
    # Output Structure
    - **Recommendation:** Your advised approach in 1-2 sentences
    - **Why:** Key reasons with evidence from source code or benchmarks
    - **Implementation:** Practical steps with working code example
    - **Trade-offs:** What you gain vs what you sacrifice
    - **Alternatives:** Other viable options if constraints change
    
    ${AUTHORITATIVE_SOURCES}
    
    # Guidance
    - Use modern ESM and TypeScript for examples by default, but adapt language and examples as appropriate to the question.
    - Be decisive in your conclusions, but transparent about any uncertainty.
    - Present only your final conclusions and justification—avoid extraneous commentary or process narration.
    `.trim();
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the tool's behavior as research, comparison, and recommendation backed by sources, and specifies the return format ('concise recommendation, a brief why, and short how-to steps'). However, it doesn't cover other important aspects, such as the external API access the tool requires, potential rate limits, or error handling, so the disclosure is moderate but incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three sentences that each add value: the first states the core function, the second provides usage examples, and the third specifies the return format. There is no wasted text, and it efficiently conveys key information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a research and recommendation tool with no annotations and no output schema), the description is moderately complete. It explains the tool's purpose, usage, and return format, but lacks details on behavioral traits such as data sources, limitations, and error cases. Although there is no output schema, the description does cover the return values, which helps; even so, it could be more comprehensive for a tool of this kind.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'question' parameter documented as 'The decision or problem to answer.' The description itself adds no further parameter details beyond what the schema provides. Under the scoring rules, when schema description coverage is high (above 80%), the baseline score is 3 even if the description adds no parameter information, which applies here.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Research a question, compare options, and recommend a path (backed by sources).' It specifies the verb ('research, compare, recommend') and resource ('question'), making the function evident. However, it doesn't explicitly distinguish this from the sibling tool 'lookup', which might be a similar search or query function, so it lacks sibling differentiation for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use for library choices, architecture trade-offs, migrations, complex debugging, and performance decisions.' This gives specific examples of applicable scenarios. However, it doesn't mention when not to use it or explicitly compare it to alternatives like the sibling tool 'lookup', so it falls short of the highest score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
