Glama
letsbuildagent

Perplexity Tool for Claude Desktop

ask_perplexity

Get answers to questions with citations by querying Perplexity AI through Claude Desktop for web-based research and information retrieval.

Instructions

Ask a question to Perplexity AI

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| question | Yes | The question to ask | (none) |
| temperature | No | Response randomness (0-2) | 0.2 |
| max_tokens | No | Maximum tokens in response | 1000 |
| search_domain_filter | No | Limit search to specific domains | [] |
| search_recency_filter | No | Filter results by recency | month |
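For concreteness, a call to `ask_perplexity` might pass an arguments object like the following (the values are illustrative; only `question` is required):

```javascript
// Illustrative arguments for ask_perplexity; only `question` is required.
const args = {
    question: "What changed in the latest Model Context Protocol spec?",
    temperature: 0.2,                                   // schema default
    max_tokens: 1000,                                   // schema default
    search_domain_filter: ["modelcontextprotocol.io"],  // restrict sources
    search_recency_filter: "week"                       // one of: day, week, month, year
};
```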

Implementation Reference

• The core handler function for the 'ask_perplexity' tool. It destructures arguments, makes a POST request to the Perplexity API, processes the response including answer, citations, and token usage, and returns a formatted string.

```javascript
async function askPerplexity(args) {
    const { question, temperature = 0.2, max_tokens = 1000, search_domain_filter = [], search_recency_filter = "month" } = args;
    const response = await fetch("https://api.perplexity.ai/chat/completions", {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${PERPLEXITY_API_KEY}`,
            "Content-Type": "application/json"
        },
        body: JSON.stringify({
            model: "llama-3.1-sonar-small-128k-online",
            messages: [
                {
                    role: "system",
                    content: "You are a world-class researcher with strong attention to details"
                },
                {
                    role: "user",
                    content: question
                }
            ],
            max_tokens,
            temperature,
            top_p: 0.9,
            stream: false,
            search_domain_filter,
            search_recency_filter,
            return_images: false,
            return_related_questions: false,
            frequency_penalty: 1,
            presence_penalty: 0
        })
    });
    if (!response.ok) {
        throw new Error(`Perplexity API error: ${response.status} ${response.statusText}`);
    }
    const result = await response.json();
    // Extract answer and citations
    const answer = result.choices[0].message.content;
    const citations = result.citations || [];
    const tokenUsage = result.usage || {};
    // Format response
    const fullResponse = [
        `Answer: ${answer}\n`,
        "\nSources:",
        ...citations.map((citation, i) => `${i + 1}. ${citation}`),
        "\nToken Usage:",
        `- Prompt tokens: ${tokenUsage.prompt_tokens || 'N/A'}`,
        `- Completion tokens: ${tokenUsage.completion_tokens || 'N/A'}`,
        `- Total tokens: ${tokenUsage.total_tokens || 'N/A'}`
    ].join('\n');
    return fullResponse;
}
```
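The formatting logic can be exercised without hitting the API by isolating it and feeding it a mock response. The `formatPerplexityResult` extraction below is illustrative, not part of the server; it mirrors the answer/citations/usage handling above:

```javascript
// Illustrative extraction of the formatting logic from askPerplexity,
// so it can be tested against a mock API result with no network call.
function formatPerplexityResult(result) {
    const answer = result.choices[0].message.content;
    const citations = result.citations || [];
    const tokenUsage = result.usage || {};
    return [
        `Answer: ${answer}\n`,
        "\nSources:",
        ...citations.map((citation, i) => `${i + 1}. ${citation}`),
        "\nToken Usage:",
        `- Prompt tokens: ${tokenUsage.prompt_tokens || 'N/A'}`,
        `- Completion tokens: ${tokenUsage.completion_tokens || 'N/A'}`,
        `- Total tokens: ${tokenUsage.total_tokens || 'N/A'}`
    ].join('\n');
}

// Mock response shaped like the Perplexity chat completions payload.
const mock = {
    choices: [{ message: { content: "MCP is a protocol." } }],
    citations: ["https://example.com/a"],
    usage: { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 }
};
const text = formatPerplexityResult(mock);
```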
• The tool definition object containing the name, description, and inputSchema for validating arguments to 'ask_perplexity'.

```javascript
const PERPLEXITY_TOOL = {
    name: "ask_perplexity",
    description: "Ask a question to Perplexity AI",
    inputSchema: {
        type: "object",
        properties: {
            question: {
                type: "string",
                description: "The question to ask"
            },
            temperature: {
                type: "number",
                description: "Response randomness (0-2)",
                default: 0.2
            },
            max_tokens: {
                type: "integer",
                description: "Maximum tokens in response",
                default: 1000
            },
            search_domain_filter: {
                type: "array",
                items: { type: "string" },
                description: "Limit search to specific domains",
                default: []
            },
            search_recency_filter: {
                type: "string",
                enum: ["day", "week", "month", "year"],
                description: "Filter results by recency",
                default: "month"
            }
        },
        required: ["question"],
    },
};
```
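When a client sends only the required field, the schema defaults take effect. That merge can be sketched as follows (the `withDefaults` helper is illustrative, not part of the server, which relies on destructuring defaults instead):

```javascript
// Illustrative helper: fill in the inputSchema defaults for omitted fields.
function withDefaults(args) {
    return {
        temperature: 0.2,
        max_tokens: 1000,
        search_domain_filter: [],
        search_recency_filter: "month",
        ...args  // caller-supplied values win over the defaults
    };
}

const merged = withDefaults({ question: "What is MCP?" });
```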
• server.js:115-117 (registration)
  Registration of the tool in the ListToolsRequestSchema handler, exposing the PERPLEXITY_TOOL schema to clients.

```javascript
server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [PERPLEXITY_TOOL],
}));
```
• The MCP CallToolRequestSchema handler that dispatches to askPerplexity based on tool name and handles errors.

```javascript
server.setRequestHandler(CallToolRequestSchema, async (request) => {
    try {
        const { name, arguments: args } = request.params;
        if (!args) {
            throw new Error("No arguments provided");
        }
        if (name === "ask_perplexity") {
            if (!isPerplexityArgs(args)) {
                throw new Error("Invalid arguments for ask_perplexity");
            }
            const results = await askPerplexity(args);
            return {
                content: [{ type: "text", text: results }],
                isError: false,
            };
        }
        return {
            content: [{ type: "text", text: `Unknown tool: ${name}` }],
            isError: true,
        };
    }
    catch (error) {
        return {
            content: [
                {
                    type: "text",
                    text: `Error: ${error instanceof Error ? error.message : String(error)}`,
                },
            ],
            isError: true,
        };
    }
});
```
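Because the handler above is wired into the MCP server, it cannot run in isolation. Its dispatch logic can be sketched as a standalone function to show the success and unknown-tool paths; the name `handleCallTool` and the injected `askFn` are mine, standing in for the server wiring and for askPerplexity so no network call is needed:

```javascript
// Standalone mimic of the CallToolRequestSchema handler's dispatch logic.
// `askFn` stands in for askPerplexity; validation is omitted for brevity.
async function handleCallTool(request, askFn) {
    try {
        const { name, arguments: args } = request.params;
        if (!args) {
            throw new Error("No arguments provided");
        }
        if (name === "ask_perplexity") {
            const results = await askFn(args);
            return { content: [{ type: "text", text: results }], isError: false };
        }
        return { content: [{ type: "text", text: `Unknown tool: ${name}` }], isError: true };
    }
    catch (error) {
        return {
            content: [{ type: "text", text: `Error: ${error instanceof Error ? error.message : String(error)}` }],
            isError: true,
        };
    }
}
```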
• Helper function to validate arguments for the ask_perplexity tool.

```javascript
function isPerplexityArgs(args) {
    return (typeof args === "object" &&
        args !== null &&
        "question" in args &&
        typeof args.question === "string");
}
```
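As a quick illustration, the guard accepts any object with a string `question` and rejects everything else; the function is reproduced here so the snippet runs standalone:

```javascript
// Reproduced from the server so this snippet is self-contained.
function isPerplexityArgs(args) {
    return (typeof args === "object" &&
        args !== null &&
        "question" in args &&
        typeof args.question === "string");
}

const ok = isPerplexityArgs({ question: "hi" });        // has a string question
const missing = isPerplexityArgs({ temperature: 0.5 }); // no question field
const wrongType = isPerplexityArgs({ question: 42 });   // question is not a string
```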
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Ask a question to Perplexity AI' implies a query tool, but it doesn't describe what actually happens: whether the tool performs web searches, generates responses, has rate limits, or requires authentication. This is a significant gap for a tool with multiple parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded and efficiently conveys the core action, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what the tool returns, how it behaves (e.g., search-based vs. generative), or any constraints. The agent must rely heavily on the schema and tool name, which is insufficient for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so parameters like 'question,' 'temperature,' and 'search_recency_filter' are well-documented in the schema. The description adds nothing beyond the schema, however, such as explaining how parameters interact or typical use cases. This meets the baseline of 3 since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Ask a question to Perplexity AI' clearly states the action (ask) and target (Perplexity AI), which is adequate. However, it's somewhat vague about what Perplexity AI is or does—it doesn't specify if this is for general queries, research, or something else. With no sibling tools, differentiation isn't needed, but the purpose could be more specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool—it doesn't mention use cases, prerequisites, or alternatives. With no sibling tools, there's no need to differentiate, but it lacks any context for appropriate usage, leaving the agent to infer based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
