Glama
rileyedwards77

Perplexity AI MCP Server

search

Perform general search queries to obtain comprehensive information on any topic, with adjustable detail levels for tailored results.

Instructions

Perform a general search query to get comprehensive information on any topic

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query or question | — |
| detail_level | No | Optional: Desired level of detail (brief, normal, detailed) | normal |
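
Given this schema, a client invokes the tool through a standard MCP `tools/call` JSON-RPC request. A minimal example (the query text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "What is the Model Context Protocol?",
      "detail_level": "brief"
    }
  }
}
```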

Implementation Reference

  • Handler implementation for the 'search' tool. It performs a GET request to Perplexity's /search endpoint with the query and detail_level, and returns the JSON response as text content.

```typescript
case "search": {
  const { query, detail_level = "normal" } = request.params.arguments;
  // URL-encode user input before interpolating it into the query string
  const response = await this.axiosInstance.get(
    `/search?q=${encodeURIComponent(query)}&details=${detail_level}`
  );
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(response.data, null, 2),
      },
    ],
  };
}
```
  • Schema definition for the 'search' tool, including its name, description, and input schema with the required 'query' and optional 'detail_level'.

```typescript
{
  name: "search",
  description: "Perform a general search query to get comprehensive information on any topic",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "The search query or question",
      },
      detail_level: {
        type: "string",
        description: "Optional: Desired level of detail (brief, normal, detailed)",
        enum: ["brief", "normal", "detailed"],
      },
    },
    required: ["query"],
  },
},
```
  • Advanced handler for the 'search' tool using Perplexity's /chat/completions endpoint. It selects a model based on detail_level, uses a custom system prompt optimized for AI assistants, and logs the request and response.

```typescript
case "search": {
  const { query, detail_level = "normal" } = request.params.arguments as {
    query: string;
    detail_level?: string;
  };

  // Map detail level to model
  const model =
    detail_level === "detailed"
      ? "sonar-reasoning-pro" // Most expensive, best reasoning
      : detail_level === "brief"
      ? "sonar" // Basic, cheapest at $1/$1
      : "sonar-reasoning"; // Middle ground at $1/$5

  // System prompt optimized for Claude
  const systemPrompt = `You are providing search results to Claude, an AI assistant. Skip unnecessary explanations - Claude can interpret and explain the data itself.`;

  // Call Perplexity API
  // Note: max_tokens could be increased for detailed responses, but consider cost implications
  // sonar-reasoning-pro can use >1000 tokens and does multiple searches
  const body = {
    model,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: query },
    ],
    max_tokens: 1000,
    temperature: 0.2,
    top_p: 0.9,
  };
  console.error("Sending request:", JSON.stringify(body, null, 2));

  const response = await this.axiosInstance.post("/chat/completions", body);
  console.error("Got response:", response.data);

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(response.data, null, 2),
      },
    ],
  };
}
```
  • Schema definition for the 'search' tool in the TypeScript version, identical to the JS schema above, defining the required 'query' and optional 'detail_level' parameters.
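
The two handlers above share the same core decisions: which model to use for a given `detail_level`, and how to build the `/search` URL safely. A minimal sketch of those pieces as standalone functions (the names `selectModel` and `buildSearchUrl` are illustrative, not part of the server's code):

```typescript
type DetailLevel = "brief" | "normal" | "detailed";

// Mirror the model mapping in the advanced /chat/completions handler.
function selectModel(detailLevel: DetailLevel = "normal"): string {
  if (detailLevel === "detailed") return "sonar-reasoning-pro"; // best reasoning, most expensive
  if (detailLevel === "brief") return "sonar"; // basic, cheapest
  return "sonar-reasoning"; // middle ground
}

// Build the /search URL used by the simple handler, URL-encoding user input.
function buildSearchUrl(query: string, detailLevel: DetailLevel = "normal"): string {
  return `/search?q=${encodeURIComponent(query)}&details=${detailLevel}`;
}

console.log(selectModel("detailed")); // sonar-reasoning-pro
console.log(buildSearchUrl("what is MCP?", "brief")); // /search?q=what%20is%20MCP%3F&details=brief
```

Keeping this logic in pure functions makes it easy to unit-test the model/URL selection without mocking the axios instance.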

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rileyedwards77/perplexity-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.