Glama
xiyuefox

Perplexity MCP Server

ask_perplexity

Access accurate, source-backed information online for research, fact-checking, or decision-making. Responses include citations and diverse perspectives for reliable insights.

Instructions

Perplexity equips agents with a specialized tool for efficiently gathering source-backed information from the internet, ideal for scenarios requiring research, fact-checking, or contextual data to inform decisions and responses. Each response includes citations, which provide transparent references to the sources used for the generated answer, and choices, which contain the model's suggested responses, enabling users to access reliable information and diverse perspectives. This function may encounter timeout errors due to long processing times, but retrying the operation can lead to successful completion.

Response structure:

  • id: An ID generated uniquely for each response.

  • model: The model used to generate the response.

  • object: The object type, which always equals `chat.completion`.

  • created: The Unix timestamp (in seconds) of when the completion was created.

  • citations[]: Citations for the generated answer.

  • choices[]: The list of completion choices the model generated for the input prompt.

  • usage: Usage statistics for the completion request.
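The fields above can be read straight off the returned JSON. The sketch below shows a hypothetical response with that shape and how a caller might pull out the answer and its sources; all field values here are illustrative placeholders, not real API output.

```python
# Hypothetical response following the structure documented above.
# Every value is a placeholder for illustration only.
response = {
    "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
    "model": "llama-3.1-sonar-small-128k-online",
    "object": "chat.completion",
    "created": 1724369245,
    "citations": ["https://example.com/source"],
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Example answer."},
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 70, "total_tokens": 84},
}

# The generated answer lives in choices[].message.content;
# the supporting sources live in citations[].
answer = response["choices"][0]["message"]["content"]
sources = response["citations"]
```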

Input Schema

  Name      Required  Description
  --------  --------  ------------------------------------------------------
  messages  Yes       A list of messages comprising the conversation so far.
  model     Yes       The name of the model that will complete your prompt.
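A request body satisfying this schema might look like the following; the model name is taken from the tool's enum, and the message contents are purely illustrative.

```python
# Illustrative arguments for the ask_perplexity tool: both required
# fields present, roles drawn from the schema's enum, ending in "user".
arguments = {
    "model": "llama-3.1-sonar-small-128k-online",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "How many moons does Mars have?"},
    ],
}
```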

Implementation Reference

  • The tool handler function decorated with @server.call_tool(). It checks if the tool name is 'ask_perplexity' and makes an asynchronous HTTP POST request to the Perplexity API using the provided arguments, returning the response as TextContent.
    @server.call_tool()
    async def handle_call_tool(
        name: str, arguments: dict
    ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
        if name != "ask_perplexity":
            raise ValueError(f"Unknown tool: {name}")
        try:
            async with httpx.AsyncClient() as client:
                response = await client.post(
                    f"{PERPLEXITY_API_BASE_URL}/chat/completions",
                    headers={
                        "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
                        "Content-Type": "application/json",
                    },
                    json=arguments,
                    timeout=None,
                )
                response.raise_for_status()
        except httpx.HTTPError as e:
            raise RuntimeError(f"API error: {str(e)}")
        return [
            types.TextContent(
                type="text",
                text=response.text,
            )
        ]
  • The tool registration via @server.list_tools(), defining the 'ask_perplexity' tool with its name, description, and input schema.
    @server.list_tools()
    async def handle_list_tools() -> list[types.Tool]:
        return [
            types.Tool(
                name="ask_perplexity",
                description=dedent(
                    """
                    Perplexity equips agents with a specialized tool for efficiently
                    gathering source-backed information from the internet, ideal for
                    scenarios requiring research, fact-checking, or contextual data
                    to inform decisions and responses. Each response includes
                    citations, which provide transparent references to the sources
                    used for the generated answer, and choices, which contain the
                    model's suggested responses, enabling users to access reliable
                    information and diverse perspectives. This function may encounter
                    timeout errors due to long processing times, but retrying the
                    operation can lead to successful completion.

                    [Response structure]

                    - id: An ID generated uniquely for each response.
                    - model: The model used to generate the response.
                    - object: The object type, which always equals `chat.completion`.
                    - created: The Unix timestamp (in seconds) of when the completion was created.
                    - citations[]: Citations for the generated answer.
                    - choices[]: The list of completion choices the model generated for the input prompt.
                    - usage: Usage statistics for the completion request.
                    """
                ),
                inputSchema={
                    "type": "object",
                    "properties": {
                        "model": {
                            "type": "string",
                            "description": "The name of the model that will complete your prompt.",
                            "enum": [
                                "llama-3.1-sonar-small-128k-online",
                                # Commenting out larger models, which have higher risks of timing out,
                                # until Claude Desktop can handle long-running tasks effectively.
                                # "llama-3.1-sonar-large-128k-online",
                                # "llama-3.1-sonar-huge-128k-online",
                            ],
                        },
                        "messages": {
                            "type": "array",
                            "description": "A list of messages comprising the conversation so far.",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "content": {
                                        "type": "string",
                                        "description": "The contents of the message in this turn of conversation.",
                                    },
                                    "role": {
                                        "type": "string",
                                        "description": "The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
                                        "enum": ["system", "user", "assistant"],
                                    },
                                },
                                "required": ["content", "role"],
                            },
                        },
                    },
                    "required": ["model", "messages"],
                },
            )
        ]
  • The input schema for the 'ask_perplexity' tool, defining properties for 'model' and 'messages'.
    inputSchema={
        "type": "object",
        "properties": {
            "model": {
                "type": "string",
                "description": "The name of the model that will complete your prompt.",
                "enum": [
                    "llama-3.1-sonar-small-128k-online",
                    # Commenting out larger models, which have higher risks of timing out,
                    # until Claude Desktop can handle long-running tasks effectively.
                    # "llama-3.1-sonar-large-128k-online",
                    # "llama-3.1-sonar-huge-128k-online",
                ],
            },
            "messages": {
                "type": "array",
                "description": "A list of messages comprising the conversation so far.",
                "items": {
                    "type": "object",
                    "properties": {
                        "content": {
                            "type": "string",
                            "description": "The contents of the message in this turn of conversation.",
                        },
                        "role": {
                            "type": "string",
                            "description": "The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
                            "enum": ["system", "user", "assistant"],
                        },
                    },
                    "required": ["content", "role"],
                },
            },
        },
        "required": ["model", "messages"],
    },
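The handler converts httpx failures into a RuntimeError and does not retry, while the tool description notes that retrying after a timeout can succeed. A caller could therefore wrap its invocation in a small retry loop; the sketch below is an assumption on the caller's side (the helper name, parameters, and backoff policy are not part of the server).

```python
import asyncio

# Hypothetical caller-side retry helper, not part of the server above.
# The handler surfaces API failures as RuntimeError("API error: ..."),
# so that is the exception retried here.
async def call_with_retry(make_call, attempts=3, backoff=1.0):
    """Await make_call() up to `attempts` times, doubling the delay
    between tries. make_call must return a fresh coroutine each call
    (a coroutine object can only be awaited once)."""
    for attempt in range(attempts):
        try:
            return await make_call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            await asyncio.sleep(backoff * 2 ** attempt)

# Usage (illustrative):
# result = await call_with_retry(
#     lambda: handle_call_tool("ask_perplexity", arguments)
# )
```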

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/xiyuefox/mcp-server-perplexity'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.