ask_perplexity
Access accurate, source-backed information online for research, fact-checking, or decision-making. Responses include citations and diverse perspectives for reliable insights.
Instructions
Perplexity equips agents with a specialized tool for efficiently gathering source-backed information from the internet, ideal for scenarios requiring research, fact-checking, or contextual data to inform decisions and responses. Each response includes citations, which provide transparent references to the sources used for the generated answer, and choices, which contain the model's suggested responses, enabling users to access reliable information and diverse perspectives. This function may encounter timeout errors due to long processing times, but retrying the operation can lead to successful completion.

Response structure

- id: An ID generated uniquely for each response.
- model: The model used to generate the response.
- object: The object type, which always equals `chat.completion`.
- created: The Unix timestamp (in seconds) of when the completion was created.
- citations[]: Citations for the generated answer.
- choices[]: The list of completion choices the model generated for the input prompt.
- usage: Usage statistics for the completion request.
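To make the field list concrete, a response might look like the sketch below. The top-level keys mirror the structure above; the inner shape of each `choices[]` entry follows the usual OpenAI-style chat-completion convention and, like every value shown, is illustrative rather than real API output.

```python
# Illustrative sketch of the response structure described above.
# All values are invented; the inner shape of choices[] assumes the
# OpenAI-style chat-completion convention.
example_response = {
    "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
    "model": "llama-3.1-sonar-small-128k-online",
    "object": "chat.completion",
    "created": 1724369245,
    "citations": [
        "https://example.com/source-1",
        "https://example.com/source-2",
    ],
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": "A source-backed answer, citing [1] and [2].",
            },
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 42, "total_tokens": 56},
}
```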
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| messages | Yes | A list of messages comprising the conversation so far. | |
| model | Yes | The name of the model that will complete your prompt. | |
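For example, a minimal arguments object satisfying this schema could look like the following sketch (the conversation content is invented; note how the roles follow the optional-system-then-user convention described in the schema below):

```python
# Hypothetical arguments for ask_perplexity; the question text is invented.
arguments = {
    "model": "llama-3.1-sonar-small-128k-online",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What is the tallest building completed in 2023?"},
    ],
}
```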
Implementation Reference
- The tool handler, decorated with `@server.call_tool()`: it checks that the tool name is `ask_perplexity`, makes an asynchronous HTTP POST request to the Perplexity API with the provided arguments, and returns the response body as `TextContent`.

```python
@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    if name != "ask_perplexity":
        raise ValueError(f"Unknown tool: {name}")

    try:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{PERPLEXITY_API_BASE_URL}/chat/completions",
                headers={
                    "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
                    "Content-Type": "application/json",
                },
                json=arguments,
                timeout=None,
            )
            response.raise_for_status()
    except httpx.HTTPError as e:
        raise RuntimeError(f"API error: {str(e)}")

    return [
        types.TextContent(
            type="text",
            text=response.text,
        )
    ]
```
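Since the handler posts with `timeout=None` and converts any `httpx.HTTPError` into a `RuntimeError`, retrying is left to the caller, as the tool description suggests. A minimal client-side retry wrapper, written here as a generic sketch rather than as part of the server, might look like:

```python
import asyncio
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")

async def with_retry(call: Callable[[], Awaitable[T]], attempts: int = 3) -> T:
    """Retry an awaitable tool call with exponential backoff.

    Intended for wrapping an ask_perplexity invocation, which surfaces
    timeouts and other API failures as RuntimeError.
    """
    for attempt in range(attempts):
        try:
            return await call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of retries; propagate the last error
            await asyncio.sleep(2**attempt)  # back off: 1s, 2s, 4s, ...
    raise AssertionError("unreachable")
```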
- src/mcp_server_perplexity/server.py:17-83 (registration): the tool registration via `@server.list_tools()`, defining the `ask_perplexity` tool with its name, description, and input schema.

```python
@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="ask_perplexity",
            description=dedent(
                """
                Perplexity equips agents with a specialized tool for efficiently
                gathering source-backed information from the internet, ideal for
                scenarios requiring research, fact-checking, or contextual data
                to inform decisions and responses. Each response includes
                citations, which provide transparent references to the sources
                used for the generated answer, and choices, which contain the
                model's suggested responses, enabling users to access reliable
                information and diverse perspectives. This function may encounter
                timeout errors due to long processing times, but retrying the
                operation can lead to successful completion.
                [Response structure]
                - id: An ID generated uniquely for each response.
                - model: The model used to generate the response.
                - object: The object type, which always equals `chat.completion`.
                - created: The Unix timestamp (in seconds) of when the
                  completion was created.
                - citations[]: Citations for the generated answer.
                - choices[]: The list of completion choices the model generated
                  for the input prompt.
                - usage: Usage statistics for the completion request.
                """
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "model": {
                        "type": "string",
                        "description": "The name of the model that will complete your prompt.",
                        "enum": [
                            "llama-3.1-sonar-small-128k-online",
                            # Commenting out larger models, which have higher
                            # risks of timing out, until Claude Desktop can
                            # handle long-running tasks effectively.
                            # "llama-3.1-sonar-large-128k-online",
                            # "llama-3.1-sonar-huge-128k-online",
                        ],
                    },
                    "messages": {
                        "type": "array",
                        "description": "A list of messages comprising the conversation so far.",
                        "items": {
                            "type": "object",
                            "properties": {
                                "content": {
                                    "type": "string",
                                    "description": "The contents of the message in this turn of conversation.",
                                },
                                "role": {
                                    "type": "string",
                                    "description": "The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
                                    "enum": ["system", "user", "assistant"],
                                },
                            },
                            "required": ["content", "role"],
                        },
                    },
                },
                "required": ["model", "messages"],
            },
        )
    ]
```
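Putting the pieces together, an MCP client could invoke the tool roughly as follows. This is a sketch against the MCP Python SDK's stdio client; the `uvx mcp-server-perplexity` launch command is an assumption about your setup, and `PERPLEXITY_API_KEY` must be set in the server's environment.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed launch command for this server; adjust to your install.
    params = StdioServerParameters(command="uvx", args=["mcp-server-perplexity"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask_perplexity",
                arguments={
                    "model": "llama-3.1-sonar-small-128k-online",
                    "messages": [
                        {"role": "user", "content": "Who won the 2022 World Cup?"}
                    ],
                },
            )
            print(result)

asyncio.run(main())
```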