MCP Perplexity Search

by spences10
Verified

chat_completion

Generate chat completions using the Perplexity API

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| custom_template | No | Custom prompt template. If provided, overrides prompt_template. | |
| format | No | Response format. Use json for structured data, markdown for formatted text with code blocks. Overrides template format if provided. | text |
| include_sources | No | Include source URLs in the response. Overrides template setting if provided. | false |
| max_tokens | No | The maximum number of tokens to generate in the response. One token is roughly 4 characters of English text. | 1024 |
| messages | Yes | Conversation messages, each an object with a role (system, user, or assistant) and content. | |
| model | No | Model to use for completion. Note: llama-3.1 models will be deprecated after 2/22/2025. | sonar |
| prompt_template | No | Predefined prompt template for common use cases. Available templates: technical_docs (technical documentation with code examples and source references), security_practices (security best practices and implementation guidelines with references), code_review (code analysis focusing on best practices and improvements), api_docs (API documentation in structured JSON format with examples). | |
| temperature | No | Controls randomness in the output. Higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic. | 0.7 |
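To illustrate how the parameters above fit together, here is a sketch of an arguments object for a chat_completion call. The field names and defaults come from the schema; the message contents and parameter values are hypothetical examples.

```python
# Hypothetical arguments for the chat_completion tool.
# Field names match the input schema; values are illustrative only.
arguments = {
    "model": "sonar",            # default model
    "format": "markdown",        # formatted text with code blocks
    "include_sources": True,     # append source URLs to the response
    "max_tokens": 1024,          # must be between 1 and 4096
    "temperature": 0.2,          # low value for focused, deterministic output
    "messages": [
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain what an MCP tool schema is."},
    ],
}
```

Only `messages` is required; every other field falls back to its schema default when omitted.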

Input Schema (JSON Schema)

{
  "properties": {
    "custom_template": {
      "description": "Custom prompt template. If provided, overrides prompt_template.",
      "properties": {
        "format": {
          "description": "Response format",
          "enum": ["text", "markdown", "json"],
          "type": "string"
        },
        "include_sources": {
          "description": "Whether to include source URLs in responses",
          "type": "boolean"
        },
        "system": {
          "description": "System message that sets the assistant's role and behavior",
          "type": "string"
        }
      },
      "required": ["system"],
      "type": "object"
    },
    "format": {
      "default": "text",
      "description": "Response format. Use json for structured data, markdown for formatted text with code blocks. Overrides template format if provided.",
      "enum": ["text", "markdown", "json"],
      "type": "string"
    },
    "include_sources": {
      "default": false,
      "description": "Include source URLs in the response. Overrides template setting if provided.",
      "type": "boolean"
    },
    "max_tokens": {
      "default": 1024,
      "description": "The maximum number of tokens to generate in the response. One token is roughly 4 characters for English text.",
      "maximum": 4096,
      "minimum": 1,
      "type": "number"
    },
    "messages": {
      "items": {
        "properties": {
          "content": { "type": "string" },
          "role": {
            "enum": ["system", "user", "assistant"],
            "type": "string"
          }
        },
        "required": ["role", "content"],
        "type": "object"
      },
      "type": "array"
    },
    "model": {
      "default": "sonar",
      "description": "Model to use for completion. Note: llama-3.1 models will be deprecated after 2/22/2025",
      "enum": [
        "sonar-pro",
        "sonar",
        "llama-3.1-sonar-small-128k-online",
        "llama-3.1-sonar-large-128k-online",
        "llama-3.1-sonar-huge-128k-online"
      ],
      "type": "string"
    },
    "prompt_template": {
      "description": "Predefined prompt template to use for common use cases. Available templates:\n- technical_docs: Technical documentation with code examples and source references\n- security_practices: Security best practices and implementation guidelines with references\n- code_review: Code analysis focusing on best practices and improvements\n- api_docs: API documentation in structured JSON format with examples",
      "enum": ["technical_docs", "security_practices", "code_review", "api_docs"],
      "type": "string"
    },
    "temperature": {
      "default": 0.7,
      "description": "Controls randomness in the output. Higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic.",
      "maximum": 1,
      "minimum": 0,
      "type": "number"
    }
  },
  "required": ["messages"],
  "type": "object"
}
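A client can check arguments against this schema before sending a request. The sketch below is a minimal hand-rolled check of the constraints stated in the schema (required fields, enums, and numeric ranges), not a full JSON Schema validator; the function name `validate_args` is an assumption for illustration.

```python
# Minimal validation sketch for chat_completion arguments.
# Constraints are copied from the JSON Schema above; this is not a
# general-purpose JSON Schema validator.
ALLOWED_MODELS = {
    "sonar-pro",
    "sonar",
    "llama-3.1-sonar-small-128k-online",
    "llama-3.1-sonar-large-128k-online",
    "llama-3.1-sonar-huge-128k-online",
}
ALLOWED_ROLES = {"system", "user", "assistant"}


def validate_args(args: dict) -> list[str]:
    """Return a list of schema violations (empty list means valid)."""
    errors = []

    # messages is the only required property, and must be an array of
    # {role, content} objects.
    msgs = args.get("messages")
    if not isinstance(msgs, list):
        errors.append("messages is required and must be an array")
    else:
        for m in msgs:
            if m.get("role") not in ALLOWED_ROLES or "content" not in m:
                errors.append(f"invalid message: {m!r}")

    if args.get("model", "sonar") not in ALLOWED_MODELS:
        errors.append("model must be one of the allowed model names")

    if not 0 <= args.get("temperature", 0.7) <= 1:
        errors.append("temperature must be between 0 and 1")

    if not 1 <= args.get("max_tokens", 1024) <= 4096:
        errors.append("max_tokens must be between 1 and 4096")

    return errors
```

Omitted optional fields pass because the check substitutes the schema defaults (`sonar`, 0.7, 1024) before comparing against each range.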

You must be authenticated.
