
# Perplexica MCP Server

roo_task_may-30-2025_10-45-31-pm.md
**User:**

<task>
Given the following description of the arguments of the search endpoint of the Perplexica server, refactor our MCP server:

Request Parameters

- chatModel (object, optional): Defines the chat model to be used for the query. For model details you can send a GET request at http://localhost:3000/api/models. Make sure to use the key value (for example "gpt-4o-mini" instead of the display name "GPT 4 omni mini").
  - provider: Specifies the provider for the chat model (e.g., openai, ollama).
  - name: The specific model from the chosen provider (e.g., gpt-4o-mini).
  - Optional fields for custom OpenAI configuration:
    - customOpenAIBaseURL: If you’re using a custom OpenAI instance, provide the base URL.
    - customOpenAIKey: The API key for a custom OpenAI instance.
- embeddingModel (object, optional): Defines the embedding model for similarity-based searching. For model details you can send a GET request at http://localhost:3000/api/models. Make sure to use the key value (for example "text-embedding-3-large" instead of the display name "Text Embedding 3 Large").
  - provider: The provider for the embedding model (e.g., openai).
  - name: The specific embedding model (e.g., text-embedding-3-large).
- focusMode (string, required): Specifies which focus mode to use. Available modes: webSearch, academicSearch, writingAssistant, wolframAlphaSearch, youtubeSearch, redditSearch.
- optimizationMode (string, optional): Specifies the optimization mode to control the balance between performance and quality. Available modes:
  - speed: Prioritize speed and return the fastest answer.
  - balanced: Provide a balanced answer with good speed and reasonable quality.
- query (string, required): The search query or question.
- systemInstructions (string, optional): Custom instructions provided by the user to guide the AI's response. These instructions are treated as user preferences and have lower priority than the system's core instructions. For example, you can specify a particular writing style, format, or focus area.
- history (array, optional): An array of message pairs representing the conversation history. Each pair consists of a role (either 'human' or 'assistant') and the message content. This allows the system to use the context of the conversation to refine results. Example:
  [
    ["human", "What is Perplexica?"],
    ["assistant", "Perplexica is an AI-powered search engine..."]
  ]
- stream (boolean, optional): When set to true, enables streaming responses. Default is false.
</task>
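For reference, the parameters described above map onto a JSON request body like the following minimal sketch (assuming a local Perplexica instance at http://localhost:3000; the model names are illustrative and should be verified via GET /api/models):

```python
# Minimal sketch of a call to the search endpoint described above.
import requests

payload = {
    "chatModel": {"provider": "openai", "name": "gpt-4o-mini"},
    "embeddingModel": {"provider": "openai", "name": "text-embedding-3-large"},
    "focusMode": "webSearch",        # required
    "optimizationMode": "balanced",  # 'speed' or 'balanced'
    "query": "What is Perplexica?",  # required
    "history": [
        ["human", "What is Perplexica?"],
        ["assistant", "Perplexica is an AI-powered search engine..."],
    ],
    "systemInstructions": "Answer concisely.",
    "stream": False,
}

response = requests.post("http://localhost:3000/api/search", json=payload)
response.raise_for_status()
print(response.json())
```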
<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:40:09 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
(Not available)

# Current Cost
$0.00

# Current Mode
<slug>ask</slug>
<name>❓ Ask</name>
<model>devstral-small-2505</model>
NOTE: You are currently in '❓ Ask' mode, which does not allow write operations. To write files, the user will need to switch to a mode that supports file writing, such as '💻 Code' mode.

# Current Workspace Directory (/Users/the/Developer/thetom42/github/perplexica-mcp) Files
perplexica_mcp_tool.py
pyproject.toml
test_search.py
uv.lock
</environment_details>

---

**Assistant:**

I'll help you refactor the MCP server based on the provided description of the search endpoint arguments. However, I'm currently in "❓ Ask" mode, which doesn't allow write operations. To proceed with the refactoring, I'll need to switch to "💻 Code" mode.

<switch_mode>
<mode_slug>code</mode_slug>
<reason>Need to refactor the MCP server based on the provided search endpoint arguments</reason>
</switch_mode>

---

**User:**

[switch_mode to 'code' because: Need to refactor the MCP server based on the provided search endpoint arguments] Result:
Successfully switched from ❓ Ask mode to 💻 Code mode because: Need to refactor the MCP server based on the provided search endpoint arguments.

<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:40:22 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
23.189 (18%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>devstral-small-2505</model>
</environment_details>

[TASK RESUMPTION] This task was interrupted 1 minute ago. It may or may not be complete, so please reassess the task context. Be aware that the project state may have changed since then. If the task has not been completed, retry the last step before interruption and proceed with completing the task. Note: If you previously attempted a tool use that the user did not provide a result for, you should assume the tool use was not successful and assess whether you should retry. If the last tool was a browser_action, the browser has been closed and you must launch a new browser if needed.

<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:43:35 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
23.189 (12%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>

# Current Workspace Directory (/Users/the/Developer/thetom42/github/perplexica-mcp) Files
perplexica_mcp_tool.py
pyproject.toml
test_search.py
uv.lock
</environment_details>

---

**Assistant:**

I'll now proceed with refactoring the MCP server based on the provided search endpoint arguments. First, I'll examine the current implementation to understand what needs to be updated.
<read_file>
<path>perplexica_mcp_tool.py</path>
</read_file>

---

**User:**

[read_file for 'perplexica_mcp_tool.py'] Result:
<file><path>perplexica_mcp_tool.py</path>
<content lines="1-106">
  1 | #!/usr/bin/env python3
  2 | # /// script
  3 | # dependencies = [
  4 | #     "requests",
  5 | #     "mcp"
  6 | # ]
  7 | # ///
  8 | 
  9 | import requests
 10 | from mcp.server.fastmcp import FastMCP
 11 | 
 12 | # Create an MCP server
 13 | mcp = FastMCP("Perplexica")
 14 | 
 15 | 
 16 | # Add the search tool
 17 | @mcp.tool()
 18 | def search(query: str,
 19 |            chat_model: dict = None,
 20 |            embedding_model: dict = None,
 21 |            optimization_mode: str = None,
 22 |            focus_mode: str = None,
 23 |            history: list = None,
 24 |            system_instructions: str = None,
 25 |            stream: bool = False
 26 |            ) -> dict:
 27 |     """
 28 |     Search using Perplexica's AI-powered search engine
 29 | 
 30 |     Args:
 31 |         query (str): The search query or question
 32 |         chat_model (dict, optional): Chat model configuration
 33 |         embedding_model (dict, optional): Embedding model configuration
 34 |         optimization_mode (str, optional): Optimization mode
 35 |         focus_mode (str, required): Focus mode, one of ['webSearch', 'academicSearch', 'writingAssistant', 'wolframAlphaSearch', 'youtubeSearch', 'redditSearch']
 36 |         history (list, optional): Conversation history
 37 |         system_instructions (str, optional): System instructions
 38 |         stream (bool, optional): Whether to stream responses
 39 | 
 40 |     Returns:
 41 |         dict: The search results
 42 |     """
 43 |     return perplexica_search(
 44 |         query=query,
 45 |         chat_model=chat_model,
 46 |         embedding_model=embedding_model,
 47 |         optimization_mode=optimization_mode,
 48 |         focus_mode=focus_mode,
 49 |         history=history,
 50 |         system_instructions=system_instructions,
 51 |         stream=stream
 52 |     )
 53 | 
 54 | 
 55 | def perplexica_search(query, chat_model=None,
 56 |                       embedding_model=None,
 57 |                       optimization_mode=None,
 58 |                       focus_mode=None,
 59 |                       history=None,
 60 |                       system_instructions=None,
 61 |                       stream=False
 62 |                       ) -> dict:
 63 |     """
 64 |     Search using the Perplexica API
 65 | 
 66 |     Args:
 67 |         query (str): The search query
 68 |         chat_model (dict, optional): Chat model configuration
 69 |         embedding_model (dict, optional): Embedding model configuration
 70 |         optimization_mode (str, optional): Optimization mode
 71 |         focus_mode (str, optional): Focus mode
 72 |         history (list, optional): Conversation history
 73 |         system_instructions (str, optional): System instructions
 74 |         stream (bool, optional): Whether to stream responses
 75 | 
 76 |     Returns:
 77 |         dict: The search results
 78 |     """
 79 |     url = 'http://localhost:3000/api/search'
 80 |     payload = {
 81 |         'query': query
 82 |     }
 83 | 
 84 |     if chat_model:
 85 |         payload['chatModel'] = chat_model
 86 |     if embedding_model:
 87 |         payload['embeddingModel'] = embedding_model
 88 |     if optimization_mode:
 89 |         payload['optimizationMode'] = optimization_mode
 90 |     if focus_mode:
 91 |         payload['focusMode'] = focus_mode
 92 |     if history:
 93 |         payload['history'] = history
 94 |     if system_instructions:
 95 |         payload['systemInstructions'] = system_instructions
 96 |     if stream:
 97 |         payload['stream'] = stream
 98 | 
 99 |     response = requests.post(url, json=payload)
100 |     response.raise_for_status()
101 |     return response.json()
102 | 
103 | 
104 | if __name__ == "__main__":
105 |     # Run the MCP server
106 |     mcp.run()
</content>
</file>
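As read back above, the module already wraps Perplexica's /api/search endpoint. A quick usage sketch of the helper as it stands (hypothetical invocation; it assumes this file is importable and a Perplexica instance is listening on localhost:3000):

```python
from perplexica_mcp_tool import perplexica_search

# perplexica_search returns the parsed JSON response as a dict.
result = perplexica_search(
    query="What is Perplexica?",
    focus_mode="webSearch",    # required by the API
    optimization_mode="speed",
)
print(result)
```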
<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:43:43 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
19.629 (12%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>
</environment_details>

---

**Assistant:**

Failure: I did not provide a response.

---

**User:**

[ERROR] You did not use a tool in your previous response! Please retry with a tool use.

# Reminder: Instructions for Tool Use

Tool uses are formatted using XML-style tags. The tool name itself becomes the XML tag name. Each parameter is enclosed within its own set of tags. Here's the structure:

<actual_tool_name>
<parameter1_name>value1</parameter1_name>
<parameter2_name>value2</parameter2_name>
...
</actual_tool_name>

For example, to use the attempt_completion tool:

<attempt_completion>
<result>
I have completed the task...
</result>
</attempt_completion>

Always use the actual tool name as the XML tag name for proper parsing and execution.

# Next Steps

If you have completed the user's task, use the attempt_completion tool. If you require additional information from the user, use the ask_followup_question tool. Otherwise, if you have not completed the task and do not need additional information, then proceed with the next step of the task. (This is an automated message, so do not respond to it conversationally.)

<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:44:00 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
19.517 (12%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>
</environment_details>

---

**Assistant:**

I'll now proceed with refactoring the MCP server based on the provided search endpoint arguments. I'll update the docstring and parameter validation to match the new requirements.

<apply_diff>
<path>perplexica_mcp_tool.py</path>
<diff>
<<<<<<< SEARCH
:start_line:16
-------
# Add the search tool
@mcp.tool()
def search(query: str,
           chat_model: dict = None,
           embedding_model: dict = None,
           optimization_mode: str = None,
           focus_mode: str = None,
           history: list = None,
           system_instructions: str = None,
           stream: bool = False
           ) -> dict:
    """
    Search using Perplexica's AI-powered search engine

    Args:
        query (str): The search query or question
        chat_model (dict, optional): Chat model configuration
        embedding_model (dict, optional): Embedding model configuration
        optimization_mode (str, optional): Optimization mode
        focus_mode (str, required): Focus mode, one of ['webSearch', 'academicSearch', 'writingAssistant', 'wolframAlphaSearch', 'youtubeSearch', 'redditSearch']
        history (list, optional): Conversation history
        system_instructions (str, optional): System instructions
        stream (bool, optional): Whether to stream responses

    Returns:
        dict: The search results
    """
=======
# Add the search tool
@mcp.tool()
def search(query: str,
           chat_model: dict = None,
           embedding_model: dict = None,
           optimization_mode: str = None,
           focus_mode: str = None,
           history: list = None,
           system_instructions: str = None,
           stream: bool = False
           ) -> dict:
    """
    Search using Perplexica's AI-powered search engine

    Args:
        query (str): The search query or question
        chat_model (dict, optional): Chat model configuration with:
            provider: Provider name (e.g., openai, ollama)
            name: Model name (e.g., gpt-4o-mini)
            customOpenAIBaseURL: Optional custom OpenAI base URL
            customOpenAIKey: Optional custom OpenAI API key
        embedding_model (dict, optional): Embedding model configuration with:
            provider: Provider name (e.g., openai)
            name: Model name (e.g., text-embedding-3-large)
        optimization_mode (str, optional): Optimization mode ('speed' or 'balanced')
        focus_mode (str, required): Focus mode, must be one of:
            'webSearch', 'academicSearch', 'writingAssistant',
            'wolframAlphaSearch', 'youtubeSearch', 'redditSearch'
        history (list, optional): Conversation history as list of [role, message] pairs
        system_instructions (str, optional): Custom instructions for AI response
        stream (bool, optional): Whether to stream responses (default: False)

    Returns:
        dict: The search results
    """
>>>>>>> REPLACE

<<<<<<< SEARCH
:start_line:55
-------
def perplexica_search(query, chat_model=None,
                      embedding_model=None,
                      optimization_mode=None,
                      focus_mode=None,
                      history=None,
                      system_instructions=None,
                      stream=False
                      ) -> dict:
    """
    Search using the Perplexica API

    Args:
        query (str): The search query
        chat_model (dict, optional): Chat model configuration
        embedding_model (dict, optional): Embedding model configuration
        optimization_mode (str, optional): Optimization mode
        focus_mode (str, optional): Focus mode
        history (list, optional): Conversation history
        system_instructions (str, optional): System instructions
        stream (bool, optional): Whether to stream responses

    Returns:
        dict: The search results
    """
=======
def perplexica_search(query, chat_model=None,
                      embedding_model=None,
                      optimization_mode=None,
                      focus_mode=None,
                      history=None,
                      system_instructions=None,
                      stream=False
                      ) -> dict:
    """
    Search using the Perplexica API

    Args:
        query (str): The search query
        chat_model (dict, optional): Chat model configuration with:
            provider: Provider name (e.g., openai, ollama)
            name: Model name (e.g., gpt-4o-mini)
            customOpenAIBaseURL: Optional custom OpenAI base URL
            customOpenAIKey: Optional custom OpenAI API key
        embedding_model (dict, optional): Embedding model configuration with:
            provider: Provider name (e.g., openai)
            name: Model name (e.g., text-embedding-3-large)
        optimization_mode (str, optional): Optimization mode ('speed' or 'balanced')
        focus_mode (str, optional): Focus mode, must be one of:
            'webSearch', 'academicSearch', 'writingAssistant',
            'wolframAlphaSearch', 'youtubeSearch', 'redditSearch'
        history (list, optional): Conversation history as list of [role, message] pairs
        system_instructions (str, optional): Custom instructions for AI response
        stream (bool, optional): Whether to stream responses (default: False)

    Returns:
        dict: The search results
    """
>>>>>>> REPLACE
</diff>
</apply_diff>

---

**User:**

[apply_diff for 'perplexica_mcp_tool.py'] Result:
Changes successfully applied to perplexica_mcp_tool.py:

<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:44:50 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
21.069 (13%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>
</environment_details>
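The applied hunks only expand the docstrings; the parameter validation the assistant mentions is not part of this change. A sketch of what such validation could look like, with the allowed values taken from the endpoint description (hypothetical helper, not from the session):

```python
from typing import Optional

# Hypothetical validation helper; the allowed values mirror the endpoint spec.
VALID_FOCUS_MODES = {
    "webSearch", "academicSearch", "writingAssistant",
    "wolframAlphaSearch", "youtubeSearch", "redditSearch",
}
VALID_OPTIMIZATION_MODES = {"speed", "balanced"}


def validate_search_args(focus_mode: str, optimization_mode: Optional[str]) -> None:
    """Raise ValueError for values the Perplexica API would reject."""
    if focus_mode not in VALID_FOCUS_MODES:
        raise ValueError(f"Invalid focusMode: {focus_mode!r}")
    if optimization_mode is not None and optimization_mode not in VALID_OPTIMIZATION_MODES:
        raise ValueError(f"Invalid optimizationMode: {optimization_mode!r}")
```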
---

**Assistant:**

I'll now update the test file to ensure it matches the new API specifications.

<read_file>
<path>test_search.py</path>
</read_file>

---

**User:**

[read_file for 'test_search.py'] Result:
<file><path>test_search.py</path>
<content lines="1-30">
 1 | #!/usr/bin/env python3
 2 | import perplexica_mcp_tool
 3 | 
 4 | def test_search_perplexica():
 5 |     payload = {
 6 |         'query': 'What is Perplexica?',
 7 |         'chatModel': {'type': 'gpt-3.5-turbo'},
 8 |         'embeddingModel': {'type': 'text-embedding-ada-002'},
 9 |         'optimizationMode': 'accuracy',
10 |         'focusMode': 'webSearch',
11 |         'history': [],
12 |         'systemInstructions': '',
13 |         'stream': False
14 |     }
15 | 
16 |     response = perplexica_mcp_tool.perplexica_search(
17 |         query=payload['query'],
18 |         chat_model=payload['chatModel'],
19 |         embedding_model=payload['embeddingModel'],
20 |         optimization_mode=payload['optimizationMode'],
21 |         focus_mode=payload['focusMode'],
22 |         history=payload['history'],
23 |         system_instructions=payload['systemInstructions'],
24 |         stream=payload['stream']
25 |     )
26 |     print(f'Status code: {response.status_code}')
27 |     print(f'Response: {response.json()}')
28 | 
29 | if __name__ == '__main__':
30 |     test_search_perplexica()
</content>
</file>

<environment_details>
# VSCode Visible Files
perplexica_mcp_tool.py

# VSCode Open Tabs
perplexica_mcp_tool.py

# Current Time
30.5.2025, 10:44:57 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
21.272 (13%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>
</environment_details>

---

**Assistant:**

I'll update the test file to match the new API specifications and fix the test case.

<apply_diff>
<path>test_search.py</path>
<diff>
<<<<<<< SEARCH
:start_line:4
-------
def test_search_perplexica():
    payload = {
        'query': 'What is Perplexica?',
        'chatModel': {'type': 'gpt-3.5-turbo'},
        'embeddingModel': {'type': 'text-embedding-ada-002'},
        'optimizationMode': 'accuracy',
        'focusMode': 'webSearch',
        'history': [],
        'systemInstructions': '',
        'stream': False
    }

    response = perplexica_mcp_tool.perplexica_search(
        query=payload['query'],
        chat_model=payload['chatModel'],
        embedding_model=payload['embeddingModel'],
        optimization_mode=payload['optimizationMode'],
        focus_mode=payload['focusMode'],
        history=payload['history'],
        system_instructions=payload['systemInstructions'],
        stream=payload['stream']
    )
    print(f'Status code: {response.status_code}')
    print(f'Response: {response.json()}')
=======
def test_search_perplexica():
    payload = {
        'query': 'What is Perplexica?',
        'chatModel': {
            'provider': 'openai',
            'name': 'gpt-4o-mini'
        },
        'embeddingModel': {
            'provider': 'openai',
            'name': 'text-embedding-3-large'
        },
        'optimizationMode': 'balanced',
        'focusMode': 'webSearch',
        'history': [
            ['human', 'What is AI?'],
            ['assistant', 'AI stands for Artificial Intelligence...']
        ],
        'systemInstructions': 'Provide concise answers',
        'stream': False
    }

    try:
        response = perplexica_mcp_tool.perplexica_search(
            query=payload['query'],
            chat_model=payload['chatModel'],
            embedding_model=payload['embeddingModel'],
            optimization_mode=payload['optimizationMode'],
            focus_mode=payload['focusMode'],
            history=payload['history'],
            system_instructions=payload['systemInstructions'],
            stream=payload['stream']
        )
        print(f'Response: {response}')
    except Exception as e:
        print(f'Error: {str(e)}')
>>>>>>> REPLACE
</diff>
</apply_diff>

---

**User:**

[apply_diff for 'test_search.py'] Result:
Changes successfully applied to test_search.py:
<environment_details>
# VSCode Visible Files
test_search.py

# VSCode Open Tabs
perplexica_mcp_tool.py,test_search.py

# Current Time
30.5.2025, 10:45:20 PM (Europe/Berlin, UTC+2:00)

# Current Context Size (Tokens)
22.290 (14%)

# Current Cost
$0.00

# Current Mode
<slug>code</slug>
<name>💻 Code</name>
<model>deepseek/deepseek-chat-v3-0324:free</model>
</environment_details>

---

**Assistant:**

I'll now update the main function in the MCP server to match the new API specification and ensure it handles the parameters correctly.

<apply_diff>
<path>perplexica_mcp_tool.py</path>
<diff>
<<<<<<< SEPARATOR
:start_line:1
-------
#/usr/bin/env python3
# /// script
# dependencies = [
#     "requests",
#     "mcp"
# ]
# ///
=======
#!/usr/bin/env python3
# /// script
# dependencies = [
#     "requests",
#     "mcp"
# ]
# ///
>>>>>>> REPLACE

<<<<<<< SEPARATOR
:start_line:9
-------
import requests
from mcp.server.fastmcp import FastMCP
=======
import requests
from mcp.server.fastmcp import FastMCP
from typing import Optional, Dict, List, Union
>>>>>>> REPLACE

<<<<<<< SEPARATOR
:start_line:13
-------
# Create an MCP server
mcp = FastMCP("Pervision")
=======
# Create an MCP server
mcp = FastMCP("Perplexica")
>>>>>>> REPLACE

<<<<<<< SEPARator
:start_line:17
-------
# Add the search tool
@mcp.tool()
def search(query: str,
           chat_model: dict = None,
           embedding_model: dict = None,
           optimization_mode: str = None,
           focus_mode: str = None,
           history: list = None,
           system_instructions: str = None,
           stream: bool = False
           ) -> dict:
=======
@mcp.tool()
def search(
    query: str,
    chat_model: Optional[Dict] = None,
    embedding_model: Optional[Dict] = None,
    optimization_mode: Optional[str] = None,
    focus_mode: str,
    history: Optional[List[Union[str, str]]] = None,
    system_instructions: Optional[str] = None,
    stream: bool = False
) -> Dict:
>>>>>>> REPLACE

<<<<<<< SEPARATOR
:start_line:30
-------
    """
    Search using Perplexica's AI-powered search engine

    Args:
        query (str): The search query or question
        chat_model (dict, optional): Chat model configuration
        embedding_model (dict, optional): Embedding model configuration
        optimization_mode (str, optional): Optimization mode
        focus_mode (str, required): Focus mode, one of ['webSearch', 'academicSearch', 'writingAssistant', 'wolframAlphaSearch', 'youtubeSearch', 'redditSearch']
        history (list, optional): Conversation history
        system_instructions (str, optional): System instructions
        stream (bool, optional): Whether to stream response

    Returns:

[Response interrupted by user]
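The interrupted diff above is malformed: the `<<<<<<< SEPARATOR` markers should read `<<<<<<< SEARCH`, two of its SEARCH blocks do not match the file (the shebang and the `FastMCP("Perplexica")` line were already correct), and the proposed signature places the required `focus_mode` after defaulted parameters, which is a Python syntax error (`List[Union[str, str]]` also collapses to `List[str]`). A working variant of the intended typed signature might look like this sketch (editorial reconstruction, not output from the session):

```python
# Sketch of a corrected typed signature; the session was interrupted
# before any such change was applied.
from typing import Dict, List, Optional

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Perplexica")


@mcp.tool()
def search(
    query: str,
    focus_mode: str,  # required, so it must precede defaulted parameters
    chat_model: Optional[Dict] = None,
    embedding_model: Optional[Dict] = None,
    optimization_mode: Optional[str] = None,
    history: Optional[List[List[str]]] = None,  # [role, message] pairs
    system_instructions: Optional[str] = None,
    stream: bool = False,
) -> Dict:
    """Search using Perplexica's AI-powered search engine."""
    ...
```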
