
search

Perform AI-powered searches across the web, academic sources, and platforms like YouTube and Reddit, or get writing assistance, using Perplexica's search engine with customizable focus modes.

Instructions

Search using Perplexica's AI-powered search engine.

This tool provides access to Perplexica's search capabilities with various focus modes for different types of searches including web search, academic search, writing assistance, and specialized searches for platforms like YouTube and Reddit.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | Search query | |
| focus_mode | Yes | Focus mode: webSearch, academicSearch, writingAssistant, wolframAlphaSearch, youtubeSearch, redditSearch | |
| chat_model | No | Chat model configuration | |
| embedding_model | No | Embedding model configuration | |
| optimization_mode | No | Optimization mode: speed or balanced | |
| history | No | Conversation history | |
| system_instructions | No | Custom system instructions | |
| stream | No | Whether to stream responses | |
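As a concrete illustration of the schema above, a minimal tool call might pass arguments like the following. The model names here are placeholders, not requirements; use whatever providers and models your Perplexica instance is configured for.

```python
# Example arguments for the 'search' tool. Only 'query' and 'focus_mode'
# are required; the rest fall back to server-side defaults.
search_args = {
    "query": "latest developments in retrieval-augmented generation",
    "focus_mode": "academicSearch",  # one of the six focus modes listed above
    "chat_model": {"provider": "openai", "name": "gpt-4o-mini"},          # placeholder
    "embedding_model": {"provider": "openai", "name": "text-embedding-3-small"},  # placeholder
    "optimization_mode": "balanced",  # "speed" or "balanced"
    "history": [],                    # prior conversation turns, if any
    "stream": False,
}
```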

Implementation Reference

  • The @mcp.tool()-decorated handler function for the 'search' MCP tool. It defines the tool logic and input schema via Annotated Pydantic Fields, validates the model configuration, and delegates to the perplexica_search helper for API interaction.
    @mcp.tool()
    async def search(
        query: Annotated[str, Field(description="Search query")],
        focus_mode: Annotated[
            str,
            Field(
                description="Focus mode: webSearch, academicSearch, writingAssistant, wolframAlphaSearch, youtubeSearch, redditSearch"
            ),
        ],
        chat_model: Annotated[
            Optional[dict], Field(description="Chat model configuration")
        ] = DEFAULT_CHAT_MODEL,
        embedding_model: Annotated[
            Optional[dict], Field(description="Embedding model configuration")
        ] = DEFAULT_EMBEDDING_MODEL,
        optimization_mode: Annotated[
            Optional[str], Field(description="Optimization mode: speed or balanced")
        ] = None,
        history: Annotated[
            Optional[list], Field(description="Conversation history")
        ] = None,
        system_instructions: Annotated[
            Optional[str], Field(description="Custom system instructions")
        ] = None,
        stream: Annotated[bool, Field(description="Whether to stream responses")] = False,
    ) -> dict:
        """
        Search using Perplexica's AI-powered search engine.

        This tool provides access to Perplexica's search capabilities with
        various focus modes for different types of searches including web
        search, academic search, writing assistance, and specialized searches
        for platforms like YouTube and Reddit.
        """
        # Fail fast if required models are absent
        if (chat_model or DEFAULT_CHAT_MODEL) is None or (
            embedding_model or DEFAULT_EMBEDDING_MODEL
        ) is None:
            return {
                "error": "Both chatModel and embeddingModel are required. Configure PERPLEXICA_* model env vars or pass them in the request."
            }

        return await perplexica_search(
            query=query,
            focus_mode=focus_mode,
            chat_model=chat_model,
            embedding_model=embedding_model,
            optimization_mode=optimization_mode,
            history=history,
            system_instructions=system_instructions,
            stream=stream,
        )
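The fail-fast check in the handler only rejects a request when a model is missing from both the call arguments and the environment-derived defaults: a per-call model wins, otherwise the default is used, and both chat and embedding models must resolve. This sketch isolates that logic (the function name and defaults here are illustrative, not part of the server's API):

```python
def models_configured(chat_model, embedding_model,
                      default_chat=None, default_embedding=None):
    """Mirror of the handler's fail-fast check: a per-call model takes
    precedence, otherwise the default applies; both must resolve."""
    return (chat_model or default_chat) is not None and \
           (embedding_model or default_embedding) is not None

# A per-call chat model combined with a default embedding model passes:
ok = models_configured(
    {"provider": "openai", "name": "gpt-4o-mini"},
    None,
    default_embedding={"provider": "openai", "name": "text-embedding-3-small"},
)

# An embedding model missing from both the call and the defaults fails:
bad = models_configured({"provider": "openai", "name": "gpt-4o-mini"}, None)
```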
  • perplexica_search helper function that builds the JSON payload from the search parameters, performs an asynchronous HTTP POST to the Perplexica backend URL, and handles errors.
    async def perplexica_search(
        query,
        focus_mode,
        chat_model=None,
        embedding_model=None,
        optimization_mode=None,
        history=None,
        system_instructions=None,
        stream=False,
    ) -> dict:
        """
        Search using the Perplexica API.

        Args:
            query (str): The search query
            chat_model (dict, optional): Chat model configuration with:
                provider: Provider name (e.g., openai, ollama)
                name: Model name (e.g., gpt-4o-mini)
                customOpenAIBaseURL: Optional custom OpenAI base URL
                customOpenAIKey: Optional custom OpenAI API key
            embedding_model (dict, optional): Embedding model configuration with:
                provider: Provider name (e.g., openai)
                name: Model name (e.g., text-embedding-3-small)
                customOpenAIBaseURL: Optional custom OpenAI base URL
                customOpenAIKey: Optional custom OpenAI API key
            focus_mode (str): Search focus mode (webSearch, academicSearch, etc.)
            optimization_mode (str, optional): Optimization mode (speed, balanced)
            history (list, optional): Conversation history
            system_instructions (str, optional): Custom system instructions
            stream (bool, optional): Whether to stream responses

        Returns:
            dict: Search results from Perplexica
        """
        # Prepare the request payload
        payload = {"query": query, "focusMode": focus_mode}

        # Add optional parameters if provided
        if chat_model:
            payload["chatModel"] = chat_model
        if embedding_model:
            payload["embeddingModel"] = embedding_model
        if optimization_mode:
            payload["optimizationMode"] = optimization_mode
        else:
            payload["optimizationMode"] = "balanced"
        if history is not None:
            payload["history"] = history
        else:
            payload["history"] = []
        if system_instructions:
            payload["systemInstructions"] = system_instructions
        if stream is not None:
            payload["stream"] = stream

        try:
            async with httpx.AsyncClient() as client:
                response = await client.post(
                    PERPLEXICA_BACKEND_URL,
                    json=payload,
                    timeout=PERPLEXICA_READ_TIMEOUT,
                )
                response.raise_for_status()
                return response.json()
        except httpx.HTTPError as e:
            return {"error": f"HTTP error occurred: {str(e)}"}
        except Exception as e:
            return {"error": f"An error occurred: {str(e)}"}
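The helper's default handling can be seen in isolation: when the caller omits the optional parameters, the payload still gets a "balanced" optimization mode, an empty history, and an explicit stream flag. This sketch mirrors only the payload-construction step (the function name build_payload is illustrative; it is not part of the server):

```python
def build_payload(query, focus_mode, chat_model=None, embedding_model=None,
                  optimization_mode=None, history=None,
                  system_instructions=None, stream=False):
    """Mirror of perplexica_search's payload construction and defaults."""
    payload = {"query": query, "focusMode": focus_mode}
    if chat_model:
        payload["chatModel"] = chat_model
    if embedding_model:
        payload["embeddingModel"] = embedding_model
    # "balanced" is the fallback when no optimization mode is given
    payload["optimizationMode"] = optimization_mode or "balanced"
    # history defaults to an empty list, never omitted
    payload["history"] = history if history is not None else []
    if system_instructions:
        payload["systemInstructions"] = system_instructions
    # stream is always sent explicitly (False by default)
    payload["stream"] = stream
    return payload

p = build_payload("what is MCP?", "webSearch")
```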