# search
Look up information across the web, academic sources, and discussion forums such as Reddit with Perplexica's AI-powered search and configurable model settings.
## Instructions
Search using Perplexica's AI-powered search engine.
This tool provides access to Perplexica's search capabilities through combinable source types: `web` for general web search, `academic` for scholarly articles, and `discussions` for forums such as Reddit.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query | |
| sources | Yes | Search sources array; valid values: `web` (general web search), `academic` (scholarly articles), `discussions` (forums such as Reddit) | |
| chat_model | No | Chat model configuration (`provider`, `name`) | env-configured default |
| embedding_model | No | Embedding model configuration (`provider`, `name`) | env-configured default |
| optimization_mode | No | Optimization mode: `speed`, `balanced`, or `quality` | `balanced` |
| history | No | Conversation history as `[[role, text], ...]` pairs | `[]` |
| system_instructions | No | Custom system instructions | |
| stream | No | Whether to stream responses | `false` |
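A minimal sketch of an argument set for this tool, assuming the `sources` values and model-dict shape shown in the implementation reference below; the query text and model names are illustrative examples, not defaults.

```python
# Hypothetical argument set for the "search" tool; the model names are
# illustrative examples, not required values.
search_args = {
    "query": "What is retrieval-augmented generation?",
    "sources": ["web", "academic"],  # combinable: "web", "academic", "discussions"
    "chat_model": {"provider": "openai", "name": "gpt-4o-mini"},
    "embedding_model": {"provider": "openai", "name": "text-embedding-3-small"},
    "optimization_mode": "balanced",  # or "speed" / "quality"
    "history": [],                    # [["human", "..."], ["assistant", "..."]] pairs
    "stream": False,
}
```

Only `query` and `sources` are required; when the model dicts are omitted, the server falls back to the `PERPLEXICA_*` environment defaults.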
## Implementation Reference
- `src/perplexica_mcp/server.py:211-260` (handler): The `search` tool handler function, decorated with `@mcp.tool()`. It accepts `query`, `sources`, `chat_model`, `embedding_model`, `optimization_mode`, `history`, `system_instructions`, and `stream` parameters, validates required models, and delegates to the underlying `perplexica_search()` helper.

```python
@mcp.tool()
async def search(
    query: Annotated[str, Field(description="Search query")],
    sources: Annotated[
        list,
        Field(
            description="Search sources array. Valid values: 'web' (general web search), 'academic' (scholarly articles), 'discussions' (forums like Reddit)"
        ),
    ],
    chat_model: Annotated[
        Optional[dict], Field(description="Chat model configuration")
    ] = DEFAULT_CHAT_MODEL,
    embedding_model: Annotated[
        Optional[dict], Field(description="Embedding model configuration")
    ] = DEFAULT_EMBEDDING_MODEL,
    optimization_mode: Annotated[
        Optional[str], Field(description="Optimization mode: speed, balanced, or quality")
    ] = None,
    history: Annotated[
        Optional[list], Field(description="Conversation history as [[role, text], ...] pairs")
    ] = None,
    system_instructions: Annotated[
        Optional[str], Field(description="Custom system instructions")
    ] = None,
    stream: Annotated[bool, Field(description="Whether to stream responses")] = False,
) -> dict:
    """
    Search using Perplexica's AI-powered search engine.

    This tool provides access to Perplexica's search capabilities with multiple
    source types that can be combined: web search, academic search, and
    discussions (forums).
    """
    # Fail fast if required models are absent
    if (chat_model or DEFAULT_CHAT_MODEL) is None or (
        embedding_model or DEFAULT_EMBEDDING_MODEL
    ) is None:
        return {
            "error": "Both chatModel and embeddingModel are required. Configure PERPLEXICA_* model env vars or pass them in the request."
        }
    return await perplexica_search(
        query=query,
        sources=sources,
        chat_model=chat_model,
        embedding_model=embedding_model,
        optimization_mode=optimization_mode,
        history=history,
        system_instructions=system_instructions,
        stream=stream,
    )
```

- `src/perplexica_mcp/server.py:211-237` (schema): The schema/type definitions for the `search` tool via FastMCP's decorator. Parameters use `Annotated` types with Pydantic `Field` descriptions, defining the input contract (`query: str`, `sources: list`, optional model dicts, `optimization_mode`, `history`, `stream`); the full signature appears in the handler listing above.

- `src/perplexica_mcp/server.py:211-211` (registration): Tool registration via the `@mcp.tool()` decorator on line 211, which registers the `search` function as a tool with the FastMCP server instance named `Perplexica` (line 42).

```python
@mcp.tool()
```

- `src/perplexica_mcp/server.py:135-208` (helper): The `perplexica_search()` helper function that implements the actual search API call. It builds the payload with optional model specs, normalizes model configurations via `_normalize_model_spec`, and sends a POST request to `PERPLEXICA_BACKEND_URL` using `httpx`.

```python
async def perplexica_search(
    query,
    sources,
    chat_model=None,
    embedding_model=None,
    optimization_mode=None,
    history=None,
    system_instructions=None,
    stream=False,
) -> dict:
    """
    Search using the Perplexica API

    Args:
        query (str): The search query
        sources (list): Search sources - list containing: "web", "academic", "discussions"
        chat_model (dict, optional): Chat model configuration with:
            provider: Provider name (e.g., openai, ollama)
            name: Model name (e.g., gpt-4o-mini)
        embedding_model (dict, optional): Embedding model configuration with:
            provider: Provider name (e.g., openai)
            name: Model name (e.g., text-embedding-3-small)
        optimization_mode (str, optional): Optimization mode (speed, balanced, quality)
        history (list, optional): Conversation history as
            [["human", "text"], ["assistant", "text"]] pairs
        system_instructions (str, optional): Custom system instructions
        stream (bool, optional): Whether to stream responses

    Returns:
        dict: Search results from Perplexica
    """
    # Prepare the request payload
    payload = {"query": query, "sources": sources}

    # Add optional parameters if provided
    if chat_model:
        payload["chatModel"] = chat_model
    if embedding_model:
        payload["embeddingModel"] = embedding_model
    if optimization_mode:
        payload["optimizationMode"] = optimization_mode
    else:
        payload["optimizationMode"] = "balanced"
    if history is not None:
        payload["history"] = history
    else:
        payload["history"] = []
    if system_instructions:
        payload["systemInstructions"] = system_instructions
    if stream is not None:
        payload["stream"] = stream

    try:
        async with httpx.AsyncClient() as client:
            # Normalize model specifications to providerId/key format
            try:
                if "chatModel" in payload and payload["chatModel"] is not None:
                    normalized_chat = await _normalize_model_spec(
                        client, payload["chatModel"], is_embedding=False
                    )
                    payload["chatModel"] = normalized_chat
                if "embeddingModel" in payload and payload["embeddingModel"] is not None:
                    normalized_embed = await _normalize_model_spec(
                        client, payload["embeddingModel"], is_embedding=True
                    )
                    payload["embeddingModel"] = normalized_embed
            except ValueError as ve:
                return {"error": f"Invalid model configuration: {str(ve)}"}

            response = await client.post(
                PERPLEXICA_BACKEND_URL, json=payload, timeout=PERPLEXICA_READ_TIMEOUT
            )
            response.raise_for_status()
            return response.json()
    except httpx.HTTPError as e:
        return {"error": f"HTTP error occurred: {str(e)}"}
    except Exception as e:
        return {"error": f"An error occurred: {str(e)}"}
```
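The payload-assembly and defaulting logic in the helper can be sketched in isolation; `build_payload` below is a hypothetical stand-in (not part of the server) showing how the snake_case arguments map to the camelCase API keys and how `optimizationMode` and `history` are defaulted when omitted.

```python
def build_payload(query, sources, optimization_mode=None, history=None, stream=False):
    """Sketch of the payload assembly inside perplexica_search(); hypothetical helper."""
    payload = {"query": query, "sources": sources}
    # snake_case args become camelCase API keys, with server-side defaults applied
    payload["optimizationMode"] = optimization_mode or "balanced"
    payload["history"] = history if history is not None else []
    payload["stream"] = stream
    return payload

# When optimization_mode and history are omitted, the payload falls back
# to "balanced" and an empty history list, matching the helper above.
payload = build_payload("rust lifetimes", ["web", "discussions"])
```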