
Perplexica MCP Server

search

Look up information across web, academic sources, YouTube, Reddit, and more with AI-powered focus modes and configurable settings.

Instructions

Search using Perplexica's AI-powered search engine.

This tool provides access to Perplexica's search capabilities with various focus modes for different types of searches including web search, academic search, writing assistance, and specialized searches for platforms like YouTube and Reddit.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Search query | |
| focus_mode | Yes | Focus mode: webSearch, academicSearch, writingAssistant, wolframAlphaSearch, youtubeSearch, redditSearch | |
| chat_model | No | Chat model configuration | |
| embedding_model | No | Embedding model configuration | |
| optimization_mode | No | Optimization mode: speed or balanced | |
| history | No | Conversation history | |
| system_instructions | No | Custom system instructions | |
| stream | No | Whether to stream responses | |
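To make the schema concrete, a call might supply arguments like the following. The query and option values here are illustrative examples, not taken from the server's documentation:

```python
# Hypothetical arguments for the 'search' tool, following the schema above.
# Only 'query' and 'focus_mode' are required; the rest fall back to defaults.
search_args = {
    "query": "transformer attention mechanisms survey",
    "focus_mode": "academicSearch",   # one of the six documented modes
    "optimization_mode": "balanced",  # or "speed"
    "history": [],                    # optional prior conversation turns
    "stream": False,
}
```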

Implementation Reference

  • The 'search' tool handler function, decorated with @mcp.tool(). It accepts query, sources, chat_model, embedding_model, optimization_mode, history, system_instructions, and stream parameters, validates required models, and delegates to the underlying perplexica_search() helper.
    @mcp.tool()
    async def search(
        query: Annotated[str, Field(description="Search query")],
        sources: Annotated[
            list,
            Field(
                description="Search sources array. Valid values: 'web' (general web search), 'academic' (scholarly articles), 'discussions' (forums like Reddit)"
            ),
        ],
        chat_model: Annotated[
            Optional[dict], Field(description="Chat model configuration")
        ] = DEFAULT_CHAT_MODEL,
        embedding_model: Annotated[
            Optional[dict], Field(description="Embedding model configuration")
        ] = DEFAULT_EMBEDDING_MODEL,
        optimization_mode: Annotated[
            Optional[str], Field(description="Optimization mode: speed, balanced, or quality")
        ] = None,
        history: Annotated[
            Optional[list], Field(description="Conversation history as [[role, text], ...] pairs")
        ] = None,
        system_instructions: Annotated[
            Optional[str], Field(description="Custom system instructions")
        ] = None,
        stream: Annotated[bool, Field(description="Whether to stream responses")] = False,
    ) -> dict:
        """
        Search using Perplexica's AI-powered search engine.
    
        This tool provides access to Perplexica's search capabilities with multiple source types
        that can be combined: web search, academic search, and discussions (forums).
        """
        # Fail fast if required models are absent
        if (chat_model or DEFAULT_CHAT_MODEL) is None or (
            embedding_model or DEFAULT_EMBEDDING_MODEL
        ) is None:
            return {
                "error": "Both chatModel and embeddingModel are required. Configure PERPLEXICA_* model env vars or pass them in the request."
            }
    
        return await perplexica_search(
            query=query,
            sources=sources,
            chat_model=chat_model,
            embedding_model=embedding_model,
            optimization_mode=optimization_mode,
            history=history,
            system_instructions=system_instructions,
            stream=stream,
        )
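The fail-fast guard above can be exercised in isolation. The sketch below is a standalone illustration, not code from the module; it restates the effective rule that a request is rejected only when neither the call nor the environment-derived default supplies a model:

```python
def models_present(chat_model=None, embedding_model=None,
                   default_chat=None, default_embedding=None):
    """Mirror the guard in search(): each model may come from the call
    argument or, failing that, from the environment-derived default."""
    return (chat_model or default_chat) is not None and \
           (embedding_model or default_embedding) is not None
```

For example, `models_present(None, None)` is False, the case in which the tool returns its error dict, while environment defaults alone are sufficient to pass the check.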
  • The schema/type definitions for the 'search' tool via FastMCP's decorator. Parameters use Annotated types with Pydantic Field descriptions, defining the input contract (query: str, sources: list, optional models, optimization_mode, history, stream).
    @mcp.tool()
    async def search(
        query: Annotated[str, Field(description="Search query")],
        ...
    ) -> dict:
        ...
  • Tool registration via the @mcp.tool() decorator on line 211, which registers the 'search' function as a tool with the FastMCP server instance named 'Perplexica' (line 42).
    @mcp.tool()
  • The perplexica_search() helper function that implements the actual search API call. It builds the payload with optional model specs, normalizes model configurations via _normalize_model_spec, and sends a POST request to PERPLEXICA_BACKEND_URL using httpx.
    async def perplexica_search(
        query,
        sources,
        chat_model=None,
        embedding_model=None,
        optimization_mode=None,
        history=None,
        system_instructions=None,
        stream=False,
    ) -> dict:
        """
        Search using the Perplexica API
    
        Args:
            query (str): The search query
            sources (list): Search sources - list containing: "web", "academic", "discussions"
            chat_model (dict, optional): Chat model configuration with:
                provider: Provider name (e.g., openai, ollama)
                name: Model name (e.g., gpt-4o-mini)
            embedding_model (dict, optional): Embedding model configuration with:
                provider: Provider name (e.g., openai)
                name: Model name (e.g., text-embedding-3-small)
            optimization_mode (str, optional): Optimization mode (speed, balanced, quality)
            history (list, optional): Conversation history as [["human", "text"], ["assistant", "text"]] pairs
            system_instructions (str, optional): Custom system instructions
            stream (bool, optional): Whether to stream responses
    
        Returns:
            dict: Search results from Perplexica
        """
    
        # Prepare the request payload
        payload = {"query": query, "sources": sources}
    
        # Add optional parameters if provided
        if chat_model:
            payload["chatModel"] = chat_model
        if embedding_model:
            payload["embeddingModel"] = embedding_model
        if optimization_mode:
            payload["optimizationMode"] = optimization_mode
        else:
            payload["optimizationMode"] = "balanced"
        if history is not None:
            payload["history"] = history
        else:
            payload["history"] = []
        if system_instructions:
            payload["systemInstructions"] = system_instructions
        # stream is a bool with a False default, so it is always included
        payload["stream"] = stream
    
        try:
            async with httpx.AsyncClient() as client:
                # Normalize model specifications to providerId/key format
                try:
                    if "chatModel" in payload and payload["chatModel"] is not None:
                        normalized_chat = await _normalize_model_spec(client, payload["chatModel"], is_embedding=False)
                        payload["chatModel"] = normalized_chat
                    if "embeddingModel" in payload and payload["embeddingModel"] is not None:
                        normalized_embed = await _normalize_model_spec(client, payload["embeddingModel"], is_embedding=True)
                        payload["embeddingModel"] = normalized_embed
                except ValueError as ve:
                    return {"error": f"Invalid model configuration: {str(ve)}"}
    
                response = await client.post(
                    PERPLEXICA_BACKEND_URL, json=payload, timeout=PERPLEXICA_READ_TIMEOUT
                )
                response.raise_for_status()
                return response.json()
        except httpx.HTTPError as e:
            return {"error": f"HTTP error occurred: {str(e)}"}
        except Exception as e:
            return {"error": f"An error occurred: {str(e)}"}
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behaviors. It mentions the tool is AI-powered and has focus modes, but does not state whether it is read-only, what output format to expect, or any limitations/rate limits. This is insufficient for a search tool with multiple parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the main purpose. No redundant information. Efficient for a tool with moderate complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description should explain return values, how to use focus modes effectively, and the role of optional parameters. It only skims surface-level details, leaving agents underinformed for advanced use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no meaningful context beyond listing focus mode examples, which is already in the parameter description. It does not enhance understanding of parameters like chat_model, embedding_model, or history.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs AI-powered search using Perplexica, and specifies various focus modes (web, academic, YouTube, Reddit, etc.). It is clear and specific, though no sibling tools exist to differentiate from.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for different search types via focus modes, but offers no explicit guidance on when particular modes are appropriate (no sibling tools exist to contrast with). It lacks exclusions and concrete usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/thetom42/perplexica-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.