Perplexica MCP Server

.sample.env (657 B)

# Perplexica Backend Configuration
PERPLEXICA_BACKEND_URL=http://perplexica-app-1:3000/api/search
# For host-local testing (not in Docker), use:
# PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search

# Default Model Configuration (Required)
# The Perplexica API requires both chatModel and embeddingModel to be specified.
# These are used as defaults when no model is explicitly provided in the search request.

# Chat Model Configuration
PERPLEXICA_CHAT_MODEL_PROVIDER=openai
PERPLEXICA_CHAT_MODEL_NAME=gpt-oss-120b

# Embedding Model Configuration
PERPLEXICA_EMBEDDING_MODEL_PROVIDER=openai
PERPLEXICA_EMBEDDING_MODEL_NAME=text-embedding-3-small
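To illustrate how these variables act as defaults, here is a minimal sketch of building a search request payload from the environment. The payload shape (top-level chatModel and embeddingModel objects with provider/name fields) is an assumption based on the requirement stated above, not a verbatim copy of the Perplexica API schema; check the Perplexica documentation for the exact request format.

import os

# Hypothetical helper: fills in the chat and embedding models from the
# environment when the caller does not specify them, mirroring .sample.env.
def build_search_payload(query: str) -> dict:
    env = os.environ
    return {
        "query": query,
        "chatModel": {
            "provider": env.get("PERPLEXICA_CHAT_MODEL_PROVIDER", "openai"),
            "name": env.get("PERPLEXICA_CHAT_MODEL_NAME", "gpt-oss-120b"),
        },
        "embeddingModel": {
            "provider": env.get("PERPLEXICA_EMBEDDING_MODEL_PROVIDER", "openai"),
            "name": env.get("PERPLEXICA_EMBEDDING_MODEL_NAME", "text-embedding-3-small"),
        },
    }

payload = build_search_payload("what is an MCP server?")

The payload would then be POSTed to the URL in PERPLEXICA_BACKEND_URL.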

MCP directory API

We provide all information about listed MCP servers through our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/thetom42/perplexica-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.