
Crawl4AI+SearXNG MCP Server

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| USE_QDRANT | No | Use Qdrant instead of Supabase (optional) | false |
| SEARXNG_URL | No | Internal SearXNG URL | http://searxng:8080 |
| CRAWL4AI_URL | No | Internal Crawl4AI URL | http://crawl4ai:8000 |
| LLM_PROVIDER | No | The LLM provider to use (e.g., openai, anthropic, groq) | openai |
| USE_SUPABASE | No | Enable Supabase for vector storage | true |
| USE_RERANKING | No | Cross-encoder reranking (improves result relevance) | false |
| OPENAI_API_KEY | No | API key; required only when contextual embeddings are enabled | |
| RERANKING_MODEL | No | The model to use for reranking | cross-encoder/ms-marco-MiniLM-L-12-v2 |
| USE_AGENTIC_RAG | No | Enable advanced RAG features | true |
| NEO4J_BATCH_SIZE | No | Batch size for large repository processing | 50 |
| REPO_MAX_SIZE_MB | No | Maximum repository size (MB) | 500 |
| USE_HYBRID_SEARCH | No | Hybrid search (combines vector + keyword search); requires PostgreSQL full-text search | false |
| NEO4J_BATCH_TIMEOUT | No | Timeout for batch operations | 120 |
| REPO_MAX_FILE_COUNT | No | Maximum number of files | 10000 |
| USE_KNOWLEDGE_GRAPH | No | Enable Neo4j for code analysis | true |
| CRAWL4AI_MAX_CONCURRENT | No | Maximum concurrent crawling sessions | 20 |
| SUPABASE_MAX_CONNECTIONS | No | Maximum connections for Supabase | 20 |
| USE_CONTEXTUAL_EMBEDDINGS | No | Contextual embeddings (improves search accuracy by 20-30%); requires OpenAI API or a compatible LLM | false |
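As a starting point, the variables above can be collected in an `.env`-style file or exported in the shell before launching the server. The sketch below uses the defaults from the table; the API key value is a placeholder, and which flags you enable depends on your deployment.

```shell
# Illustrative configuration for the Crawl4AI+SearXNG MCP server.
# URLs and the reranking model are the documented defaults; flags are examples.
export SEARXNG_URL="http://searxng:8080"
export CRAWL4AI_URL="http://crawl4ai:8000"
export LLM_PROVIDER="openai"
export OPENAI_API_KEY="sk-..."   # placeholder; needed only for contextual embeddings
export USE_HYBRID_SEARCH="true"  # requires PostgreSQL full-text search
export USE_RERANKING="true"
export RERANKING_MODEL="cross-encoder/ms-marco-MiniLM-L-12-v2"
export CRAWL4AI_MAX_CONCURRENT="20"
```

Note that USE_HYBRID_SEARCH and USE_RERANKING default to false, so both must be set explicitly if you want keyword-augmented, reranked results.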

Tools

Functions exposed to the LLM to take actions

NameDescription

No tools

Prompts

Interactive templates invoked by user choice

NameDescription

No prompts

Resources

Contextual data attached and managed by the client

NameDescription

No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AI-enthusiasts/crawl4ai-rag-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.