
MCP RAG Server

.env.example
# MCP Server Environment Variables

# GitHub Personal Access Token (required for higher rate limits)
# Create one at https://github.com/settings/tokens
# Needs at least 'public_repo' and 'read:packages' scopes
GITHUB_TOKEN=

# Server configuration
PORT=8000
HOST=0.0.0.0

# Vector database settings
INDEX_FILE=data/faiss_index.bin
EMBEDDING_MODEL=all-MiniLM-L6-v2

# LLM API settings for RAG integration
OPENAI_API_KEY=
LLM_API_URL=https://api.openai.com/v1/chat/completions
LLM_MODEL=gpt-4.5

# Alternatively, you can use other API providers by changing the URL and model
# For example, for Azure OpenAI:
# LLM_API_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2023-05-15
# Or for Anthropic Claude:
# LLM_API_URL=https://api.anthropic.com/v1/messages
# LLM_MODEL=claude-3-sonnet-20240229
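As a sketch of how the server might consume these variables (the `load_settings` helper and its key names are hypothetical illustrations, not taken from the server's source), using only the standard library:

```python
import os

def load_settings() -> dict:
    # Read the settings documented in .env.example from the process
    # environment, falling back to the defaults shown there.
    return {
        "github_token": os.environ.get("GITHUB_TOKEN", ""),
        "host": os.environ.get("HOST", "0.0.0.0"),
        "port": int(os.environ.get("PORT", "8000")),
        "index_file": os.environ.get("INDEX_FILE", "data/faiss_index.bin"),
        "embedding_model": os.environ.get("EMBEDDING_MODEL", "all-MiniLM-L6-v2"),
        "llm_api_url": os.environ.get(
            "LLM_API_URL", "https://api.openai.com/v1/chat/completions"
        ),
        "llm_model": os.environ.get("LLM_MODEL", "gpt-4.5"),
    }

settings = load_settings()
```

In practice you would load the `.env` file into the environment first (for example with a tool like python-dotenv or your process manager) before calling a helper like this.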

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ProbonoBonobo/sui-mcp-server'
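The endpoint path encodes the server's owner and repository slug. A small helper for constructing such URLs might look like this (the `server_api_url` function is a hypothetical illustration, not part of an official client):

```python
from urllib.parse import quote

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_api_url(owner: str, repo: str) -> str:
    # Build the MCP directory API URL for a given server,
    # percent-encoding the path segments to be safe.
    return f"{API_BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = server_api_url("ProbonoBonobo", "sui-mcp-server")
# → https://glama.ai/api/mcp/v1/servers/ProbonoBonobo/sui-mcp-server
```

The resulting URL can then be fetched with any HTTP client, as the curl example above shows.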

If you have feedback or need assistance with the MCP directory API, please join our Discord server.