Superglue MCP

Official
by superglue-ai
.env.example (4.34 kB)
# ==============================================================================
# ENDPOINTS AND AUTHENTICATION
# ==============================================================================

# Port for the superglue graphql server
GRAPHQL_PORT=3000

# Port for the superglue rest api (must be different from the graphql port)
API_PORT=3002

# Endpoint for the graphql api (used so the web dashboard knows where to find the server)
GRAPHQL_ENDPOINT=http://localhost:3000

# Endpoint for the rest api (not used at the moment)
API_ENDPOINT=http://localhost:3002

# Port for the web dashboard
WEB_PORT=3001

# Authentication token for API access - needed for the server to start
AUTH_TOKEN=your-secret-token

# Controls whether the workflow scheduler should run alongside superglue.
# ⚠️ Important: Only enable this on a single instance.
# Running multiple schedulers (e.g. in production or when using the same DB)
# can cause conflicts.
START_SCHEDULER_SERVER=false

# ==============================================================================
# DATASTORE
# ==============================================================================

# Datastore type (redis, memory, file or postgres)
DATASTORE_TYPE=postgres

# If file: the path to the datastore directory.
# If not given, or the path does not exist, the datastore will be created in the current directory.
STORAGE_DIR=/data

# If postgres: database connection settings
POSTGRES_HOST=localhost # 'postgres' when self-hosted with docker-compose
POSTGRES_PORT=5432
POSTGRES_USERNAME=superglue
POSTGRES_PASSWORD=your-secure-password
POSTGRES_DB=superglue

# When using an unsecured postgres db that does not support ssl, uncomment this:
# POSTGRES_SSL=false

# ==============================================================================
# LLM PROVIDERS
# ==============================================================================

# AI Gateway by Vercel: API key and provider/model.
# Check out all models at https://vercel.com/ai-gateway/models?type=chat
# Leave these variables empty if you don't want to use the gateway.
# AI_GATEWAY_MODEL=anthropic/claude-sonnet-4.5 # for example openai/gpt-5
# AI_GATEWAY_API_KEY=vck_XXXX

# AI provider - openai, azure, gemini or anthropic
# Best performance/price ratio right now is OpenAI with gpt-4.1
LLM_PROVIDER=anthropic

# If gemini: your Google API key
# You can get one here: https://aistudio.google.com/app/apikey
GEMINI_API_KEY=XXXXXXX

# Gemini model to use. We recommend gemini-2.5-flash
GEMINI_MODEL=gemini-2.5-flash

# If openai: your OpenAI API key
# You can get one here: https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-proj-XXXXXXXX

# OpenAI model to use. Use gpt-4.1 for best results.
OPENAI_MODEL=gpt-4.1

# Optional: set a custom OpenAI API URL (for self-hosted models or providers like fireworks.ai)
# For fireworks, use https://api.fireworks.ai/inference/v1
OPENAI_BASE_URL=https://api.openai.com/v1

# If anthropic: your API key
# You can get one here: https://docs.anthropic.com/en/api/admin-api/apikeys/get-api-key
ANTHROPIC_API_KEY=sk-ant-XXXXXXX

# Anthropic model to use
ANTHROPIC_MODEL=claude-sonnet-4-20250514

# If azure:
# You can find more information here: https://ai-sdk.dev/providers/ai-sdk-providers/azure
AZURE_MODEL=<your_resource_name>
AZURE_RESOURCE_NAME=<your_resource_name> # either provide this or AZURE_BASE_URL, which includes the resource name
AZURE_API_KEY=<your_api_key>
AZURE_BASE_URL=<your_base_url> # for example: https://{resource_name}.openai.azure.com/openai
AZURE_API_VERSION=<your api version> # for example: 2025-01-01-preview
AZURE_USE_DEPLOYMENT_BASED_URLS=false # set to true if you want to use legacy url building

# ==============================================================================
# MISC
# ==============================================================================

# Disable the welcome/onboarding screen for development
NEXT_PUBLIC_DISABLE_WELCOME_SCREEN=false

# Determines whether to set keep-alive on axios https agents.
# Faster if true, but more brittle and prone to connection reuse issues.
# Treated as true if the env var is not set.
AXIOS_KEEP_ALIVE=false

# Encryption settings
# Optional: master key for encrypting stored credentials.
# If not set, credentials will be stored in plaintext.
# Generate a strong key: openssl rand -hex 32
MASTER_ENCRYPTION_KEY=your-32-byte-encryption-key
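The `AUTH_TOKEN` and `MASTER_ENCRYPTION_KEY` placeholders above must be replaced with real secrets before starting the server. A minimal sketch for generating both, using the `openssl rand` command the file's own comments suggest (the byte lengths for the token and the `.env` append step are illustrative choices, not requirements from superglue):

```shell
# Generate a random API auth token (16 random bytes -> 32 hex chars)
AUTH_TOKEN=$(openssl rand -hex 16)

# Generate a 32-byte master encryption key, as the comments above recommend
# (32 random bytes -> 64 hex chars)
MASTER_ENCRYPTION_KEY=$(openssl rand -hex 32)

# Append both to a local .env file; never commit real secrets to version control
printf 'AUTH_TOKEN=%s\nMASTER_ENCRYPTION_KEY=%s\n' \
  "$AUTH_TOKEN" "$MASTER_ENCRYPTION_KEY" >> .env
```

Hex output keeps the values safe to paste into an env file without quoting or escaping.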

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/superglue-ai/superglue'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.