
Smart-AI-Bridge

.env.example • 3.2 kB
# Smart AI Bridge - Environment Configuration Template
# Copy this file to .env and fill in your actual values

# ===================================
# REQUIRED: Cloud AI API Keys
# ===================================

# NVIDIA Cloud API Key (Recommended - Get from: https://build.nvidia.com/)
# Provides access to DeepSeek V3.1 and Qwen 3 Coder 480B
NVIDIA_API_KEY=nvapi-YOUR-KEY-HERE

# Google Gemini API Key (Optional - Get from: https://makersuite.google.com/app/apikey)
# Provides 2M token context window
GEMINI_API_KEY=YOUR-GEMINI-KEY-HERE

# ===================================
# SERVER CONFIGURATION
# ===================================

# Environment mode (production, development)
NODE_ENV=production

# Enable MCP server mode
MCP_SERVER_MODE=true

# Server port (default: 3000)
PORT=3000

# ===================================
# OPTIONAL: Local Model Configuration
# ===================================

# Local model endpoint (if running a local LLM)
# DEEPSEEK_ENDPOINT=http://localhost:8001/v1

# Local model API key (optional security)
# LOCAL_MODEL_API_KEY=your-local-key

# ===================================
# OPTIONAL: Additional Cloud Providers
# ===================================

# DeepSeek Official API (if using direct DeepSeek cloud)
# DEEPSEEK_API_KEY=your-deepseek-key
# DEEPSEEK_ENDPOINT=https://api.deepseek.com/v1/chat/completions

# Qwen Cloud API (Alibaba DashScope)
# QWEN_CLOUD_API_KEY=your-qwen-key
# QWEN_CLOUD_ENDPOINT=https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation

# OpenAI API (fallback provider)
# OPENAI_API_KEY=your-openai-key

# ===================================
# ADVANCED CONFIGURATION
# ===================================

# Enable validation system
VALIDATION_ENABLED=true

# Enable fuzzy matching for file operations
FUZZY_MATCHING_ENABLED=true

# Fuzzy matching threshold (0.1-1.0, higher = more strict)
FUZZY_THRESHOLD=0.8

# Circuit breaker configuration
CIRCUIT_BREAKER_THRESHOLD=5
CIRCUIT_BREAKER_TIMEOUT=60000

# Rate limiting (requests per minute)
RATE_LIMIT_RPM=100

# Logging level (debug, info, warn, error)
LOG_LEVEL=info

# ===================================
# AUTHENTICATION & SECURITY
# ===================================

# MCP Authentication Token (REQUIRED for production)
# Generate with: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
MCP_AUTH_TOKEN=your-secure-token-here

# Tool-level permissions (comma-separated tool names, or * for all)
# Example: MCP_ALLOWED_TOOLS=read,review,write,edit
# MCP_ALLOWED_TOOLS=*

# Rate limiting configuration
# RATE_LIMIT_PER_MINUTE=60
# RATE_LIMIT_PER_HOUR=500
# RATE_LIMIT_PER_DAY=5000

# Payload size limits (in bytes)
# MAX_REQUEST_SIZE=10485760   # 10MB
# MAX_FILE_SIZE=5242880       # 5MB
# MAX_BATCH_SIZE=50           # Max files per batch operation

# ===================================
# IMPORTANT SECURITY NOTES
# ===================================
# Never commit this file with real values to version control!
# Add .env to .gitignore
# Set restrictive permissions: chmod 600 .env
# Always enable MCP_AUTH_TOKEN in production environments
# Development mode (no token) allows all operations - use only locally

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Platano78/Smart-AI-Bridge'
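The same endpoint can be queried from Node.js (18+, which ships a global `fetch`). This is a minimal sketch: the URL is the one from the curl example above, but the shape of the JSON response is not documented here, so it is simply returned as-is.

```javascript
// Build the Glama MCP directory API URL for a given server.
const serverUrl = (owner, repo) =>
  `https://glama.ai/api/mcp/v1/servers/${owner}/${repo}`;

// Fetch and parse the server record (requires network access).
async function fetchServerInfo() {
  const res = await fetch(serverUrl('Platano78', 'Smart-AI-Bridge'));
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  return res.json();
}

// Example usage (uncomment to run):
// fetchServerInfo().then((info) => console.log(info));
```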

If you have feedback or need assistance with the MCP directory API, please join our Discord server.