
Llama 4 Maverick MCP Server

by YobieBen
.env.example • 3.86 kB
# Llama 4 Maverick MCP Configuration
# Author: Yobie Benjamin
# Version: 0.9
# Date: August 1, 2025

# ============================================================================
# OLLAMA CONFIGURATION
# ============================================================================
# URL where the Ollama API is running
LLAMA_API_URL=http://localhost:11434

# Model to use (examples: llama3:latest, codellama:latest, mistral:latest)
LLAMA_MODEL_NAME=llama3:latest

# Optional API key for authenticated Ollama instances
LLAMA_API_KEY=

# ============================================================================
# MCP SERVER CONFIGURATION
# ============================================================================
# Server host and port (for future HTTP mode)
MCP_SERVER_HOST=localhost
MCP_SERVER_PORT=3000

# Logging level: DEBUG, INFO, WARNING, ERROR
MCP_LOG_LEVEL=INFO

# ============================================================================
# SECURITY CONFIGURATION
# ============================================================================
# Enable authentication
MCP_ENABLE_AUTH=false

# API key for authentication (if enabled)
MCP_API_KEY=

# Allowed hosts (comma-separated, * for all)
MCP_ALLOWED_HOSTS=*

# ============================================================================
# FEATURE FLAGS
# ============================================================================
# Enable streaming responses
ENABLE_STREAMING=true

# Enable function calling/tool use
ENABLE_FUNCTION_CALLING=true

# Enable vision capabilities (requires a vision-capable model)
ENABLE_VISION=false

# Enable code execution (SECURITY RISK - use with caution)
ENABLE_CODE_EXECUTION=false

# Enable web search functionality
ENABLE_WEB_SEARCH=true

# ============================================================================
# MODEL GENERATION PARAMETERS
# ============================================================================
# Temperature: controls randomness (0.0 = deterministic, 2.0 = very random)
TEMPERATURE=0.7

# Top-p: nucleus sampling threshold (0.0-1.0)
TOP_P=0.9

# Top-k: number of top tokens to consider
TOP_K=40

# Repeat penalty: discourages repetition (1.0 = no penalty)
REPEAT_PENALTY=1.1

# Random seed for reproducible outputs (optional)
# SEED=42

# ============================================================================
# PERFORMANCE CONFIGURATION
# ============================================================================
# Maximum context window size in tokens
MAX_CONTEXT_LENGTH=128000

# Maximum concurrent requests
MAX_CONCURRENT_REQUESTS=10

# Request timeout in milliseconds
REQUEST_TIMEOUT_MS=30000

# Cache time-to-live in seconds
CACHE_TTL=3600

# Maximum cache size (number of items)
CACHE_MAX_SIZE=1000

# ============================================================================
# FILE SYSTEM CONFIGURATION
# ============================================================================
# Base path for file operations (absolute or relative)
FILE_SYSTEM_BASE_PATH=.

# Allow file write operations
ALLOW_FILE_WRITES=true

# ============================================================================
# DATABASE CONFIGURATION (Optional)
# ============================================================================
# Database connection URL
DATABASE_URL=

# Database connection pool size
DATABASE_POOL_SIZE=10

# ============================================================================
# DEBUG CONFIGURATION
# ============================================================================
# Enable debug mode
DEBUG=false

# Enable verbose logging
VERBOSE_LOGGING=false

# ============================================================================
# ENVIRONMENT
# ============================================================================
# Environment: development, production, test
ENV=development
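As a minimal sketch of how a server might consume these settings, the flags and numeric parameters above can be read from the environment with the Python standard library alone. The variable names come from the file; the helper functions and the `generation_params` dictionary are hypothetical, not part of the actual server code.

```python
import os


def env_bool(name: str, default: bool = False) -> bool:
    """Parse a true/false flag such as ENABLE_STREAMING."""
    return os.getenv(name, str(default)).strip().lower() in ("1", "true", "yes")


def env_int(name: str, default: int) -> int:
    """Parse an integer setting such as MCP_SERVER_PORT, falling back on default."""
    try:
        return int(os.getenv(name, ""))
    except ValueError:
        return default


def env_float(name: str, default: float) -> float:
    """Parse a float setting such as TEMPERATURE, falling back on default."""
    try:
        return float(os.getenv(name, ""))
    except ValueError:
        return default


# Assemble the generation parameters listed in the config, using the
# same defaults the .env.example ships with.
generation_params = {
    "temperature": env_float("TEMPERATURE", 0.7),
    "top_p": env_float("TOP_P", 0.9),
    "top_k": env_int("TOP_K", 40),
    "repeat_penalty": env_float("REPEAT_PENALTY", 1.1),
}
streaming_enabled = env_bool("ENABLE_STREAMING", True)
```

Note that unset or malformed values fall back to the documented defaults, so the server starts even from a partially filled `.env`.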

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/YobieBen/llama4-maverick-mcp-python'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.