
McFlow

.env.example (890 B)
# McFlow MCP Server Configuration

# Server mode: stdio (default) or http
MCP_MODE=stdio
# Port for HTTP mode (ignored in stdio mode)
MCP_PORT=3000

# === n8n Integration ===
# n8n API Configuration (for cloud/self-hosted deployments)
N8N_API_URL=https://your-n8n-instance.com
N8N_API_KEY=your-api-key-here
# Docker configuration (for local deployments)
N8N_DOCKER_CONTAINER=n8n
# Use n8n cloud API instead of local files
USE_CLOUD=false

# === Workflow Settings ===
# Default project for new workflows
DEFAULT_PROJECT=my-project
# Auto-validate workflows on create/update
AUTO_VALIDATE=true

# === AI Agent Settings ===
# Enable enhanced context for AI agents
ENHANCED_CONTEXT=true
# Maximum workflow size (in KB) for analysis
MAX_WORKFLOW_SIZE=500

# === Logging ===
# Log level: debug, info, warn, error
LOG_LEVEL=info
# Log file path (optional, defaults to console only)
LOG_FILE=
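The keys above follow the common KEY=VALUE dotenv convention (blank lines and `#` comments ignored), so any standard .env loader can read them. A minimal sketch in Python of how such a file might be parsed; `load_env` is a hypothetical helper for illustration, not part of McFlow itself:

```python
def load_env(path=".env"):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Usage (assuming a .env copied from .env.example sits alongside the server):
#   config = load_env()
#   mode = config.get("MCP_MODE", "stdio")  # stdio is the documented default
```

In practice you would copy `.env.example` to `.env` and fill in deployment-specific values such as `N8N_API_URL` and `N8N_API_KEY` before starting the server.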

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mckinleymedia/mcflow-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.