
Dual MCP Server for IoT & Memory

by jordy33
.env.example

# IoT MCP Server configuration
MQTT_BROKER=localhost
MQTT_PORT=1883
HOST=0.0.0.0
PORT=8090
TRANSPORT=sse

# Memory MCP Server configuration
HOST=0.0.0.0
PORT=8050
TRANSPORT=sse

# LLM Configuration for Memory Server
LLM_PROVIDER=openai  # Options: openai, openrouter, ollama
LLM_API_KEY=your_api_key_here
LLM_CHOICE=gpt-4  # Model name (gpt-4, llama3, etc.)
LLM_BASE_URL=http://localhost:11434  # For Ollama

# Embedding Model Configuration
EMBEDDING_MODEL_CHOICE=text-embedding-3-small  # For OpenAI
# EMBEDDING_MODEL_CHOICE=nomic-embed-text  # For Ollama

# Vector Database Configuration
DATABASE_URL=postgresql://user:password@localhost:5432/vector_db
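To illustrate how these settings might be consumed, here is a minimal sketch of reading the Memory MCP server's configuration from the environment, with fallbacks matching the defaults in .env.example. The helper name `memory_server_config` is hypothetical and not part of either server's code; note also that the example file reuses `HOST`/`PORT`/`TRANSPORT` for both servers, so in practice each server would need its own env file or prefixed variable names.

```python
import os

# Hypothetical helper (not from the servers' source): read the Memory
# MCP server settings, defaulting to the values in .env.example above.
def memory_server_config(env=os.environ):
    return {
        "host": env.get("HOST", "0.0.0.0"),
        "port": int(env.get("PORT", "8050")),
        "transport": env.get("TRANSPORT", "sse"),
        "llm_provider": env.get("LLM_PROVIDER", "openai"),
        "llm_model": env.get("LLM_CHOICE", "gpt-4"),
    }

# With an empty environment, every value falls back to its default.
cfg = memory_server_config({})
```

Passing a plain dict for `env` keeps the function easy to test without mutating the real process environment.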

MCP directory API

All information about MCP servers is available through our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jordy33/iot_mcp_server'
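The same request can be made from Python. This is a minimal sketch using only the standard library; it builds the GET request for the endpoint shown in the curl example (the actual fetch is left commented out, and the response format is whatever the directory API returns).

```python
import urllib.request

# Endpoint taken from the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/jordy33/iot_mcp_server"

# Build (but do not send) the GET request.
req = urllib.request.Request(url, method="GET")

# Uncomment to actually fetch the server metadata:
# with urllib.request.urlopen(req) as resp:
#     body = resp.read().decode("utf-8")
```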

If you have feedback or need assistance with the MCP directory API, please join our Discord server.