# Deployment Guide
This guide covers deploying the Bi-Temporal Knowledge Graph MCP Server to various platforms.
## Table of Contents
1. [Replit Autoscale](#replit-autoscale)
2. [Docker Compose](#docker-compose)
3. [Railway](#railway)
4. [Render](#render)
5. [Fly.io](#flyio)
6. [Manual/VPS](#manualvps)
---
## Replit Autoscale
Replit Autoscale is ideal for this project as it provides automatic scaling and managed infrastructure.
### Steps:
1. **Create New Replit Project**
- Go to replit.com
- Click "Create Repl"
- Choose "Import from GitHub" or upload files
2. **Configure Secrets**
Go to Tools → Secrets and add the following (the server reads these from the environment at startup; see the sketch after these steps):
```
FALKORDB_HOST=your-falkordb-host
FALKORDB_PORT=6379
FALKORDB_PASSWORD=your-password
OPENAI_API_KEY=sk-your-key
POSTGRES_HOST=your-postgres-host
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-password
POSTGRES_DB=automation_db
```
3. **Create .replit File**
```toml
[deployment]
run = ["python", "main.py"]
deploymentTarget = "autoscale"
[[ports]]
localPort = 8080
externalPort = 80
```
4. **Deploy**
- Click "Deploy"
- Choose "Autoscale"
- Your MCP endpoint will be: `https://your-app.replit.app/sse`
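The deployment only stores these secrets; the server is expected to pick them up from the environment at startup. A minimal sketch of that lookup (the variable names match the secrets list above, but the defaults and layout are assumptions, not the project's actual code):
```python
# Hypothetical config loading -- adjust names and defaults to match main.py.
import os

FALKORDB_HOST = os.getenv("FALKORDB_HOST", "localhost")
FALKORDB_PORT = int(os.getenv("FALKORDB_PORT", "6379"))
FALKORDB_PASSWORD = os.getenv("FALKORDB_PASSWORD")      # None if unset
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]           # required; fail fast if missing

POSTGRES_HOST = os.getenv("POSTGRES_HOST", "localhost")
POSTGRES_USER = os.getenv("POSTGRES_USER", "postgres")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD", "")
POSTGRES_DB = os.getenv("POSTGRES_DB", "automation_db")
```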
### Replit Database Options:
- **FalkorDB**: Use an external FalkorDB Cloud instance, or any Redis host with the FalkorDB module loaded (a plain managed Redis without the module will not work)
- **PostgreSQL**: Use Replit's PostgreSQL add-on or external provider
---
## Docker Compose
Perfect for local development or self-hosted deployments.
### Steps:
1. **Clone Repository**
```bash
git clone <your-repo>
cd bitemporal-mcp-server
```
2. **Configure Environment**
```bash
cp .env.example .env
# Edit .env with your settings
```
3. **Start All Services**
```bash
docker-compose up -d
```
4. **Seed Database (Optional)**
```bash
docker-compose exec mcp-server python seed_db.py
```
5. **Access Server**
- MCP endpoint: `http://localhost:8080/sse`
- FalkorDB: `localhost:6379`
- PostgreSQL: `localhost:5432`
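Before pointing an MCP client at the server, it can help to confirm the backing services are reachable. The sketch below assumes the `redis` and `psycopg2` Python packages are installed (both are typical for this stack, but check `requirements.txt`):
```python
# smoke_test.py -- hypothetical connectivity check for the local stack.
import redis
import psycopg2

# FalkorDB speaks the Redis protocol, so a PING proves the graph store is up.
r = redis.Redis(host="localhost", port=6379)
print("FalkorDB/Redis ping:", r.ping())

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="postgres",
    password="your-password",   # use the value from your .env
    dbname="automation_db",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print("PostgreSQL:", cur.fetchone()[0])
conn.close()
```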
### Production Configuration:
```yaml
# docker-compose.prod.yml
services:
  mcp-server:
    restart: always
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
```
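To apply these overrides on top of the base file, pass both files explicitly, e.g. `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d`; values in later files take precedence over earlier ones.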
---
## Railway
Railway provides easy deployment with automatic HTTPS and scaling.
### Steps:
1. **Install Railway CLI**
```bash
npm install -g @railway/cli
railway login
```
2. **Create New Project**
```bash
railway init
```
3. **Add Services**
```bash
# Add Redis (Railway's managed Redis does not load the FalkorDB module;
# for graph storage, deploy the falkordb/falkordb image as a service or use FalkorDB Cloud)
railway add -s redis
# Add PostgreSQL
railway add -s postgresql
```
4. **Set Environment Variables**
```bash
railway variables set OPENAI_API_KEY=sk-your-key
railway variables set FALKORDB_HOST=${{REDIS_HOST}}
railway variables set POSTGRES_HOST=${{POSTGRES_HOST}}
```
5. **Deploy**
```bash
railway up
```
6. **Get URL**
```bash
railway domain
```
### Railway Configuration File:
Create `railway.json`:
```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "python main.py",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 10
  }
}
```
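`ON_FAILURE` with a bounded retry count is a sensible default here: a transient failure (for example, FalkorDB briefly unreachable at startup) gets retried, while a persistent misconfiguration stops after ten attempts instead of crash-looping indefinitely.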
---
## Render
Render offers a free tier, though free web services spin down when idle and incur cold-start latency.
### Steps:
1. **Connect Repository**
- Go to render.com
- Click "New +" → "Web Service"
- Connect your GitHub repo
2. **Configure Service**
- Name: `bitemporal-mcp-server`
- Environment: `Python 3`
- Build Command: `pip install -r requirements.txt`
- Start Command: `python main.py`
3. **Add PostgreSQL**
- Click "New +" → "PostgreSQL"
- Note the connection details
4. **Add Redis**
- Use an external FalkorDB provider (FalkorDB Cloud or self-hosted)
- Render's managed Redis (paid plan) does not load the FalkorDB module, so an external FalkorDB instance is still needed for graph storage
5. **Set Environment Variables**
In the dashboard, add the following, pasting the values from the PostgreSQL and Redis instances created above:
```
FALKORDB_HOST=your-redis-host
OPENAI_API_KEY=sk-your-key
POSTGRES_HOST=your-postgres-host
POSTGRES_USER=your-postgres-user
POSTGRES_PASSWORD=your-postgres-password
```
6. **Deploy**
- Click "Create Web Service"
- Your endpoint: `https://your-app.onrender.com/sse`
### Render Blueprint (render.yaml):
```yaml
services:
  - type: web
    name: bitemporal-mcp-server
    env: python
    buildCommand: pip install -r requirements.txt
    startCommand: python main.py
    envVars:
      - key: PORT
        value: 8080
      - key: OPENAI_API_KEY
        sync: false
      - key: FALKORDB_HOST
        sync: false
      - key: POSTGRES_HOST
        fromDatabase:
          name: postgres
          property: host
databases:
  - name: postgres
    databaseName: automation_db
    user: postgres
```
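The blueprint sets `PORT=8080`, and Render (like Railway and Fly.io) expects the service to listen on `0.0.0.0`. A minimal sketch of that binding logic, assuming `main.py` exposes host/port settings; the `run_server` call is illustrative, not the project's actual entry point:
```python
# Hypothetical startup snippet -- substitute the real server entry point.
import os

host = "0.0.0.0"                        # listen on all interfaces so the platform proxy can reach it
port = int(os.getenv("PORT", "8080"))   # PORT is injected or configured by the platform

# run_server(host=host, port=port)      # placeholder for whatever main.py actually calls
```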
---
## Fly.io
Fly.io offers global deployment with edge computing.
### Steps:
1. **Install Flyctl**
```bash
curl -L https://fly.io/install.sh | sh
flyctl auth login
```
2. **Initialize App**
```bash
flyctl launch
```
3. **Configure fly.toml**
```toml
app = "bitemporal-mcp-server"
primary_region = "iad"
[build]
dockerfile = "Dockerfile"
[env]
PORT = "8080"
[[services]]
internal_port = 8080
protocol = "tcp"
[[services.ports]]
port = 80
handlers = ["http"]
[[services.ports]]
port = 443
handlers = ["tls", "http"]
```
4. **Create PostgreSQL**
Note that `attach` exposes the connection string to your app as a single `DATABASE_URL` secret (see the sketch after these steps).
```bash
flyctl postgres create
flyctl postgres attach <postgres-app-name>
```
5. **Create Redis/FalkorDB**
Fly's managed Redis (Upstash) does not load the FalkorDB module; for graph storage, run FalkorDB as its own Fly app (for example from the `falkordb/falkordb` Docker image) or use an external FalkorDB Cloud instance. If a plain Redis is all you need:
```bash
flyctl redis create
```
6. **Set Secrets**
```bash
flyctl secrets set OPENAI_API_KEY=sk-your-key
flyctl secrets set FALKORDB_HOST=<redis-host>
```
7. **Deploy**
```bash
flyctl deploy
```
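`flyctl postgres attach` (step 4) provides the connection string as a single `DATABASE_URL` secret rather than the separate `POSTGRES_*` variables used elsewhere in this guide. If `main.py` expects the individual variables, a small adapter can split the URL; the target variable names below mirror the earlier sections and are assumptions:
```python
# Hypothetical adapter: derive POSTGRES_* settings from Fly's DATABASE_URL secret.
import os
from urllib.parse import urlparse

url = urlparse(os.environ["DATABASE_URL"])  # e.g. postgres://user:pass@host:5432/dbname

os.environ.setdefault("POSTGRES_HOST", url.hostname or "localhost")
os.environ.setdefault("POSTGRES_PORT", str(url.port or 5432))
os.environ.setdefault("POSTGRES_USER", url.username or "postgres")
os.environ.setdefault("POSTGRES_PASSWORD", url.password or "")
os.environ.setdefault("POSTGRES_DB", url.path.lstrip("/"))
```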
---
## Manual/VPS
For full control over your own server.
### Requirements:
- Ubuntu 22.04 or similar
- Python 3.9+
- Redis/FalkorDB
- PostgreSQL
- Nginx (recommended)
### Steps:
1. **Install Dependencies**
```bash
sudo apt update
sudo apt install -y python3 python3-pip python3-venv
sudo apt install -y redis postgresql nginx
```
2. **Clone and Setup**
```bash
git clone <your-repo>
cd bitemporal-mcp-server
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
3. **Configure Services**
```bash
# Configure PostgreSQL
sudo -u postgres createdb automation_db
# Install the FalkorDB module for Redis
# Download/build instructions: https://github.com/FalkorDB/FalkorDB
# (a quick check that the module loaded is sketched after these steps)
```
4. **Setup Systemd Service**
Create `/etc/systemd/system/mcp-server.service`:
```ini
[Unit]
Description=Bi-Temporal MCP Server
After=network.target redis.service postgresql.service
[Service]
Type=simple
User=www-data
WorkingDirectory=/path/to/bitemporal-mcp-server
Environment="PATH=/path/to/venv/bin"
EnvironmentFile=/path/to/.env
ExecStart=/path/to/venv/bin/python main.py
Restart=always
[Install]
WantedBy=multi-user.target
```
5. **Configure Nginx**
Create `/etc/nginx/sites-available/mcp-server`:
```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # SSE responses must not be buffered and may stay open for a long time
        proxy_buffering off;
        proxy_read_timeout 3600s;
    }
}
```
6. **Start Services**
```bash
sudo systemctl enable mcp-server
sudo systemctl start mcp-server
sudo systemctl enable nginx
sudo systemctl restart nginx
```
7. **Setup SSL (Optional)**
```bash
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d your-domain.com
```
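Once Redis is running, it is worth confirming that the FalkorDB module from step 3 actually loaded; a plain Redis will accept connections but fail on graph queries. A minimal check, assuming the `redis` Python client is installed:
```python
# Hypothetical check that Redis has the FalkorDB ("graph") module loaded.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
modules = [m["name"] for m in r.module_list()]
print("Loaded modules:", modules)

# FalkorDB registers itself as the "graph" module.
if "graph" not in modules:
    raise SystemExit("FalkorDB module not loaded -- check the Redis loadmodule configuration")
```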
---
## Production Checklist
Before deploying to production:
- [ ] Change all default passwords
- [ ] Set up SSL/TLS certificates
- [ ] Configure firewall rules
- [ ] Set up monitoring and logging
- [ ] Configure backup strategy for databases
- [ ] Set resource limits (memory, CPU)
- [ ] Test failover scenarios
- [ ] Document your deployment
- [ ] Set up alerts for errors
- [ ] Configure rate limiting
## Monitoring
Recommended monitoring setup:
```python
# Add to main.py for health checks
from datetime import datetime, timezone  # if not already imported at the top of main.py

@mcp.tool()
async def health_check():
    """Simple health check endpoint."""
    return {
        "status": "healthy",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "database": FalkorDBDriver.is_connected(),
    }
```
## Scaling Considerations
- **Horizontal Scaling**: Use `group_id` to partition data
- **Database Scaling**: Consider FalkorDB cluster for large deployments
- **Session Store**: Can be moved to Redis for distributed deployments
- **Load Balancing**: Use nginx or cloud load balancers
---
Need help? Open an issue on GitHub!