# Deploying MCP Timezone Server to Google Cloud Run
This guide walks you through deploying your MCP (Model Context Protocol) server to Google Cloud Run, making it accessible over HTTPS.
## 🔧 Environment Variables Setup
Before running any deployment commands, set up your environment variables:
```bash
# 1. Copy the example environment file
cp example.env .env
# 2. Edit .env and fill in your values:
# GCLOUD_PROJECT_ID=your-gcp-project-id
# GCLOUD_REGION=your-preferred-region (e.g., europe-west2, us-central1)
# 3. Load environment variables into your current shell
source scripts/load-env.sh
```
**Required Environment Variables:**
- `GCLOUD_PROJECT_ID` - Your Google Cloud Project ID
- `GCLOUD_REGION` - Your deployment region (e.g., `europe-west2`, `us-central1`, `us-east1`)
**Note:** You must run `source scripts/load-env.sh` in each new terminal session before executing deployment commands. All commands in this guide use these environment variables.
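For reference, a loader like `scripts/load-env.sh` typically just exports every key in `.env` into the current shell. The sketch below is illustrative; the script shipped in this repo may differ.
```bash
#!/usr/bin/env bash
# Illustrative sketch of an env loader -- the real scripts/load-env.sh may differ.
set -a          # export every variable assigned while this is active
source .env     # reads GCLOUD_PROJECT_ID, GCLOUD_REGION, etc.
set +a
echo "Loaded project: ${GCLOUD_PROJECT_ID:-unset}, region: ${GCLOUD_REGION:-unset}"
```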
---
## ⚡ Quick Start (TL;DR)
If you already have a Google Cloud project configured:
```bash
# 0. Load environment variables (required!)
source scripts/load-env.sh
# 1. Set your project
gcloud config set project $GCLOUD_PROJECT_ID
# 2. Enable APIs and create Artifact Registry
gcloud services enable cloudbuild.googleapis.com run.googleapis.com artifactregistry.googleapis.com
gcloud artifacts repositories create mcp-docker-repo --repository-format=docker --location=$GCLOUD_REGION
# 3. Grant permissions (see "Service Account Permissions" section below for details)
# 4. Deploy
gcloud builds submit --config cloudbuild.yaml
# 5. Get your service URL
gcloud run services describe mcp-timezone-server --region $GCLOUD_REGION --format='value(status.url)'
```
**Important**: See the "Service Account Permissions" section below for required IAM permissions, or you'll encounter permission errors.
---
## Prerequisites
**IMPORTANT**: Before deploying, you must set up secrets in Google Cloud Secret Manager. See the detailed [Secrets Setup Guide](./SECRETS_SETUP.md) for complete instructions.
1. **Google Cloud Account** with billing enabled
2. **gcloud CLI** installed and configured
```bash
# Install gcloud CLI (if not already installed)
# Visit: https://cloud.google.com/sdk/docs/install
# Initialize and authenticate
gcloud init
gcloud auth login
```
3. **Project Setup**
```bash
# Load environment variables first!
source scripts/load-env.sh
# Create a new project (or use existing)
gcloud projects create $GCLOUD_PROJECT_ID --name="MCP Timezone Server"
# Set the project
gcloud config set project $GCLOUD_PROJECT_ID
# Enable required APIs (IMPORTANT: Use Artifact Registry, not deprecated Container Registry)
gcloud services enable cloudbuild.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable artifactregistry.googleapis.com
gcloud services enable storage-api.googleapis.com
# Create Artifact Registry repository
gcloud artifacts repositories create mcp-docker-repo \
--repository-format=docker \
--location=$GCLOUD_REGION \
--description="Docker repository for MCP server"
```
4. **Service Account Permissions**
Cloud Build needs proper permissions to deploy. Get your project number first:
```bash
# Get project number
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
echo "Project Number: $PROJECT_NUMBER"
```
Then grant the necessary permissions:
```bash
# Grant Artifact Registry write permission
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/artifactregistry.writer"
# Grant Cloud Run admin permission
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/run.admin"
# Grant Storage admin permission
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/storage.admin"
# Grant logging permission (optional, for better logs)
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/logging.logWriter"
# Allow service account to act as itself
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser" \
--project=$GCLOUD_PROJECT_ID
# Grant permissions to Artifact Registry repository
gcloud artifacts repositories add-iam-policy-binding mcp-docker-repo \
--location=$GCLOUD_REGION \
--member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
--role="roles/artifactregistry.writer"
```
**Note**: These permissions are required because Cloud Build uses service accounts to interact with Google Cloud services. Without these, your deployment will fail with permission errors.
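Before your first build, you can sanity-check that the bindings took effect by listing the roles currently granted to the compute service account (standard `gcloud` IAM inspection; adjust the filter if you use a custom build service account):
```bash
# List roles granted to the default compute service account at the project level
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
gcloud projects get-iam-policy $GCLOUD_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --format="table(bindings.role)"
```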
## Deployment Methods
### Method 1: Automated Deployment with Cloud Build (Recommended)
This method uses the provided `cloudbuild.yaml` file to automate the entire build and deployment process.
```bash
# Deploy from your project directory
gcloud builds submit --config cloudbuild.yaml
# The deployment will:
# 1. Build the Docker image in the cloud
# 2. Push to Google Artifact Registry (region configured in cloudbuild.yaml)
# 3. Deploy to Cloud Run (same region)
# 4. Return the HTTPS service URL
```
**First-time deployment?** Make sure you've:
- Created the Artifact Registry repository (see Prerequisites)
- Granted service account permissions (see Prerequisites)
Otherwise you'll encounter permission errors!
**Customize deployment settings** by editing `cloudbuild.yaml`:
- `--region`: Change deployment region (e.g., `us-east1`, `europe-west1`)
- `--memory`: Adjust memory allocation (e.g., `256Mi`, `1Gi`)
- `--cpu`: Set CPU allocation (e.g., `1`, `2`, `4`)
- `--min-instances`: Minimum number of instances (0 for scale-to-zero)
- `--max-instances`: Maximum number of instances
- `--allow-unauthenticated`: Set to allow public access (remove for private)
### Method 2: Manual Deployment
Step-by-step manual deployment for more control:
#### Step 1: Build the Docker Image
```bash
# Build the image locally
docker build -t ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest .
# Test locally (optional but recommended!)
docker run -p 8080:8080 -e PORT=8080 ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest
# In another terminal, verify it works
curl http://localhost:8080/health
```
#### Step 2: Push to Artifact Registry
```bash
# Configure Docker to authenticate with Artifact Registry
gcloud auth configure-docker ${GCLOUD_REGION}-docker.pkg.dev
# Push the image
docker push ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest
```
#### Step 3: Deploy to Cloud Run
```bash
gcloud run deploy mcp-timezone-server \
--image ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest \
--region $GCLOUD_REGION \
--platform managed \
--allow-unauthenticated \
--port 8080 \
--memory 512Mi \
--cpu 1 \
--min-instances 0 \
--max-instances 10 \
--timeout 300
```
## After Deployment
### Get Your Service URL
```bash
# Get the service URL
gcloud run services describe mcp-timezone-server \
--region $GCLOUD_REGION \
--format 'value(status.url)'
# Example output: https://mcp-timezone-server-xxxxx-nw.a.run.app
```
### Test Your Deployment
```bash
# Save the URL to a variable
SERVICE_URL=$(gcloud run services describe mcp-timezone-server --region $GCLOUD_REGION --format 'value(status.url)')
# Health check
curl $SERVICE_URL/health
# Get service info (shows endpoints and tools)
curl $SERVICE_URL/ | jq .
# Test MCP endpoint URL (copy this for Claude Desktop)
echo "$SERVICE_URL/mcp"
```
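If you want to poke the MCP endpoint directly before wiring up Claude Desktop, a raw JSON-RPC `initialize` call is one option. This is a hedged example: the exact headers and protocol version depend on the server's MCP transport implementation, and with OAuth enabled you may get a 401 instead of a handshake.
```bash
# Illustrative MCP initialize request (JSON-RPC 2.0 over HTTP); may require auth
curl -s -X POST "$SERVICE_URL/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl-test","version":"0.0.0"}}}'
```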
### Configure Claude Desktop
1. Open Claude Desktop
2. Go to **Settings** → **Connectors** → **Add Custom Connector**
3. Enter your Cloud Run URL with `/mcp` path:
```
https://YOUR_SERVICE_URL/mcp
```
4. Save and test the connection
## Environment Variables and Secrets
The application uses a **hybrid configuration strategy**:
### Non-Sensitive Environment Variables
Set directly in `cloudbuild.yaml`:
- `MCP_TRANSPORT=http`
- `JWT_EXPIRES_IN=3600`
- `OAUTH2_AUTHORIZATION_URL`, `OAUTH2_TOKEN_URL`, `OAUTH2_USER_INFO_URL`
- `OAUTH2_SCOPE=openid email profile`
- `OAUTH2_CALLBACK_URL` (production URL)
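These values are applied to the Cloud Run service at deploy time by `cloudbuild.yaml`. If you need to tweak one without a full rebuild, the manual equivalent looks roughly like this (variable names taken from the list above):
```bash
# Update non-sensitive env vars in place; Cloud Run rolls out a new revision
gcloud run services update mcp-timezone-server \
  --region $GCLOUD_REGION \
  --update-env-vars JWT_EXPIRES_IN=3600,MCP_TRANSPORT=http
```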
### Sensitive Values (Google Secret Manager)
**Required secrets** - must be created before deployment:
- `jwt-secret` - JWT signing key
- `oauth2-client-id` - OAuth2 client ID
- `oauth2-client-secret` - OAuth2 client secret
- `allowed-emails` - Comma-separated list of allowed emails
**Setup secrets before first deployment:**
```bash
# Enable Secret Manager API (first time only)
gcloud services enable secretmanager.googleapis.com
# Create secrets (see docs/SECRETS_SETUP.md for complete instructions)
echo -n "$(openssl rand -hex 32)" | gcloud secrets create jwt-secret --data-file=-
echo -n "YOUR_CLIENT_ID" | gcloud secrets create oauth2-client-id --data-file=-
echo -n "YOUR_CLIENT_SECRET" | gcloud secrets create oauth2-client-secret --data-file=-
echo -n "user@example.com" | gcloud secrets create allowed-emails --data-file=-
# Grant Cloud Run access to secrets
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
for secret in jwt-secret oauth2-client-id oauth2-client-secret allowed-emails; do
gcloud secrets add-iam-policy-binding $secret \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
done
```
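The secrets are exposed to the running service as environment variables at deploy time (presumably wired up in `cloudbuild.yaml`). If you ever need to attach them manually, the mapping looks roughly like this; the environment variable names on the left are assumptions about what the application reads:
```bash
# Map Secret Manager secrets to env vars (left-hand names are illustrative)
gcloud run services update mcp-timezone-server \
  --region $GCLOUD_REGION \
  --set-secrets=JWT_SECRET=jwt-secret:latest,OAUTH2_CLIENT_ID=oauth2-client-id:latest,OAUTH2_CLIENT_SECRET=oauth2-client-secret:latest,ALLOWED_EMAILS=allowed-emails:latest
```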
**Update secrets:**
```bash
echo -n "NEW_VALUE" | gcloud secrets versions add SECRET_NAME --data-file=-
# Then redeploy: pnpm deploy
```
**For complete documentation**, see [SECRETS_SETUP.md](./SECRETS_SETUP.md)
## Monitoring and Logs
### View Logs
#### Real-time Log Tailing (Recommended)
Stream logs as they happen using the beta logging tail command:
```bash
# Tail all Cloud Run logs in real-time
gcloud beta logging tail "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server" --project=$GCLOUD_PROJECT_ID
# Tail only ERROR severity and above
gcloud beta logging tail "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server AND severity>=ERROR" --project=$GCLOUD_PROJECT_ID
# Tail with custom format (simpler output)
gcloud beta logging tail "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server" --format="value(timestamp,severity,textPayload)" --project=$GCLOUD_PROJECT_ID
```
**Note**: Requires beta components. Install if needed:
```bash
gcloud components install beta
```
#### View Historical Logs (Non-streaming)
For viewing past logs without real-time streaming:
```bash
# View recent logs (default: last 24 hours)
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server" \
--limit=50 \
--format="table(timestamp,severity,textPayload)" \
--project=$GCLOUD_PROJECT_ID
# View logs from the last hour
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server" \
--limit=100 \
--freshness=1h \
--project=$GCLOUD_PROJECT_ID
# Filter for errors only
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server AND severity>=ERROR" \
--limit=100 \
--project=$GCLOUD_PROJECT_ID
```
### View Metrics
```bash
# View metrics in Cloud Console
# Visit: https://console.cloud.google.com/run/detail/${GCLOUD_REGION}/mcp-timezone-server
# Or open it directly
echo "https://console.cloud.google.com/run/detail/${GCLOUD_REGION}/mcp-timezone-server?project=$GCLOUD_PROJECT_ID"
```
Or visit: [Cloud Run Console](https://console.cloud.google.com/run)
## Security Considerations
### Option 1: Public Access (Current Setup)
The service is publicly accessible. Anyone can use the MCP tools.
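You can confirm whether the service is public by checking its IAM policy for an `allUsers` invoker binding:
```bash
# A public service has an allUsers -> roles/run.invoker binding
gcloud run services get-iam-policy mcp-timezone-server --region $GCLOUD_REGION
# To grant (or restore) public access explicitly:
gcloud run services add-iam-policy-binding mcp-timezone-server \
  --region $GCLOUD_REGION \
  --member="allUsers" \
  --role="roles/run.invoker"
```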
### Option 2: Authenticated Access
Remove `--allow-unauthenticated` flag and require authentication:
```bash
gcloud run deploy mcp-timezone-server \
--image ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest \
--region $GCLOUD_REGION \
--platform managed \
--no-allow-unauthenticated \
--port 8080
```
Then configure authentication in Claude Desktop with a service account key.
**Note**: Authenticated Cloud Run services require:
- Service account with proper IAM roles
- Authentication token in requests
- More complex client configuration
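For a quick manual test against a private service, you can call it with your own identity token; your account needs `roles/run.invoker` on the service:
```bash
# Grant your user account invoker access (once), then call with an ID token
gcloud run services add-iam-policy-binding mcp-timezone-server \
  --region $GCLOUD_REGION \
  --member="user:$(gcloud config get-value account)" \
  --role="roles/run.invoker"
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" "$SERVICE_URL/health"
```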
### Option 3: VPC or Identity-Aware Proxy
For enterprise deployments, consider:
- [Cloud Run VPC connector](https://cloud.google.com/run/docs/configuring/connecting-vpc)
- [Identity-Aware Proxy (IAP)](https://cloud.google.com/iap/docs/enabling-cloud-run)
## Cost Optimization
Cloud Run pricing is based on:
- **CPU/Memory usage** (per 100ms)
- **Requests** (per million)
- **Networking** (egress)
### Tips to reduce costs:
1. **Scale to Zero**: Keep `--min-instances 0` for development
2. **Right-size resources**: Start with `512Mi` memory and `1` CPU
3. **Set a request timeout**: `--timeout 300` caps each request at 5 minutes
4. **Use regional deployment**: Avoid global load balancing unless needed
**Free Tier**: Cloud Run includes 2 million requests/month free.
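Applied to this service, the tips above translate into something like the following; the values are illustrative starting points rather than recommendations for your workload:
```bash
# Scale to zero with modest resources, a bounded instance count, and a 5-minute timeout
gcloud run services update mcp-timezone-server \
  --region $GCLOUD_REGION \
  --min-instances 0 \
  --max-instances 5 \
  --memory 512Mi \
  --cpu 1 \
  --timeout 300
```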
## Updating Your Deployment
### Quick Update (using Cloud Build)
```bash
# Make your code changes, then:
gcloud builds submit --config cloudbuild.yaml
# That's it! Cloud Build will:
# 1. Build new image with unique tag (using BUILD_ID)
# 2. Push to Artifact Registry
# 3. Deploy to Cloud Run (zero-downtime)
# 4. Old version continues serving until new version is ready
```
### Manual Update
```bash
# Rebuild image
docker build -t ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest .
# Test locally first (recommended!)
docker run -p 8080:8080 -e PORT=8080 ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest
# Push to Artifact Registry
docker push ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest
# Deploy new version (Cloud Run does rolling update automatically)
gcloud run deploy mcp-timezone-server \
--image ${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest \
--region $GCLOUD_REGION
```
## Troubleshooting
### Issue: Container Registry Deprecated Error
**Error**: `Container Registry is deprecated and shutting down`
**Solution**: Use Artifact Registry instead of Container Registry (gcr.io). The `cloudbuild.yaml` in this project is already configured to use Artifact Registry.
```bash
# Create Artifact Registry repository if not exists
gcloud artifacts repositories create mcp-docker-repo \
--repository-format=docker \
--location=$GCLOUD_REGION \
--description="Docker repository for MCP server"
```
### Issue: Permission Denied - uploadArtifacts
**Error**: `Permission "artifactregistry.repositories.uploadArtifacts" denied`
**Solution**: Grant the Cloud Build service account permission to write to Artifact Registry:
```bash
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
# Grant to compute service account
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/artifactregistry.writer"
# Also grant to Cloud Build service account
gcloud artifacts repositories add-iam-policy-binding mcp-docker-repo \
--location=$GCLOUD_REGION \
--member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
--role="roles/artifactregistry.writer"
```
### Issue: Permission Denied - run.services.get
**Error**: `Permission 'run.services.get' denied on resource`
**Solution**: Grant Cloud Run admin permissions to the service account:
```bash
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/run.admin"
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser" \
--project=$GCLOUD_PROJECT_ID
```
### Issue: Container fails to start - MODULE_NOT_FOUND
**Error**: `Cannot find module '/app/dist/src/mcp-http-main.js'`
**Solution**: The NestJS build output structure may vary. Check the actual build output:
```bash
# Build locally to verify
pnpm run build
ls -la dist/
# If the files are in dist/ (not dist/src/), update the CMD in your Dockerfile:
#   CMD ["node", "dist/mcp-http-main.js"]   # not dist/src/mcp-http-main.js
```
This has already been fixed in the provided `Dockerfile`.
### Issue: Container fails health check / PORT binding
**Error**: `The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable`
**Solution**: Ensure your application reads the `PORT` environment variable:
```typescript
// CORRECT: Read PORT from environment
const PORT = parseInt(process.env.PORT || '8080', 10);
// WRONG: Hardcoded port
const PORT = 3001;
```
This has already been fixed in `src/mcp-http-main.ts`.
### Issue: URLs show http:// instead of https://
**Error**: Service returns `http://` URLs in responses instead of `https://`
**Solution**: Check the `x-forwarded-proto` header, as Cloud Run terminates SSL:
```typescript
// Check x-forwarded-proto for the actual external protocol
const protocol = req.get('x-forwarded-proto') || req.protocol;
const baseUrl = protocol + '://' + req.get('host');
```
This has already been fixed in `src/mcp-http-main.ts`.
### Issue: Cloud Build storage permission errors
**Error**: `does not have storage.objects.get access to the Google Cloud Storage object`
**Solution**: Grant storage access to the Cloud Build service account:
```bash
PROJECT_NUMBER=$(gcloud projects describe $GCLOUD_PROJECT_ID --format="value(projectNumber)")
# Grant storage admin to compute service account
gcloud projects add-iam-policy-binding $GCLOUD_PROJECT_ID \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/storage.admin"
# Grant access to the Cloud Build bucket
gsutil iam ch serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com:objectAdmin \
gs://${GCLOUD_PROJECT_ID}_cloudbuild
```
### Issue: Service returns 503
```bash
# Check if service is ready
gcloud run services describe mcp-timezone-server --region $GCLOUD_REGION
# Check logs for errors
gcloud run services logs read mcp-timezone-server --region $GCLOUD_REGION --limit 100
# Increase memory or CPU if needed
gcloud run services update mcp-timezone-server \
--region $GCLOUD_REGION \
--memory 1Gi \
--cpu 2
```
### Issue: Slow cold starts
```bash
# Set minimum instances to 1 (keeps at least one instance warm)
gcloud run services update mcp-timezone-server \
--region $GCLOUD_REGION \
--min-instances 1
# Note: This will increase costs as you'll always have 1 instance running
```
### Issue: View detailed build logs
```bash
# View logs for a specific build
gcloud builds log BUILD_ID --stream
# Or view in Cloud Logging
gcloud logging read "resource.type=build AND resource.labels.build_id=BUILD_ID" \
--limit=100 \
--format="value(textPayload)"
# View Cloud Run revision logs
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=mcp-timezone-server" \
--limit=50 \
--format="value(textPayload,jsonPayload.message)"
```
## Cleanup
### Delete the service
```bash
# Delete Cloud Run service
gcloud run services delete mcp-timezone-server --region $GCLOUD_REGION
# Delete container images from Artifact Registry (optional)
gcloud artifacts docker images delete \
${GCLOUD_REGION}-docker.pkg.dev/${GCLOUD_PROJECT_ID}/mcp-docker-repo/mcp-timezone-server:latest
# Or delete entire repository
gcloud artifacts repositories delete mcp-docker-repo --location=$GCLOUD_REGION
# Clean up Cloud Build artifacts (optional)
gsutil -m rm -r gs://${GCLOUD_PROJECT_ID}_cloudbuild/
```
## Key Lessons Learned
This deployment guide incorporates real-world fixes and best practices learned during the initial deployment:
### 1. **Artifact Registry vs Container Registry**
- **Always use Artifact Registry** (`europe-west2-docker.pkg.dev`)
- Container Registry (`gcr.io`) is deprecated and will shut down
- The `cloudbuild.yaml` includes an authentication step for Artifact Registry
### 2. **Service Account Permissions are Critical**
- Cloud Build uses the compute service account by default
- You must explicitly grant permissions for: Artifact Registry, Cloud Run, Storage, and Logging
- Missing permissions cause cryptic error messages - see Troubleshooting section
### 3. **NestJS Build Output Structure**
- NestJS build output is in `dist/` not `dist/src/` when built in Docker
- Dockerfile CMD must use `dist/mcp-http-main.js` (not `dist/src/`)
- Always test your Docker build locally first: `docker build -t test . && docker run -p 8080:8080 -e PORT=8080 test`
### 4. **Cloud Run PORT Environment Variable**
- Cloud Run injects the `PORT` environment variable (usually 8080)
- Your app **must** read `process.env.PORT` - hardcoded ports won't work
- The server won't start if it doesn't listen on the correct port
### 5. **HTTPS URL Detection**
- Cloud Run terminates SSL, so `req.protocol` returns `http`
- Check `req.get('x-forwarded-proto')` to get the external protocol (`https`)
- This ensures API responses show correct `https://` URLs
### 6. **dockerignore Configuration**
- **Do NOT exclude** `pnpm-lock.yaml` or `package.json` - they're needed for builds
- **Do exclude**: `node_modules/`, `dist/`, `.git/`, test files, and documentation
- Missing lock files cause dependency version mismatches
### 7. **Build vs Runtime Dependencies**
- Use multi-stage Docker builds to keep production image small
- Install all dependencies in builder stage
- Install only production dependencies (`--prod`) in runtime stage
- Ensure all runtime dependencies (like NestJS packages, passport, etc.) are in `dependencies`, not `devDependencies`
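A quick way to catch a runtime package that accidentally landed in `devDependencies` is to list only the production dependency tree before building the image:
```bash
# Only production dependencies are shown; anything the server imports at runtime
# (NestJS packages, passport, etc.) should appear here
pnpm ls --prod --depth 0
```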
### 8. **Debugging Failed Deployments**
```bash
# View Cloud Build logs
gcloud logging read "resource.type=build AND resource.labels.build_id=BUILD_ID" --limit=100
# View Cloud Run revision logs (when container fails to start)
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=SERVICE_NAME" --limit=50
# Test container locally before deploying
docker run -p 8080:8080 -e PORT=8080 IMAGE_NAME
```
### 9. **Region Consistency**
- Keep everything in the same region (Artifact Registry, Cloud Run)
- We use `europe-west2` (London) throughout this setup
- Cross-region pulls work but are slower and more expensive
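A quick consistency check is to confirm that both the repository and the service resolve in the region you configured:
```bash
# Both commands should succeed for the region set in $GCLOUD_REGION
gcloud artifacts repositories describe mcp-docker-repo --location=$GCLOUD_REGION
gcloud run services describe mcp-timezone-server --region $GCLOUD_REGION --format="value(status.url)"
```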
### 10. **Zero Downtime Deployments**
- Cloud Run automatically does rolling updates
- Old version keeps serving traffic until new version is ready
- Set `--min-instances 1` for production to eliminate cold starts (costs more)
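If a new revision misbehaves, you can shift traffic back to a known-good revision without rebuilding. The revision name below is a placeholder; use one from the list:
```bash
# List revisions, then pin 100% of traffic to a known-good one
gcloud run revisions list --service mcp-timezone-server --region $GCLOUD_REGION
gcloud run services update-traffic mcp-timezone-server \
  --region $GCLOUD_REGION \
  --to-revisions=mcp-timezone-server-00001-abc=100
# Return to "serve the latest revision" once the fix is deployed
gcloud run services update-traffic mcp-timezone-server \
  --region $GCLOUD_REGION \
  --to-latest
```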
## Additional Resources
- [Cloud Run Documentation](https://cloud.google.com/run/docs)
- [Cloud Build Documentation](https://cloud.google.com/build/docs)
- [Artifact Registry Documentation](https://cloud.google.com/artifact-registry/docs)
- [MCP Protocol Specification](https://modelcontextprotocol.io/)
- [Pricing Calculator](https://cloud.google.com/products/calculator)
- [NestJS Docker Best Practices](https://docs.nestjs.com/recipes/docker)
## Support
For issues specific to:
- **Google Cloud Run**: [Cloud Run Support](https://cloud.google.com/run/docs/support)
- **MCP Protocol**: [MCP GitHub](https://github.com/modelcontextprotocol)
- **This Project**: Create an issue in your repository
---
**Note**: This guide uses environment variables (`$GCLOUD_PROJECT_ID` and `$GCLOUD_REGION`) throughout. Make sure to load them with `source scripts/load-env.sh` before running commands.