# Komodo MCP Server - Real-World Examples
Practical examples demonstrating common Komodo workflows using the MCP server with Claude.
## Table of Contents
1. [Basic Server Management](#basic-server-management)
2. [Application Deployment](#application-deployment)
3. [CI/CD Integration](#cicd-integration)
4. [Monitoring and Alerts](#monitoring-and-alerts)
5. [Troubleshooting Workflows](#troubleshooting-workflows)
6. [Multi-Environment Management](#multi-environment-management)
7. [Automation Scenarios](#automation-scenarios)
8. [Code Integration Examples](#code-integration-examples)
9. [Tips for Effective Usage](#tips-for-effective-usage)
---
## Basic Server Management
### Example 1: Server Health Check
**Scenario**: Check the health of all production servers.
**Conversation with Claude**:
```
User: Check the health of all my production servers
Claude: I'll check the status of your production servers.
[Uses komodo_read_ListServers with filtering]
Here's what I found:
Production Servers (5 total):
1. production-api (srv_abc123) - ✅ Running
- CPU: 45%, Memory: 2.1GB/4GB
- Uptime: 15d 4h 30m
2. production-web (srv_def456) - ✅ Running
- CPU: 30%, Memory: 1.8GB/4GB
- Uptime: 15d 4h 25m
3. production-db (srv_ghi789) - ⚠️ High CPU
- CPU: 92%, Memory: 6.5GB/8GB
- Uptime: 45d 2h 10m
4. production-cache (srv_jkl012) - ✅ Running
- CPU: 20%, Memory: 512MB/2GB
- Uptime: 45d 2h 08m
5. production-worker (srv_mno345) - ✅ Running
- CPU: 55%, Memory: 3.2GB/4GB
- Uptime: 8d 12h 45m
Alert: production-db (srv_ghi789) is experiencing high CPU usage (92%).
Would you like me to investigate or restart the server?
```
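Behind the scenes, this is a single read tool call. Here's a minimal sketch of the same query made programmatically, assuming a connected `client` set up as in Example 12 below; the `status` filter follows that example, while the `tags` filter is a hypothetical parameter:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Fetch all running servers. Filter fields beyond `status` are assumptions;
// check the tool schema exposed by your Komodo MCP server.
async function listProductionServers(client: Client) {
  const result = await client.callTool({
    name: 'komodo_read_ListServers',
    arguments: {
      status: 'running',
      tags: ['production'], // hypothetical tag filter
    },
  });
  console.log(JSON.stringify(result, null, 2));
}
```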
---
### Example 2: Restart Problematic Server
**Conversation with Claude**:
```
User: Yes, please restart the database server
Claude: I'll restart the production database server (srv_ghi789).
[Uses komodo_execute_RestartServer]
Restart initiated for production-db:
- Status: Restarting
- Timeout: 120 seconds
- Waiting for healthy status
... (waits 30 seconds) ...
[Uses komodo_read_GetServer to check status]
Restart completed successfully! ✅
Production-db (srv_ghi789):
- Status: Running
- CPU: 35% (down from 92%)
- Memory: 4.2GB/8GB
- Uptime: 2m 15s
- Health: Healthy
The database server has been restarted and is now operating normally.
```
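The restart-then-verify pattern is straightforward to script. A sketch, assuming a connected `client` (see Example 12) and that both tools take an `id` argument as the transcript suggests:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Restart a server, then poll its status until it reports back (or we give up).
async function restartAndVerify(client: Client, serverId: string) {
  await client.callTool({
    name: 'komodo_execute_RestartServer',
    arguments: { id: serverId },
  });
  for (let attempt = 0; attempt < 12; attempt++) {
    await sleep(10_000); // poll every 10s, up to ~2 minutes total
    const status = await client.callTool({
      name: 'komodo_read_GetServer',
      arguments: { id: serverId },
    });
    console.log(`Poll #${attempt + 1}:`, JSON.stringify(status));
    // Inspect `status` and break once the server reports healthy; the exact
    // response shape depends on the MCP server's implementation.
  }
}
```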
---
## Application Deployment
### Example 3: Deploy New Application Version
**Scenario**: Deploy a new version of the API application.
**Conversation with Claude**:
```
User: I need to deploy version v2.2.0 of the API app
Claude: I'll help you deploy v2.2.0 of the API. Let me first check the current deployment status.
[Uses komodo_read_ListDeployments with filtering]
Current API Deployments:
- api-v2 (dep_456def) on production-api
- Current version: v2.1.0
- Status: Running
- Uptime: 6d 2h 15m
Do you want me to:
1. Build the new version first, or
2. Deploy an existing v2.2.0 image?
User: Build it first
Claude: I'll trigger a build for v2.2.0.
[Uses komodo_read_GetDeployment to find build config]
[Uses komodo_execute_Build]
Build started:
- Build ID: bld_xyz789
- Build Number: #43
- Branch: main
- Commit: abc123def (v2.2.0 tag)
- Status: Building
... (monitors build progress) ...
[Periodically checks build status]
Build completed successfully! ✅
- Duration: 3m 24s
- Image: myregistry.io/myapp:v2.2.0
- Size: 485MB
Now deploying to production...
[Uses komodo_execute_Deploy with stopBeforeStart option]
Deployment in progress:
- Deployment: api-v2 (dep_456def)
- Image: myregistry.io/myapp:v2.2.0
- Steps:
✅ Pulling image
✅ Stopping container
✅ Creating new container
✅ Starting container
⏳ Health check in progress...
... (waits for health check) ...
Deployment successful! ✅
- Version: v2.2.0
- Status: Running
- Health: Healthy
- Deployment time: 2m 45s
The API has been updated to v2.2.0 and is running smoothly.
```
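Programmatically, this build-then-deploy flow chains two execute tools. A sketch under the same assumptions (connected `client` per Example 12; `stopBeforeStart` as shown there; the IDs are placeholders):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Trigger a build, then deploy the result. In practice you would poll the
// build status between the two calls (see the polling sketch in Example 2)
// rather than deploying blindly.
async function buildAndDeploy(client: Client, buildId: string, deploymentId: string) {
  await client.callTool({
    name: 'komodo_execute_Build',
    arguments: { id: buildId },
  });
  // ...wait here for the build to succeed...
  await client.callTool({
    name: 'komodo_execute_Deploy',
    arguments: {
      id: deploymentId,
      options: { stopBeforeStart: true },
    },
  });
}
```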
---
### Example 4: Rollback Deployment
**Conversation with Claude**:
```
User: There's an issue with v2.2.0, roll it back to v2.1.0
Claude: I'll immediately roll back the API to v2.1.0.
[Uses komodo_read_GetDeployment to check deployment history]
Previous version: v2.1.0 (image: myregistry.io/myapp:v2.1.0)
Rolling back...
[Uses komodo_execute_Deploy with specific image tag]
Rollback in progress:
- From: v2.2.0
- To: v2.1.0
- Steps:
✅ Pulling image (v2.1.0)
✅ Stopping container
✅ Creating new container
✅ Starting container
✅ Health check passed
Rollback completed! ✅
- Version: v2.1.0
- Status: Running
- Health: Healthy
- Rollback time: 1m 58s
The API has been successfully rolled back to v2.1.0. What was the issue with v2.2.0?
```
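A rollback is just a deploy pinned to a known-good image. A sketch, assuming a connected `client` (Example 12); the `image` override option is an assumption, not a confirmed tool parameter:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Roll back by redeploying a previous image tag. Read the deployment first
// so the prior version can be confirmed (and logged) before acting.
async function rollback(client: Client, deploymentId: string, previousImage: string) {
  const current = await client.callTool({
    name: 'komodo_read_GetDeployment',
    arguments: { id: deploymentId },
  });
  console.log('Current deployment state:', JSON.stringify(current));
  await client.callTool({
    name: 'komodo_execute_Deploy',
    arguments: {
      id: deploymentId,
      options: { image: previousImage }, // hypothetical override option
    },
  });
}
```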
---
## CI/CD Integration
### Example 5: Automated Deploy After Build
**Scenario**: Automatically deploy after successful build triggered by git push.
**Conversation with Claude**:
```
User: Check if there are any new builds ready to deploy
Claude: Let me check for recent successful builds.
[Uses komodo_read_ListBuilds with status filter]
Recent Successful Builds:
1. api-build (bld_xyz789) - Ready to Deploy
- Build #44
- Branch: main
- Commit: def456abc (feat: add user authentication)
- Built: 5 minutes ago
- Image: myregistry.io/myapp:v2.3.0
2. web-build (bld_abc123) - Already Deployed
- Build #28
- Branch: main
- Commit: ghi789jkl
- Built: 2 hours ago
Build #44 is ready. Would you like me to deploy v2.3.0 to staging first for testing?
User: Yes, deploy to staging
Claude: Deploying v2.3.0 to staging environment.
[Uses komodo_read_ListDeployments with staging filter]
[Uses komodo_execute_Deploy]
Staging Deployment:
- Deployment: api-staging (dep_staging_123)
- Version: v2.3.0
- Status: Deploying...
✅ Pulling image
✅ Stopping old container
✅ Starting new container
✅ Health check passed
Staging deployment successful! ✅
Testing checklist:
- [ ] Verify new authentication endpoints
- [ ] Run integration tests
- [ ] Check error logs
- [ ] Monitor performance
Would you like me to:
1. Run automated tests, or
2. Wait for manual verification before deploying to production?
```
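To script this check, list builds and read the result content blocks. A sketch (connected `client` per Example 12; the `status` filter for ListBuilds is assumed by analogy with ListServers):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Find recent successful builds that may be ready to deploy.
async function findDeployableBuilds(client: Client) {
  const builds = await client.callTool({
    name: 'komodo_read_ListBuilds',
    arguments: { status: 'success' }, // assumed filter name
  });
  // Tool results arrive as MCP content blocks; text blocks carry the data.
  const { content } = builds as { content?: Array<{ type: string; text?: string }> };
  for (const block of content ?? []) {
    if (block.type === 'text') console.log(block.text);
  }
}
```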
---
## Monitoring and Alerts
### Example 6: Investigating Critical Alert
**Scenario**: Respond to a critical alert about high memory usage.
**Conversation with Claude**:
```
User: What critical alerts do I have?
Claude: Checking for critical alerts...
[Uses komodo_read_ListAlerts with severity filter]
You have 1 critical alert:
🚨 High Memory Usage on production-worker (alt_789ghi)
- Severity: Critical
- Resource: Server srv_mno345 (production-worker)
- Triggered: 10 minutes ago
- Message: Memory usage has exceeded 90% for 15 minutes
- Current: 3.8GB / 4GB (95%)
- Threshold: 90%
Let me investigate the server.
[Uses komodo_read_GetServer]
Server Details:
- Name: production-worker
- Status: Running
- CPU: 75%
- Memory: 3.8GB / 4GB (95%)
- Uptime: 8d 12h 55m
Deployments on this server:
1. worker-queue (dep_worker_123) - Running
- Memory: 2.1GB
2. worker-jobs (dep_worker_456) - Running
- Memory: 1.7GB
Analysis:
- Total deployment memory: 3.8GB
- Server capacity: 4GB
- Utilization: 95%
Recommendations:
1. Restart deployments to clear memory leaks
2. Increase server memory to 8GB
3. Move one deployment to another server
What would you like to do?
User: Restart the deployments
Claude: I'll restart both worker deployments to clear any memory leaks.
[Uses komodo_execute_Deploy with restart for both deployments]
Restarting worker-queue...
✅ Restart complete (1m 20s)
- Memory: 1.1GB (down from 2.1GB)
Restarting worker-jobs...
✅ Restart complete (1m 15s)
- Memory: 890MB (down from 1.7GB)
[Uses komodo_read_GetServer to verify]
Server Status After Restart:
- Memory: 2.0GB / 4GB (50%)
- CPU: 45%
- Health: Healthy
The memory issue has been resolved! ✅
Total memory usage dropped from 95% to 50%.
Would you like me to monitor this server for the next hour to ensure stability?
```
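This triage loop (read alerts, then restart the affected deployments one at a time so the worker pool never drops out entirely) can be sketched as follows; connected `client` per Example 12, and the `severity` filter is a hypothetical parameter:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Read open critical alerts, then redeploy the affected deployments
// sequentially to clear leaked memory.
async function triageMemoryAlert(client: Client, deploymentIds: string[]) {
  const alerts = await client.callTool({
    name: 'komodo_read_ListAlerts',
    arguments: { severity: 'critical' }, // hypothetical filter
  });
  console.log('Open critical alerts:', JSON.stringify(alerts));
  for (const id of deploymentIds) {
    await client.callTool({
      name: 'komodo_execute_Deploy',
      arguments: { id }, // redeploy in place
    });
  }
}
```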
---
### Example 7: Proactive Monitoring
**Conversation with Claude**:
```
User: Give me a complete infrastructure health report
Claude: Generating comprehensive infrastructure health report...
[Uses multiple read tools in parallel]
[komodo_read_ListServers]
[komodo_read_ListDeployments]
[komodo_read_ListAlerts]
Infrastructure Health Report
Generated: 2026-01-26 15:30:00 UTC
SERVERS (5 total)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Running: 5
❌ Stopped: 0
⚠️ Issues: 0
Server Details:
1. production-api - ✅ Healthy (CPU: 45%, Mem: 52%)
2. production-web - ✅ Healthy (CPU: 30%, Mem: 45%)
3. production-db - ✅ Healthy (CPU: 35%, Mem: 53%)
4. production-cache - ✅ Healthy (CPU: 20%, Mem: 26%)
5. production-worker - ✅ Healthy (CPU: 45%, Mem: 50%)
DEPLOYMENTS (12 total)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Running: 12
❌ Stopped: 0
🔄 Deploying: 0
❌ Failed: 0
Recent Deployments (Last 24h):
- api-v2 → v2.3.0 (2 hours ago) - ✅ Success
- worker-queue (restarted 20 minutes ago) - ✅ Success
- worker-jobs (restarted 20 minutes ago) - ✅ Success
ALERTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚨 Critical: 0
⚠️ Warning: 2
ℹ️ Info: 1
Active Warnings:
1. Disk space at 75% on production-db
- Triggered: 2 hours ago
- Warning threshold: 70%
- Current: 75GB / 100GB
2. Build queue at 80% capacity
- Triggered: 30 minutes ago
- Pending builds: 4
SYSTEM HEALTH SCORE: 92/100 ✅
Summary:
- All servers operational
- All deployments running normally
- Memory issue on worker resolved
- 2 minor warnings requiring attention
Recommendations:
1. Clean up old backups on production-db to free disk space
2. Process pending build queue
3. Consider scaling worker server memory to 8GB
Everything looks good! No urgent actions required.
```
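Because the three inventory reads are independent, a script can issue them concurrently. A sketch, assuming a connected `client` (Example 12):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Gather the raw material for a health report with concurrent reads.
async function healthReport(client: Client) {
  const [servers, deployments, alerts] = await Promise.all([
    client.callTool({ name: 'komodo_read_ListServers', arguments: {} }),
    client.callTool({ name: 'komodo_read_ListDeployments', arguments: {} }),
    client.callTool({ name: 'komodo_read_ListAlerts', arguments: {} }),
  ]);
  return { servers, deployments, alerts, generatedAt: new Date().toISOString() };
}
```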
---
## Troubleshooting Workflows
### Example 8: Debugging Failed Deployment
**Scenario**: A deployment failed and you need to diagnose the issue.
**Conversation with Claude**:
```
User: My api-staging deployment failed, what happened?
Claude: Let me investigate the failed deployment.
[Uses komodo_read_GetDeployment]
Deployment: api-staging (dep_staging_123)
- Status: Failed
- Last attempt: 5 minutes ago
- Error: Container failed health check
Deployment History:
- Attempt #2 (5 min ago): Failed - Health check timeout
- Attempt #1 (10 min ago): Failed - Health check timeout
- Previous successful: v2.2.0 (2 hours ago)
Let me check the server and deployment logs...
[Uses komodo_read_GetServer]
Server: staging-api (srv_staging_001)
- Status: Running
- CPU: 25%
- Memory: 1.2GB / 4GB
- No server-level issues
Diagnosis:
The deployment container is starting but failing health checks. This typically means:
1. Application is crashing on startup
2. Health check endpoint not responding
3. Port configuration issue
4. Environment variable problem
Let me check the deployment configuration...
[Shows deployment config from previous Get call]
Configuration:
- Image: myregistry.io/myapp:v2.3.0
- Health check: GET /health
- Health check timeout: 10s
- Ports: 8080:80
Possible Issues:
1. The /health endpoint may have changed in v2.3.0
2. Port 8080 might be misconfigured
3. Missing environment variables
Recommendations:
1. Check build logs for v2.3.0
2. Verify health endpoint in new version
3. Try deploying with extended health check timeout
Would you like me to:
A) Roll back to v2.2.0, or
B) Try deploying with extended timeout?
User: Try extended timeout first
Claude: I'll redeploy with extended health check timeout.
[Uses komodo_execute_Deploy with custom health check timeout]
Deploying with 30s health check timeout...
- Pulling image
- Starting container
- Waiting for health check (30s timeout)...
... (waits) ...
✅ Deployment successful!
The issue was the health check timeout. The new version takes ~15 seconds to initialize the database connection pool, which exceeded the default 10s timeout.
Recommendation: Update the default health check timeout for this deployment to 30s to prevent future issues.
Would you like me to update the deployment configuration?
```
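If your Komodo MCP server exposes per-deploy health-check overrides, the retry looks like this. The `healthCheckTimeout` option name is an assumption; check the tool schema for the real field:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Retry a failed deployment with a longer health-check window.
async function deployWithLongerTimeout(client: Client, deploymentId: string) {
  await client.callTool({
    name: 'komodo_execute_Deploy',
    arguments: {
      id: deploymentId,
      options: { healthCheckTimeout: 30 }, // seconds; assumed option name
    },
  });
}
```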
---
## Multi-Environment Management
### Example 9: Progressive Deployment Across Environments
**Scenario**: Deploy a new feature progressively through staging → production.
**Conversation with Claude**:
```
User: Deploy v2.4.0 to staging, then production if successful
Claude: I'll deploy v2.4.0 progressively through your environments.
Phase 1: Building v2.4.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Uses komodo_execute_Build]
Build started for v2.4.0...
✅ Build complete (3m 15s)
- Image: myregistry.io/myapp:v2.4.0
- Tests: All passed
Phase 2: Staging Deployment
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Uses komodo_execute_Deploy on staging]
Deploying to staging-api (dep_staging_123)...
✅ Deployment successful (2m 10s)
- Version: v2.4.0
- Health: Healthy
Running staging validation tests...
[Simulated test execution]
Staging Tests:
✅ API endpoints responding
✅ Database connectivity OK
✅ Authentication working
✅ New feature functional
✅ Performance within limits
Staging validation: PASSED ✅
Phase 3: Production Deployment
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Proceeding to production deployment...
[Uses komodo_read_ListDeployments to find all production deployments]
Production deployments to update:
1. api-v2 on production-api-1 (dep_prod_1)
2. api-v2 on production-api-2 (dep_prod_2)
3. api-v2 on production-api-3 (dep_prod_3)
Deploying in rolling fashion (one at a time)...
Deploying to production-api-1...
[Uses komodo_execute_Deploy]
✅ Complete (2m 05s) - Health: Healthy
Waiting 2 minutes for stability check...
✅ No errors detected
Deploying to production-api-2...
✅ Complete (2m 08s) - Health: Healthy
Waiting 2 minutes for stability check...
✅ No errors detected
Deploying to production-api-3...
✅ Complete (2m 03s) - Health: Healthy
Final health check...
✅ All production instances healthy
Phase 4: Verification
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Uses komodo_read_ListDeployments and komodo_read_ListAlerts]
Deployment Summary:
- Staging: v2.4.0 (Healthy)
- Production-1: v2.4.0 (Healthy)
- Production-2: v2.4.0 (Healthy)
- Production-3: v2.4.0 (Healthy)
Active Alerts: None
Error Rate: 0.02% (normal)
Response Time: 145ms avg (good)
Progressive deployment complete! ✅
Total time: 18 minutes
All environments running v2.4.0 successfully.
```
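The rolling pattern generalizes to any list of instances: deploy one, wait out a stability window, then continue. A sketch (connected `client` per Example 12):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Roll a release across instances one at a time, pausing between each so
// a bad build surfaces before it reaches the whole fleet.
async function rollingDeploy(client: Client, deploymentIds: string[]) {
  for (const id of deploymentIds) {
    await client.callTool({
      name: 'komodo_execute_Deploy',
      arguments: { id, options: { stopBeforeStart: true } },
    });
    await sleep(120_000); // 2-minute stability window between instances
    // Check alerts here and abort the rollout if anything fires.
  }
}
```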
---
## Automation Scenarios
### Example 10: Scheduled Maintenance Procedure
**Scenario**: Run a scheduled database backup procedure.
**Conversation with Claude**:
```
User: Run the daily backup procedure
Claude: Executing daily backup procedure...
[Uses komodo_read_GetProcedure to understand steps]
Procedure: daily-backup (proc_123abc)
Steps:
1. Execute database backup action
2. Wait 30 seconds
3. Upload to S3
4. Verify backup integrity
5. Clean old backups
Starting execution...
[Uses komodo_execute_RunProcedure]
Step 1/5: Database Backup
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Running backup action...
✅ Backup created: backup-2026-01-26.sql.gz
- Size: 2.3GB
- Duration: 45s
Step 2/5: Wait
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Waiting 30 seconds for file system sync...
✅ Complete
Step 3/5: Upload to S3
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Uploading to s3://prod-backups/2026/01/26/
✅ Upload complete
- Transfer: 2.3GB in 1m 45s
- Location: s3://prod-backups/2026/01/26/backup-2026-01-26.sql.gz
Step 4/5: Verify Integrity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Checking backup file integrity...
✅ MD5 checksum verified
✅ File is not corrupted
Step 5/5: Cleanup
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removing backups older than 30 days...
Deleted:
- backup-2025-12-27.sql.gz (2.1GB)
- backup-2025-12-26.sql.gz (2.0GB)
✅ Cleanup complete (freed 4.1GB)
Procedure Complete! ✅
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Duration: 5m 30s
Status: Success
Backup: backup-2026-01-26.sql.gz (2.3GB)
Location: s3://prod-backups/2026/01/26/
Retention: 30 days
The daily backup procedure completed successfully.
```
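Scripted, the procedure run is a read (to log what is about to happen) followed by one execute call. A sketch, assuming a connected `client` (Example 12):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Fetch the procedure definition for the audit log, then run it.
async function runDailyBackup(client: Client, procedureId: string) {
  const definition = await client.callTool({
    name: 'komodo_read_GetProcedure',
    arguments: { id: procedureId },
  });
  console.log('About to run:', JSON.stringify(definition));
  await client.callTool({
    name: 'komodo_execute_RunProcedure',
    arguments: { id: procedureId },
  });
}
```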
---
### Example 11: Infrastructure Scaling
**Scenario**: Handle traffic spike by scaling deployments.
**Conversation with Claude**:
```
User: We're seeing high traffic, can you scale up the web servers?
Claude: I'll check current load and scale the web servers.
[Uses komodo_read_ListServers and komodo_read_ListDeployments]
Current Web Infrastructure:
- production-web-1 (CPU: 85%, Memory: 75%)
- web-frontend (dep_web_1): Running
- production-web-2 (CPU: 80%, Memory: 70%)
- web-frontend (dep_web_2): Running
Both servers are under high load. I can:
1. Scale horizontally (add more servers)
2. Scale vertically (increase resources)
3. Both
For immediate relief, I recommend adding 2 more web server instances.
Note: This requires creating new deployments. Do you have spare servers or should I recommend using auto-scaling groups?
User: We have staging servers we can use temporarily
Claude: Great! I'll deploy web-frontend to your staging servers.
[Uses komodo_read_ListServers with staging tag]
Available Staging Servers:
- staging-web-1 (srv_staging_web_1) - Available
- staging-web-2 (srv_staging_web_2) - Available
Deploying web-frontend to staging servers...
[Uses komodo_execute_Deploy for each]
Deployment 1: staging-web-1
✅ Deployed web-frontend (dep_temp_1)
- Version: v2.4.0 (matching production)
- Status: Running
- Health: Healthy
Deployment 2: staging-web-2
✅ Deployed web-frontend (dep_temp_2)
- Version: v2.4.0
- Status: Running
- Health: Healthy
Scaled Web Infrastructure:
- production-web-1 (CPU: 85% → 60%)
- production-web-2 (CPU: 80% → 55%)
- staging-web-1 (new) (CPU: 15%)
- staging-web-2 (new) (CPU: 12%)
Load distributed across 4 instances! ✅
Recommendation: Update your load balancer to include:
- staging-web-1: <IP address>
- staging-web-2: <IP address>
When traffic normalizes, I can remove the staging instances.
```
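If deployments already exist for the spare servers (like dep_temp_1 and dep_temp_2 above), the scale-out is a simple loop. A sketch (connected `client` per Example 12); note that binding a deployment to a target server is configured in Komodo itself, not in this call:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

// Temporary scale-out: start the same image on spare capacity.
async function scaleOut(client: Client, deploymentIds: string[]) {
  for (const id of deploymentIds) {
    await client.callTool({
      name: 'komodo_execute_Deploy',
      arguments: { id },
    });
  }
  // Remember to add the new instances to your load balancer, and tear
  // them down again once traffic normalizes.
}
```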
---
## Code Integration Examples
### Example 12: Using MCP Server from TypeScript
For programmatic access, you can use the MCP SDK:
```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Initialize the MCP client over stdio, pointing at the Komodo MCP server
const transport = new StdioClientTransport({
  command: 'node',
  args: ['path/to/komodo-mcp/dist/index.js'],
  env: {
    KOMODO_URL: 'https://komodo.example.com',
    KOMODO_API_KEY: 'your-api-key',
    KOMODO_API_SECRET: 'your-api-secret'
  }
});

const client = new Client({
  name: 'komodo-automation',
  version: '1.0.0'
}, {
  capabilities: {}
});

await client.connect(transport);

// List all running servers
const servers = await client.callTool({
  name: 'komodo_read_ListServers',
  arguments: { status: 'running' }
});
console.log('Running servers:', servers);

// Deploy application
const deployment = await client.callTool({
  name: 'komodo_execute_Deploy',
  arguments: {
    id: 'dep_456def',
    options: { stopBeforeStart: true }
  }
});
console.log('Deployment started:', deployment);

await client.close();
```
---
### Example 13: Integration with CI/CD Pipeline
GitHub Actions workflow example:
```yaml
name: Deploy to Komodo

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      KOMODO_URL: ${{ secrets.KOMODO_URL }}
      KOMODO_API_KEY: ${{ secrets.KOMODO_API_KEY }}
      KOMODO_API_SECRET: ${{ secrets.KOMODO_API_SECRET }}
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install Komodo MCP
        run: npm install -g komodo-mcp

      - name: Deploy to Staging
        run: |
          # Use the Claude CLI (with the Komodo MCP server configured) to deploy
          claude -p "Deploy ${GITHUB_SHA} to staging"

      - name: Run Tests
        run: npm test

      - name: Deploy to Production
        if: success()
        run: |
          claude -p "Deploy ${GITHUB_SHA} to production"
```
---
## Tips for Effective Usage
1. **Be Specific**: Include resource IDs when you know them
- Good: "Deploy dep_456def to production"
- Less specific: "Deploy the API"
2. **Use Natural Language**: Claude understands context
- "Check if the database server is healthy"
- "Show me all failed deployments from today"
3. **Ask for Recommendations**: Claude can suggest actions
- "What should I do about the high CPU alert?"
- "How can I improve deployment reliability?"
4. **Chain Operations**: Claude can execute multi-step workflows
- "Build version 2.5.0, deploy to staging, then promote to production if tests pass"
5. **Request Reports**: Get comprehensive overviews
- "Give me a complete health report"
- "Show me all changes in the last 24 hours"