# DP-MCP CLI Reference
The DP-MCP CLI tool provides direct command-line access to all PostgreSQL and MinIO operations without the overhead of the MCP protocol, which makes it well suited for testing, debugging, and scripting.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [PostgreSQL Commands](#postgresql-commands)
- [MinIO Commands](#minio-commands)
- [Combined Operations](#combined-operations)
- [Advanced Usage](#advanced-usage)
- [Scripting Examples](#scripting-examples)
- [Troubleshooting](#troubleshooting)
- [Integration with Other Tools](#integration-with-other-tools)
## Installation
The CLI tool is included with your DP-MCP server installation. No additional setup required.
### Prerequisites
- Docker services running (`docker-compose up -d`)
- Python environment activated via `uv`
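
A quick sanity check for both prerequisites (service names depend on your `docker-compose.yml`):
```bash
# Both commands should succeed before using the CLI
docker-compose ps    # the PostgreSQL and MinIO containers should report "Up"
uv --version         # confirms uv is installed and on your PATH
```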
### Usage Methods
```bash
# Method 1: Direct Python execution
uv run python cli.py <command> [options]
# Method 2: Using wrapper script (recommended)
./dp-cli <command> [options]
# Method 3: Add to PATH (optional)
export PATH="$PATH:/path/to/da.dp-mcp"
dp-cli <command> [options]
```
## Quick Start
### Display Help
```bash
# Show all available commands
./dp-cli --help
# Show help for specific command
./dp-cli describe-table --help
```
### Test Connection
```bash
# Verify database connection
./dp-cli list-tables
# Verify MinIO connection
./dp-cli list-buckets
```
## PostgreSQL Commands
### `list-tables` - List Database Tables
**Syntax:**
```bash
./dp-cli list-tables [--schema SCHEMA]
```
**Parameters:**
- `--schema`: Database schema name (default: "public")
**Examples:**
```bash
# List all tables in public schema
./dp-cli list-tables
# List tables in specific schema
./dp-cli list-tables --schema analytics
# List tables in the information_schema system schema
./dp-cli list-tables --schema information_schema
```
**Output:**
```
🚀 Listing Database Tables
============================
Tables in schema 'public':
• users (TABLE)
• products (TABLE)
• orders (TABLE)
✅ Tables listed successfully
```
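Note that `--schema` targets a single schema per invocation. To discover which schemas exist, the `query` command can enumerate them via the standard PostgreSQL catalog:
```bash
# List every schema, then run list-tables against each one as needed
./dp-cli query "SELECT schema_name FROM information_schema.schemata ORDER BY schema_name"
```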
### `describe-table` - Describe Table Structure
**Syntax:**
```bash
./dp-cli describe-table TABLE_NAME [--schema SCHEMA]
```
**Parameters:**
- `TABLE_NAME`: Name of the table to describe (required)
- `--schema`: Database schema name (default: "public")
**Examples:**
```bash
# Describe users table
./dp-cli describe-table users
# Describe table in specific schema
./dp-cli describe-table sales --schema analytics
```
**Output:**
```
🚀 Describing Table: users
==========================
Table: public.users
Columns:
------------------------------------------------------------
id           integer        NOT NULL  DEFAULT
username     varchar(50)    NOT NULL
email        varchar(100)   NOT NULL
full_name    varchar(100)   NULL
is_active    boolean        NULL      DEFAULT
created_at   timestamp      NULL      DEFAULT
✅ Table described successfully
```
### `query` - Execute SQL Query
**Syntax:**
```bash
./dp-cli query "SQL_QUERY" [--limit LIMIT]
```
**Parameters:**
- `SQL_QUERY`: SQL query to execute (required, in quotes)
- `--limit`: Maximum rows to return (default: 1000)
**Examples:**
```bash
# Simple SELECT query
./dp-cli query "SELECT * FROM users LIMIT 5"
# Aggregation query
./dp-cli query "SELECT COUNT(*) as total_users FROM users"
# Complex JOIN query
./dp-cli query "SELECT u.username, COUNT(o.id) as orders FROM users u LEFT JOIN orders o ON u.id = o.user_id GROUP BY u.username"
# Query with custom limit
./dp-cli query "SELECT * FROM large_table" --limit 100
# Data modification (use with caution)
./dp-cli query "UPDATE users SET last_login = NOW() WHERE id = 1"
```
**Output:**
```
🚀 Executing SQL Query
=======================
ℹ️ Query: SELECT username, email FROM users LIMIT 3
ℹ️ Limit: 1000
Query executed successfully. 3 rows returned:
username   | email            |
-------------------------------
john_doe   | john@example.com |
jane_smith | jane@example.com |
bob_wilson | bob@example.com  |
✅ Query executed successfully
```
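Shell quoting is the main pitfall here: wrap the SQL in double quotes and use single quotes for SQL string literals so the shell passes them through untouched. A sketch (the quoted-identifier table in the second command is hypothetical):
```bash
# Single quotes inside double quotes reach PostgreSQL unchanged
./dp-cli query "SELECT * FROM users WHERE username = 'john_doe'"
# If the SQL itself needs double-quoted identifiers, switch the outer quotes
./dp-cli query 'SELECT "userId" FROM "LegacyUsers" LIMIT 5'
```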
### `export-csv` - Export Table Data
**Syntax:**
```bash
./dp-cli export-csv TABLE_NAME [--limit LIMIT] [--where WHERE_CLAUSE] [--output FILE]
```
**Parameters:**
- `TABLE_NAME`: Name of the table to export (required)
- `--limit`: Maximum rows to export (default: 10000)
- `--where`: SQL WHERE clause for filtering (without the "WHERE" keyword)
- `--output`: Output file path (default: display to console)
**Examples:**
```bash
# Export entire table to console
./dp-cli export-csv users
# Export to file
./dp-cli export-csv users --output users_backup.csv
# Export with row limit
./dp-cli export-csv large_table --limit 1000 --output sample_data.csv
# Export with filtering
./dp-cli export-csv orders --where "order_date >= '2025-01-01'" --output recent_orders.csv
# Export active users only
./dp-cli export-csv users --where "is_active = true" --limit 500
```
**Output:**
```
🚀 Exporting Table: users
=========================
ℹ️ Limit: 500
ℹ️ Filter: is_active = true
id,username,email,full_name,is_active,created_at
1,john_doe,john@example.com,John Doe,t,2025-01-15 10:30:00
2,jane_smith,jane@example.com,Jane Smith,t,2025-01-16 11:45:00
✅ CSV exported to: users_active.csv
```
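Standard Unix tools are handy for sanity-checking an export (the file name follows the earlier example):
```bash
head -5 users_backup.csv                   # preview the header and first rows
wc -l users_backup.csv                     # row count, including the header line
column -s, -t < users_backup.csv | head    # aligned column preview
```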
## MinIO Commands
### `list-buckets` - List All Buckets
**Syntax:**
```bash
./dp-cli list-buckets
```
**Examples:**
```bash
./dp-cli list-buckets
```
**Output:**
```
🚀 Listing MinIO Buckets
=========================
Available buckets:
• default-bucket (created: 2025-01-29 10:30:00)
• backups (created: 2025-01-29 10:30:01)
• data-lake (created: 2025-01-29 10:30:02)
✅ Buckets listed successfully
```
### `list-objects` - List Objects in Bucket
**Syntax:**
```bash
./dp-cli list-objects BUCKET_NAME [--prefix PREFIX] [--max-keys MAX_KEYS]
```
**Parameters:**
- `BUCKET_NAME`: Name of the bucket (required)
- `--prefix`: Object prefix to filter by
- `--max-keys`: Maximum objects to return (default: 1000)
**Examples:**
```bash
# List all objects in bucket
./dp-cli list-objects default-bucket
# List objects with prefix
./dp-cli list-objects backups --prefix "2025/01/"
# Limit number of objects
./dp-cli list-objects data-lake --max-keys 50
# List CSV files only
./dp-cli list-objects exports --prefix "csv/"
```
**Output:**
```
🚀 Listing Objects in Bucket: default-bucket
============================================
Objects in bucket 'default-bucket':
Name                            Size (MB)  Last Modified
----------------------------------------------------------
test.txt                        0.01       2025-01-29 15:30:00
data/users_20250129_143022.csv  1.25       2025-01-29 14:30:22
reports/monthly_summary.json    0.15       2025-01-29 12:15:30
✅ Objects listed successfully
```
### `upload` - Upload Data to MinIO
**Syntax:**
```bash
./dp-cli upload BUCKET_NAME OBJECT_NAME [DATA] [--file FILE] [--content-type TYPE]
```
**Parameters:**
- `BUCKET_NAME`: Destination bucket name (required)
- `OBJECT_NAME`: Object key/path (required)
- `DATA`: Text data to upload (provide either `DATA` or `--file`, not both)
- `--file`: Upload from file instead of command line
- `--content-type`: MIME type (default: "text/plain")
**Examples:**
```bash
# Upload text data
./dp-cli upload default-bucket hello.txt "Hello, World!"
# Upload from file
./dp-cli upload backups users_backup.csv --file /path/to/users.csv
# Upload JSON with correct content type
./dp-cli upload data-lake config.json --file config.json --content-type "application/json"
# Upload to nested path
./dp-cli upload backups "2025/01/29/daily_backup.sql" --file backup.sql
```
**Output:**
```
🚀 Uploading to MinIO
=====================
ℹ️ Bucket: default-bucket
ℹ️ Object: hello.txt
ℹ️ Content Type: text/plain
Successfully uploaded 'hello.txt' to bucket 'default-bucket'
✅ Upload completed successfully
```
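If you would rather not hard-code MIME types, `file(1)` can usually detect one for you on platforms where it supports `--mime-type` (`report.pdf` is a hypothetical example):
```bash
# Detect the MIME type, then pass it through to the upload
TYPE=$(file --brief --mime-type report.pdf)
./dp-cli upload data-lake report.pdf --file report.pdf --content-type "$TYPE"
```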
### `download` - Download Object from MinIO
**Syntax:**
```bash
./dp-cli download BUCKET_NAME OBJECT_NAME [--output FILE]
```
**Parameters:**
- `BUCKET_NAME`: Source bucket name (required)
- `OBJECT_NAME`: Object key/path (required)
- `--output`: Output file path (default: display to console)
**Examples:**
```bash
# Download to console
./dp-cli download default-bucket hello.txt
# Download to file
./dp-cli download backups users_backup.csv --output restored_users.csv
# Download from nested path
./dp-cli download data-lake "reports/2025/summary.json" --output summary.json
```
**Output:**
```
🚀 Downloading from MinIO
=========================
ℹ️ Bucket: default-bucket
ℹ️ Object: hello.txt
--- Object Content ---
Hello, World!
--- End of Content ---
✅ Object downloaded and displayed
```
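A quick round-trip check confirms an object survives upload and download intact (file names follow the earlier examples):
```bash
./dp-cli upload backups users_backup.csv --file users_backup.csv
./dp-cli download backups users_backup.csv --output restored_users.csv
diff users_backup.csv restored_users.csv && echo "round-trip OK"
```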
### `create-bucket` - Create New Bucket
**Syntax:**
```bash
./dp-cli create-bucket BUCKET_NAME [--region REGION]
```
**Parameters:**
- `BUCKET_NAME`: Name for the new bucket (required)
- `--region`: S3-style region for the bucket (optional)
**Examples:**
```bash
# Create simple bucket
./dp-cli create-bucket my-new-bucket
# Create bucket with region
./dp-cli create-bucket analytics-data --region us-west-2
# Create backup bucket
./dp-cli create-bucket weekly-backups
```
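Keep in mind that MinIO enforces S3 bucket-naming rules: 3-63 characters, limited to lowercase letters, digits, and hyphens:
```bash
./dp-cli create-bucket analytics-2025    # valid: lowercase, digits, hyphens
./dp-cli create-bucket My_Bucket         # will typically be rejected: uppercase and underscore
```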
### `delete-object` - Delete Object
**Syntax:**
```bash
./dp-cli delete-object BUCKET_NAME OBJECT_NAME [--yes]
```
**Parameters:**
- `BUCKET_NAME`: Bucket containing the object (required)
- `OBJECT_NAME`: Object key/path (required)
- `--yes`: Skip confirmation prompt
**Examples:**
```bash
# Delete with confirmation
./dp-cli delete-object default-bucket old-file.txt
# Delete without confirmation
./dp-cli delete-object temp-bucket temp-file.log --yes
# Delete from nested path
./dp-cli delete-object backups "old/2024/backup.sql" --yes
```
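There is no bulk-delete command, but one can be sketched by combining `list-objects` with a loop. This assumes object names appear as the first whitespace-separated field of each listing row, as in the sample output above; adjust the `awk` filter if your output differs:
```bash
# Delete every object under a prefix (sketch; review before running)
./dp-cli list-objects temp-bucket --prefix "old/" \
  | awk '$1 ~ /^old\// {print $1}' \
  | while read -r obj; do
      ./dp-cli delete-object temp-bucket "$obj" --yes
    done
```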
## Combined Operations
### `backup-table` - Backup Table to MinIO
**Syntax:**
```bash
./dp-cli backup-table TABLE_NAME [--bucket BUCKET] [--schema SCHEMA] [--limit LIMIT] [--where WHERE_CLAUSE]
```
**Parameters:**
- `TABLE_NAME`: Name of the table to backup (required)
- `--bucket`: Destination bucket (default: "backups")
- `--schema`: Database schema (default: "public")
- `--limit`: Maximum rows to backup (default: 10000)
- `--where`: SQL WHERE clause for filtering
**Examples:**
```bash
# Simple table backup
./dp-cli backup-table users
# Backup to specific bucket
./dp-cli backup-table users --bucket daily-backups
# Backup with filtering
./dp-cli backup-table orders --where "order_date >= '2025-01-01'" --bucket monthly-archives
# Large table backup with limit
./dp-cli backup-table transactions --limit 100000 --bucket big-data-backups
# Backup from specific schema
./dp-cli backup-table sales_data --schema analytics --bucket analytics-backups
```
**Output:**
```
🚀 Backing up Table: users
==========================
ℹ️ Destination bucket: backups
ℹ️ Schema: public
ℹ️ Limit: 10000
ℹ️ Exporting table data...
ℹ️ Uploading to backups/public_users_20250129_153045.csv...
Successfully uploaded 'public_users_20250129_153045.csv' to bucket 'backups'
✅ Table backed up successfully to backups/public_users_20250129_153045.csv
```
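There is no matching restore command; restoring is the reverse path, downloading the CSV and loading it with your usual PostgreSQL tooling. A sketch (connection details are placeholders; the object name comes from the sample output above):
```bash
./dp-cli download backups public_users_20250129_153045.csv --output restore.csv
# Then load it, e.g. with psql's \copy:
# psql -h <host> -U <user> -d <db> -c "\copy users FROM 'restore.csv' CSV HEADER"
```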
## Advanced Usage
### Piping and Redirection
```bash
# Save query results to file
./dp-cli query "SELECT * FROM users" > users_output.txt
# Chain commands
./dp-cli export-csv users --limit 100 | head -10
# Count exported rows
./dp-cli export-csv orders | wc -l
```
### Environment Variables
```bash
# Override default limits
export DEFAULT_QUERY_LIMIT=5000
./dp-cli query "SELECT * FROM large_table"
# Custom timeouts
export DB_TIMEOUT=120
./dp-cli query "SELECT * FROM slow_view"
```
### Batch Operations
```bash
# Backup multiple tables
for table in users products orders; do
  ./dp-cli backup-table "$table" --bucket nightly-backups
done
# Export all tables
./dp-cli list-tables | grep "•" | awk '{print $2}' | while read -r table; do
  ./dp-cli export-csv "$table" --output "${table}.csv"
done
```
## Scripting Examples
### Daily Backup Script
```bash
#!/bin/bash
# daily_backup.sh
DATE=$(date +%Y%m%d)
BUCKET="daily-backups-${DATE}"
echo "Creating backup bucket..."
./dp-cli create-bucket "$BUCKET"
echo "Backing up critical tables..."
./dp-cli backup-table users --bucket "$BUCKET"
./dp-cli backup-table orders --bucket "$BUCKET"
./dp-cli backup-table products --bucket "$BUCKET"
echo "Backup completed to bucket: $BUCKET"
```
### Data Migration Script
```bash
#!/bin/bash
# migrate_data.sh
SOURCE_TABLE="old_users"
TEMP_FILE="migration_data.csv"
echo "Exporting data from $SOURCE_TABLE..."
./dp-cli export-csv "$SOURCE_TABLE" --output "$TEMP_FILE"
echo "Uploading to MinIO..."
./dp-cli upload migration-data "$TEMP_FILE" --file "$TEMP_FILE"
echo "Cleaning up..."
rm "$TEMP_FILE"
echo "Migration data uploaded successfully"
```
### Health Check Script
```bash
#!/bin/bash
# health_check.sh
echo "Checking database connectivity..."
if ./dp-cli list-tables > /dev/null 2>&1; then
  echo "✅ Database: OK"
else
  echo "❌ Database: FAILED"
  exit 1
fi
echo "Checking MinIO connectivity..."
if ./dp-cli list-buckets > /dev/null 2>&1; then
  echo "✅ MinIO: OK"
else
  echo "❌ MinIO: FAILED"
  exit 1
fi
echo "All services healthy"
```
## Troubleshooting
### Common Issues
**Command Not Found**
```bash
# Make sure script is executable
chmod +x dp-cli
# Use full path
/path/to/da.dp-mcp/dp-cli list-tables
# Or use Python directly
uv run python cli.py list-tables
```
**Connection Errors**
```bash
# Check if services are running
docker-compose ps
# Restart services
docker-compose restart
# Check configuration
uv run python src/dp_mcp/utils/config.py
```
**Permission Errors**
```bash
# Check database permissions
./dp-cli query "SELECT current_user, session_user"
# Check MinIO access
./dp-cli list-buckets
```
**Memory Issues with Large Exports**
```bash
# Use smaller limits
./dp-cli export-csv large_table --limit 1000
# Export in chunks
./dp-cli export-csv large_table --where "id BETWEEN 1 AND 10000" --output chunk1.csv
./dp-cli export-csv large_table --where "id BETWEEN 10001 AND 20000" --output chunk2.csv
```
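The chunk pattern generalizes to a loop. A sketch that assumes a dense integer `id` column; the 50000 upper bound is illustrative:
```bash
STEP=10000
for start in $(seq 1 "$STEP" 50000); do
  end=$((start + STEP - 1))
  ./dp-cli export-csv large_table --where "id BETWEEN $start AND $end" \
    --output "chunk_${start}.csv"
done
```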
### Debug Mode
Add debug output to any command:
```bash
# Run unbuffered so output appears immediately
PYTHONPATH=. python -u cli.py list-tables
# Verbose output
./dp-cli query "SELECT * FROM users" 2>&1 | tee debug.log
```
### Performance Tips
1. **Use appropriate limits** for large datasets
2. **Filter early** with WHERE clauses
3. **Export to files** instead of console for large results
4. **Use specific columns** in queries instead of SELECT *
5. **Batch operations** for multiple tables
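
Put together, a single command that applies the first four tips might look like this (column and table names are illustrative):
```bash
# Specific columns, early filter, explicit limit, output to a file
./dp-cli query "SELECT id, username, email FROM users WHERE is_active = true LIMIT 500" > active_users.txt
```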
## Integration with Other Tools
### With curl (API testing)
```bash
# Export data and POST to API
./dp-cli export-csv users --limit 100 | curl -X POST -H "Content-Type: text/csv" --data-binary @- https://api.example.com/import
```
### With jq (JSON processing)
```bash
# If you modify CLI to output JSON
./dp-cli query "SELECT row_to_json(users) FROM users LIMIT 5" | jq '.'
```
### With cron (Scheduled tasks)
```bash
# Add to crontab for daily backups
0 2 * * * cd /path/to/da.dp-mcp && ./dp-cli backup-table users --bucket daily-$(date +\%Y\%m\%d)
```
For more advanced usage and integration examples, see the main [README.md](../README.md) and [API Reference](API_REFERENCE.md).