# RCT Detector Platform - Ultralytics MCP Server
> **Advanced AI-powered object detection platform with intelligent dataset upload, custom model training, and MCP integration for N8N automation.**
A comprehensive Model Context Protocol (MCP) server that seamlessly integrates Ultralytics YOLO models with N8N workflows, providing a complete AI-powered computer vision solution with 10GB dataset upload support and intelligent background processing.
[Docker](https://www.docker.com/) · [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) · [Streamlit](https://streamlit.io/) · [N8N](https://n8n.io/)
## Key Features
### Core Capabilities
- **Advanced AI Detection**: YOLO-based object detection and analysis
- **Smart Dataset Upload**: 10GB limit with intelligent ZIP structure detection
- **Custom Model Training**: Train your own models with any YOLO dataset
- **YOLO11 Model Variants**: Choose from nano/small/medium/large/x-large base models
- **GPU Acceleration**: NVIDIA CUDA support for fast training and inference
- **Web Interface**: Streamlit dashboard
- **Real-time Monitoring**: Live GPU stats and training progress
- **MCP Integration**: Connect with N8N for workflow automation
- **Background Processing**: Stable upload handling for large files
## Quick Start
### One-Command Setup
**For Windows users:**
```bash
setup.bat
```
**For Linux/Mac users:**
```bash
chmod +x setup.sh
./setup.sh
```
**Manual setup:**
```bash
docker-compose up --build -d
```
### Access the Platform
- **Main Interface**: http://localhost:8501
- **TensorBoard**: http://localhost:6006
- **MCP Server**: http://localhost:8092
- **Jupyter**: http://localhost:8888
## Requirements
- **Docker & Docker Compose**
- **NVIDIA Docker Runtime** (for GPU support)
- **8GB+ RAM** recommended
- **50GB+ free disk space**
## Dataset Upload
### Supported ZIP Structures
The platform automatically detects and organizes various ZIP structures:
```
✅ Structure 1 (Flat):
dataset.zip
├── data.yaml
├── images/
│   ├── img1.jpg
│   └── img2.jpg
└── labels/
    ├── img1.txt
    └── img2.txt

✅ Structure 2 (Nested):
dataset.zip
└── my_dataset/
    ├── data.yaml
    ├── images/
    │   ├── train/
    │   └── val/
    └── labels/
        ├── train/
        └── val/

✅ Structure 3 (Split folders):
dataset.zip
├── data.yaml
├── train/
│   ├── images/
│   └── labels/
└── val/
    ├── images/
    └── labels/
```
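A Structure 1 (flat) archive can be assembled directly from the shell. This is a minimal sketch: the class list, file names, and single-class `data.yaml` below are placeholders to substitute with your own data.

```shell
# Sketch: build a Structure 1 (flat) dataset ZIP.
# Class names and file names are placeholders -- substitute your own.
mkdir -p images labels
touch images/img1.jpg labels/img1.txt   # stand-ins for real images/annotations
cat > data.yaml <<'EOF'
path: .
train: images
val: images
nc: 1
names: ['object']
EOF
# Python's zipfile CLI avoids depending on a system `zip` binary
python3 -m zipfile -c dataset.zip data.yaml images labels
python3 -m zipfile -l dataset.zip   # list the archive contents
```

The resulting `dataset.zip` matches the flat layout the uploader detects automatically.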
### Upload Process
1. Navigate to **Training** page
2. Click **Upload Custom Dataset**
3. Select your ZIP file (up to 10GB)
4. Enter dataset name
5. Click **Upload Dataset**
6. **Do NOT refresh** during processing
7. Wait for completion message
## Available Services
| Service | Port | Description | Status |
|---------|------|-------------|--------|
| Streamlit Dashboard | 8501 | Interactive YOLO model interface | ✅ Ready |
| MCP Server | 8092 | N8N integration endpoint | ✅ Ready |
| TensorBoard | 6006 | Training metrics visualization | ✅ Ready |
| Jupyter Lab | 8888 | Development environment | ✅ Ready |
## Available MCP Tools
Our MCP server provides 7 specialized tools for AI workflows:
1. **`detect_objects`** - Real-time object detection in images
2. **`train_model`** - Custom YOLO model training
3. **`evaluate_model`** - Model performance assessment
4. **`predict_batch`** - Batch processing for multiple images
5. **`export_model`** - Model format conversion (ONNX, TensorRT, etc.)
6. **`benchmark_model`** - Performance benchmarking
7. **`analyze_dataset`** - Dataset statistics and validation
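For a quick smoke test outside N8N, a tool call can be posted over HTTP. The JSON-RPC envelope, the `/messages` path, and the `image` argument name below are assumptions modeled on common MCP SSE servers; verify them against `src/server.js` before relying on this.

```shell
# Assumed JSON-RPC envelope for an MCP tools/call request; the endpoint
# path and argument names are illustrative -- check src/server.js.
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"detect_objects","arguments":{"image":"/workspace/test.jpg"}}}'
echo "$PAYLOAD"
# With the stack running, send it like this:
# curl -s -X POST http://localhost:8092/messages -H 'Content-Type: application/json' -d "$PAYLOAD"
```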
## N8N Integration
Connect to N8N using our MCP server:
1. **Server Endpoint**: `http://localhost:8092`
2. **Transport**: Server-Sent Events (SSE)
3. **Health Check**: `http://localhost:8092/health`
### Example N8N Workflow
```json
{
"mcp_connection": {
"transport": "sse",
"endpoint": "http://localhost:8092/sse"
}
}
```
## Project Structure
```
ultralytics_mcp_server/
├── docker-compose.yml        # Orchestration configuration
├── Dockerfile.ultralytics    # CUDA-enabled Ultralytics container
├── Dockerfile.mcp-connector  # Node.js MCP server container
├── src/
│   └── server.js             # MCP server implementation
├── main_dashboard.py         # Streamlit main interface
├── pages/                    # Streamlit multi-page app
│   ├── train.py              # Model training interface
│   └── inference.py          # Inference interface
├── startup.sh                # Container initialization script
├── .dockerignore             # Build optimization
└── README.md                 # This documentation
```
## Configuration
### Environment Variables
- `CUDA_VISIBLE_DEVICES` - GPU device selection
- `STREAMLIT_PORT` - Streamlit service port (default: 8501)
- `MCP_PORT` - MCP server port (default: 8092)
- `TENSORBOARD_PORT` - TensorBoard port (default: 6006)
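If `docker-compose.yml` interpolates these variables (e.g. `${MCP_PORT}`), they can be overridden per launch from the shell; a sketch:

```shell
# Override GPU selection and service ports for one launch.
# These take effect only if docker-compose.yml references the variables.
export CUDA_VISIBLE_DEVICES=0   # expose only the first GPU
export MCP_PORT=8092
export STREAMLIT_PORT=8501
echo "GPU=$CUDA_VISIBLE_DEVICES MCP=$MCP_PORT STREAMLIT=$STREAMLIT_PORT"
# docker-compose up -d
```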
### Custom Configuration
Edit `docker-compose.yml` to customize:
- Port mappings
- Volume mounts
- Environment variables
- Resource limits
## Usage Examples
### Object Detection via Streamlit
1. Navigate to http://localhost:8501
2. Upload an image or video
3. Select YOLO model variant and confidence threshold
4. Run inference and view annotated results
### Training Custom Models
1. Go to the **Training** page in the Streamlit interface
2. Upload a custom dataset or select a built-in one
3. Choose a **YOLO11 Model Variant** (nano/small/medium/large/x-large)
4. Configure training parameters (epochs, batch size, image size)
5. Click **Start Training** and monitor real-time progress with live GPU stats
6. Trained models save automatically to the workspace for later use

**Model Variant Selection:**
- **yolo11n.pt** - Nano: fastest training, good for testing (2.6M params)
- **yolo11s.pt** - Small: balanced speed and accuracy (9.4M params)
- **yolo11m.pt** - Medium: better accuracy (20.1M params)
- **yolo11l.pt** - Large: high accuracy (25.3M params)
- **yolo11x.pt** - X-Large: highest accuracy (56.9M params)
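The same run can also be launched from the command line with the Ultralytics `yolo` CLI inside the container. The dataset path below is illustrative; point it at your uploaded dataset's `data.yaml`.

```shell
# Mirror the UI training parameters with the Ultralytics CLI.
# /datasets/my_dataset/data.yaml is a placeholder path -- adjust to your layout.
CMD="yolo detect train data=/datasets/my_dataset/data.yaml model=yolo11n.pt epochs=50 batch=16 imgsz=640"
echo "docker exec ultralytics-container $CMD"
# Execute it once the stack is running:
# docker exec ultralytics-container $CMD
```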
### N8N Automation
1. Create N8N workflow
2. Add MCP connector node
3. Configure endpoint: `http://localhost:8092`
4. Use available tools for automation
## Monitoring & Debugging
### Container Status
```bash
docker ps
docker-compose logs ultralytics-container
docker-compose logs mcp-connector-container
```
### Health Checks
```bash
# MCP Server
curl http://localhost:8092/health
# Streamlit
curl http://localhost:8501/_stcore/health
# TensorBoard
curl http://localhost:6006
```
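The three checks above can be looped in one script; `curl` reports status `000` for a service that is not reachable, so a non-`2xx` code pinpoints which container to inspect.

```shell
# Probe each service endpoint and print its HTTP status code.
for url in http://localhost:8092/health http://localhost:8501/_stcore/health http://localhost:6006; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 "$url")
  echo "$url -> $code"
done
```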
## Restart & Maintenance
### Restart Services
```bash
docker-compose restart
```
### Update & Rebuild
```bash
docker-compose down
docker-compose up --build -d
```
### Clean Reset
```bash
docker-compose down
docker system prune -f
docker-compose up --build -d
```
## Performance Optimization
- **GPU Memory**: Automatically managed by CUDA runtime
- **Batch Processing**: Optimized for multiple image inference
- **Model Caching**: Pre-loaded models for faster response
- **Multi-threading**: Concurrent request handling
## Troubleshooting
### Common Issues
**Container Restart Loop**
```bash
# Check logs
docker-compose logs ultralytics-container
# Restart with rebuild
docker-compose down
docker-compose up --build -d
```
**Streamlit Not Loading**
```bash
# Verify container status
docker ps
# Check if files are copied correctly
docker exec ultralytics-container ls -la /ultralytics/
```
**GPU Not Detected**
```bash
# Check NVIDIA drivers
nvidia-smi
# Verify CUDA in container
docker exec ultralytics-container nvidia-smi
```
## Development
### Local Development Setup
1. Clone repository
2. Install dependencies: `npm install` (for MCP server)
3. Set up Python environment for Streamlit
4. Run services individually for debugging
### Adding New MCP Tools
1. Edit `src/server.js`
2. Add tool definition in `tools` array
3. Implement handler in `handleToolCall`
4. Test with N8N integration
## Contributing
1. Fork the repository
2. Create feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to branch (`git push origin feature/amazing-feature`)
5. Open Pull Request
## License
This project is licensed under the AGPL-3.0 License - see the [Ultralytics License](https://ultralytics.com/license) for details.
## Acknowledgments
- **Ultralytics** - For the amazing YOLO implementation
- **N8N** - For the workflow automation platform
- **Streamlit** - For the beautiful web interface framework
- **NVIDIA** - For CUDA support and GPU acceleration
## Support
- **Issues**: [GitHub Issues](https://github.com/MetehanYasar11/ultralytics_mcp_server/issues)
- **Discussions**: [GitHub Discussions](https://github.com/MetehanYasar11/ultralytics_mcp_server/discussions)
- **Contact**: Create an issue for support
---
**Made with ❤️ for the AI Community**
> **Ready to revolutionize your computer vision workflows? Start with `docker-compose up -d`!**