# 🎯 RCT Detector Platform - Ultralytics MCP Server

> **Advanced AI-powered object detection platform with intelligent dataset upload, custom model training, and MCP integration for N8N automation.**

A comprehensive Model Context Protocol (MCP) server that integrates Ultralytics YOLO models with N8N workflows, providing a complete AI-powered computer vision solution with 10GB dataset upload support and intelligent background processing.

[![Docker](https://img.shields.io/badge/Docker-Ready-blue.svg)](https://www.docker.com/)
[![CUDA](https://img.shields.io/badge/CUDA-12.4.1-green.svg)](https://developer.nvidia.com/cuda-downloads)
[![Streamlit](https://img.shields.io/badge/Streamlit-UI-red.svg)](https://streamlit.io/)
[![N8N](https://img.shields.io/badge/N8N-Integration-orange.svg)](https://n8n.io/)

## ✨ Key Features

### 🎯 Core Capabilities

- **🔬 Advanced AI Detection**: YOLO-based object detection and analysis
- **📦 Smart Dataset Upload**: 10GB limit with intelligent ZIP structure detection
- **🎯 Custom Model Training**: Train your own models with any YOLO dataset
- **🤖 YOLO11 Model Variants**: Choose from nano/small/medium/large/x-large base models
- **⚡ GPU Acceleration**: NVIDIA CUDA support for fast training and inference
- **🌐 Web Interface**: Streamlit dashboard
- **📊 Real-time Monitoring**: Live GPU stats and training progress
- **🔌 MCP Integration**: Connect with N8N for workflow automation
- **🛡️ Background Processing**: Stable upload handling for large files

## Quick Start

### One-Command Setup

**For Windows users:**

```bash
setup.bat
```

**For Linux/Mac users:**

```bash
chmod +x setup.sh
./setup.sh
```

**Manual setup:**

```bash
docker-compose up --build -d
```

### Access the Platform

- **🌐 Main Interface**: http://localhost:8501
- **📊 TensorBoard**: http://localhost:6006
- **🔌 MCP Server**: http://localhost:8092
- **📓 Jupyter**: http://localhost:8888

## Requirements

- **Docker &
Docker Compose**
- **NVIDIA Docker Runtime** (for GPU support)
- **8GB+ RAM** recommended
- **50GB+ free disk space**

## 🎯 Dataset Upload

### Supported ZIP Structures

The platform automatically detects and organizes various ZIP structures:

```
✅ Structure 1 (Flat):
dataset.zip
├── data.yaml
├── images/
│   ├── img1.jpg
│   └── img2.jpg
└── labels/
    ├── img1.txt
    └── img2.txt

✅ Structure 2 (Nested):
dataset.zip
└── my_dataset/
    ├── data.yaml
    ├── images/
    │   ├── train/
    │   └── val/
    └── labels/
        ├── train/
        └── val/

✅ Structure 3 (Split folders):
dataset.zip
├── data.yaml
├── train/
│   ├── images/
│   └── labels/
└── val/
    ├── images/
    └── labels/
```

### Upload Process

1. Navigate to the **Training** page
2. Click **Upload Custom Dataset**
3. Select your ZIP file (up to 10GB)
4. Enter a dataset name
5. Click **Upload Dataset**
6. **Do NOT refresh** during processing
7. Wait for the completion message

## 🎮 Available Services

| Service | Port | Description | Status |
|---------|------|-------------|--------|
| Streamlit Dashboard | 8501 | Interactive YOLO model interface | ✅ Ready |
| MCP Server | 8092 | N8N integration endpoint | ✅ Ready |
| TensorBoard | 6006 | Training metrics visualization | ✅ Ready |
| Jupyter Lab | 8888 | Development environment | ✅ Ready |

## 🛠️ MCP Tools Available

Our MCP server provides 7 specialized tools for AI workflows:

1. **`detect_objects`** - Real-time object detection in images
2. **`train_model`** - Custom YOLO model training
3. **`evaluate_model`** - Model performance assessment
4. **`predict_batch`** - Batch processing for multiple images
5. **`export_model`** - Model format conversion (ONNX, TensorRT, etc.)
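As an illustration, a call to one of these tools can be assembled as a JSON-RPC 2.0 `tools/call` request, which is the framing MCP uses on the wire. This is a minimal sketch; the argument names (`model`, `format`) are hypothetical and should be checked against the server's actual tool schema:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Assemble a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical arguments -- consult the server's tool schema for the real names.
request = build_tool_call("export_model", {"model": "runs/train/best.pt", "format": "onnx"})
```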
6. **`benchmark_model`** - Performance benchmarking
7. **`analyze_dataset`** - Dataset statistics and validation

## 🔌 N8N Integration

Connect to N8N using our MCP server:

1. **Server Endpoint**: `http://localhost:8092`
2. **Transport**: Server-Sent Events (SSE)
3. **Health Check**: `http://localhost:8092/health`

### Example N8N Workflow

```json
{
  "mcp_connection": {
    "transport": "sse",
    "endpoint": "http://localhost:8092/sse"
  }
}
```

## 📁 Project Structure

```
ultralytics_mcp_server/
├── 🐳 docker-compose.yml          # Orchestration configuration
├── 🔧 Dockerfile.ultralytics      # CUDA-enabled Ultralytics container
├── 🔧 Dockerfile.mcp-connector    # Node.js MCP server container
├── 📦 src/
│   └── server.js                  # MCP server implementation
├── 🎨 main_dashboard.py           # Streamlit main interface
├── 📄 pages/                      # Streamlit multi-page app
│   ├── train.py                   # Model training interface
│   └── inference.py               # Inference interface
├── ⚡ startup.sh                  # Container initialization script
├── 📋 .dockerignore               # Build optimization
└── 📖 README.md                   # This documentation
```

## 🔧 Configuration

### Environment Variables

- `CUDA_VISIBLE_DEVICES` - GPU device selection
- `STREAMLIT_PORT` - Streamlit service port (default: 8501)
- `MCP_PORT` - MCP server port (default: 8092)
- `TENSORBOARD_PORT` - TensorBoard port (default: 6006)

### Custom Configuration

Edit `docker-compose.yml` to customize:

- Port mappings
- Volume mounts
- Environment variables
- Resource limits

## 📊 Usage Examples

### Object Detection via Streamlit

1. Navigate to http://localhost:8501
2. Upload an image or video
3. Select YOLO model variant and confidence threshold
4. Run inference and view annotated results

### Training Custom Models with YOLO11 Variants

1. Go to the **Training** page in Streamlit
2. Upload a custom dataset or select built-in datasets
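If you are preparing a custom ZIP, a quick pre-flight check of the flat layout (Structure 1 above) can catch missing label files before a 10GB upload. This is an illustrative sketch, not part of the platform:

```python
from pathlib import Path

def check_flat_dataset(root: str) -> list[str]:
    """Pre-flight check for the flat layout (Structure 1): data.yaml plus
    parallel images/ and labels/ directories with matching file stems."""
    base = Path(root)
    problems = []
    if not (base / "data.yaml").is_file():
        problems.append("missing data.yaml")
    img_dir, lbl_dir = base / "images", base / "labels"
    if not img_dir.is_dir():
        problems.append("missing images/ directory")
        return problems
    images = {p.stem for p in img_dir.iterdir() if p.is_file()}
    labels = {p.stem for p in lbl_dir.glob("*.txt")} if lbl_dir.is_dir() else set()
    # Every image needs a same-named YOLO annotation file in labels/.
    for stem in sorted(images - labels):
        problems.append(f"no label file for image '{stem}'")
    return problems
```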
3. Choose a **YOLO11 Model Variant**:
   - **yolo11n.pt** - Nano: fastest training, good for testing (1.9M parameters)
   - **yolo11s.pt** - Small: balanced performance (9.1M parameters)
   - **yolo11m.pt** - Medium: better accuracy (20.1M parameters)
   - **yolo11l.pt** - Large: high accuracy (25.3M parameters)
   - **yolo11x.pt** - X-Large: maximum accuracy (43.9M parameters)
4. Configure training parameters (epochs, batch size, image size)
5. Click **Start Training** and monitor real-time progress with live GPU stats
6. Models automatically save to the workspace for later use

### N8N Automation

1. Create an N8N workflow
2. Add an MCP connector node
3. Configure endpoint: `http://localhost:8092`
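Because the transport is Server-Sent Events, raw events from `http://localhost:8092/sse` can be inspected with a minimal parser sketch like the one below when debugging outside N8N. It is illustrative only; a real client (or N8N's MCP connector) handles this framing for you:

```python
def parse_sse_event(raw: str) -> dict:
    """Parse one Server-Sent Events block ('event:'/'data:' lines) into a dict.
    Minimal sketch of SSE framing; use a proper SSE library in production."""
    event = {"event": "message", "data": []}
    for line in raw.splitlines():
        if line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"].append(line[len("data:"):].strip())
    event["data"] = "\n".join(event["data"])
    return event
```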
4. Use available tools for automation

## 🔍 Monitoring & Debugging

### Container Status

```bash
docker ps
docker-compose logs ultralytics-container
docker-compose logs mcp-connector-container
```

### Health Checks

```bash
# MCP Server
curl http://localhost:8092/health

# Streamlit
curl http://localhost:8501/_stcore/health

# TensorBoard
curl http://localhost:6006
```

## 🔄 Restart & Maintenance

### Restart Services

```bash
docker-compose restart
```

### Update & Rebuild

```bash
docker-compose down
docker-compose up --build -d
```

### Clean Reset

```bash
docker-compose down
docker system prune -f
docker-compose up --build -d
```

## 🎯 Performance Optimization

- **GPU Memory**: Automatically managed by the CUDA runtime
- **Batch Processing**: Optimized for multiple image inference
- **Model Caching**: Pre-loaded models for faster response
- **Multi-threading**: Concurrent request handling

## 🚨 Troubleshooting

### Common Issues

**Container Restart Loop**

```bash
# Check logs
docker-compose logs ultralytics-container

# Restart with rebuild
docker-compose down
docker-compose up --build -d
```

**Streamlit Not Loading**

```bash
# Verify container status
docker ps

# Check if files are copied correctly
docker exec ultralytics-container ls -la /ultralytics/
```

**GPU Not Detected**

```bash
# Check NVIDIA drivers
nvidia-smi

# Verify CUDA in container
docker exec ultralytics-container nvidia-smi
```

## 🔧 Development

### Local Development Setup

1. Clone the repository
2. Install dependencies: `npm install` (for the MCP server)
3. Set up a Python environment for Streamlit
4. Run services individually for debugging

### Adding New MCP Tools

1. Edit `src/server.js`
2. Add the tool definition to the `tools` array
3. Implement the handler in `handleToolCall`
4. Test with the N8N integration

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## 📄 License

This project is licensed under the AGPL-3.0 License - see the [Ultralytics License](https://ultralytics.com/license) for details.

## 🙏 Acknowledgments

- **Ultralytics** - For the amazing YOLO implementation
- **N8N** - For the workflow automation platform
- **Streamlit** - For the beautiful web interface framework
- **NVIDIA** - For CUDA support and GPU acceleration

## 📞 Support

- 🐛 **Issues**: [GitHub Issues](https://github.com/MetehanYasar11/ultralytics_mcp_server/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/MetehanYasar11/ultralytics_mcp_server/discussions)
- 📧 **Contact**: Create an issue for support

---

**Made with ❤️ for the AI Community**

> 🚀 **Ready to revolutionize your computer vision workflows? Start with `docker-compose up -d`!**
