Transforms Ultralytics YOLO operations into a RESTful API service and integrates with the n8n workflow automation platform. Provides programmatic access to training, validation, prediction, export, tracking, and benchmarking of YOLO models, with real-time monitoring of operations through Server-Sent Events (SSE).
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Ultralytics MCP Server detect objects in this surveillance footage and list all found items".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
🎯 RCT Detector Platform - Ultralytics MCP Server
Advanced AI-powered object detection platform with intelligent dataset upload, custom model training, and MCP integration for N8N automation.
A comprehensive Model Context Protocol (MCP) server that seamlessly integrates Ultralytics YOLO models with N8N workflows, providing a complete AI-powered computer vision solution with 10GB dataset upload support and intelligent background processing.
✨ Key Features
🎯 Core Capabilities
🔬 Advanced AI Detection: YOLO-based object detection and analysis
📦 Smart Dataset Upload: 10GB limit with intelligent ZIP structure detection
🎯 Custom Model Training: Train your own models with any YOLO dataset
🤖 YOLO11 Model Variants: Choose from nano/small/medium/large/x-large base models
⚡ GPU Acceleration: NVIDIA CUDA support for fast training/inference
🌐 Web Interface: Beautiful Streamlit dashboard
📊 Real-time Monitoring: Live GPU stats and training progress
🔗 MCP Integration: Connect with N8N for workflow automation
🛡️ Background Processing: Stable upload handling for large files
🚀 Quick Start
One-Command Setup
For Windows users:
```shell
setup.bat
```
For Linux/Mac users:
```shell
chmod +x setup.sh
./setup.sh
```
Manual setup:
```shell
docker-compose up --build -d
```
Access the Platform
🌐 Main Interface: http://localhost:8501
📊 TensorBoard: http://localhost:6006
🔗 MCP Server: http://localhost:8092
📓 Jupyter: http://localhost:8888
📋 Requirements
Docker & Docker Compose
NVIDIA Docker Runtime (for GPU support)
8GB+ RAM recommended
50GB+ free disk space
🎯 Dataset Upload
Supported ZIP Structures
The platform automatically detects and organizes various ZIP structures:
✅ Structure 1 (Flat):
```
dataset.zip
├── data.yaml
├── images/
│   ├── img1.jpg
│   └── img2.jpg
└── labels/
    ├── img1.txt
    └── img2.txt
```
✅ Structure 2 (Nested):
```
dataset.zip
└── my_dataset/
    ├── data.yaml
    ├── images/
    │   ├── train/
    │   └── val/
    └── labels/
        ├── train/
        └── val/
```
✅ Structure 3 (Split folders):
```
dataset.zip
├── data.yaml
├── train/
│   ├── images/
│   └── labels/
└── val/
    ├── images/
    └── labels/
```
Upload Process
Navigate to Training page
Click Upload Custom Dataset
Select your ZIP file (up to 10GB)
Enter dataset name
Click Upload Dataset
Do NOT refresh during processing
Wait for completion message
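The automatic ZIP-structure detection described above can be sketched as follows. `detect_structure` is a hypothetical helper operating on the archive's member paths; it illustrates the classification logic, not the platform's actual implementation:

```python
from typing import List

def detect_structure(names: List[str]) -> str:
    """Classify a dataset ZIP layout from its member paths.

    Hypothetical helper mirroring the three supported layouts:
    "flat", "nested" (single wrapper folder), and "split" (train/val).
    """
    top_dirs = {n.split("/", 1)[0] for n in names if n.strip("/")}
    if "data.yaml" in names:
        # data.yaml sits at the archive root: flat or split layout
        if any(n.startswith(("train/", "val/")) for n in names):
            return "split"
        return "flat"
    if len(top_dirs) == 1:
        # Everything lives under a single wrapper folder
        return "nested"
    return "unknown"

print(detect_structure(["data.yaml", "images/img1.jpg", "labels/img1.txt"]))  # prints: flat
```

A real detector would also validate that `data.yaml` and matching image/label pairs actually exist before organizing the dataset.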
🎮 Available Services

| Service | Port | Description | Status |
|---------|------|-------------|--------|
| Streamlit Dashboard | 8501 | Interactive YOLO model interface | ✅ Ready |
| MCP Server | 8092 | N8N integration endpoint | ✅ Ready |
| TensorBoard | 6006 | Training metrics visualization | ✅ Ready |
| Jupyter Lab | 8888 | Development environment | ✅ Ready |
🛠️ MCP Tools Available
Our MCP server provides 7 specialized tools for AI workflows:
detect_objects - Real-time object detection in images
train_model - Custom YOLO model training
evaluate_model - Model performance assessment
predict_batch - Batch processing for multiple images
export_model - Model format conversion (ONNX, TensorRT, etc.)
benchmark_model - Performance benchmarking
analyze_dataset - Dataset statistics and validation
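Conceptually, most of these tools wrap standard Ultralytics Python API calls. The sketch below shows one plausible correspondence; the mapping, model file, and image path are illustrative assumptions, not the server's actual `src/server.js` logic:

```python
# Hypothetical mapping from MCP tool names to the Ultralytics calls
# they would wrap; the real handlers live in src/server.js.
TOOL_METHODS = {
    "detect_objects": "YOLO.predict",
    "train_model": "YOLO.train",
    "evaluate_model": "YOLO.val",
    "predict_batch": "YOLO.predict",
    "export_model": "YOLO.export",
    "benchmark_model": "ultralytics.utils.benchmarks.benchmark",
    "analyze_dataset": "dataset statistics (custom)",
}

def detect_objects(image_path: str, conf: float = 0.25):
    """What the detect_objects tool does under the hood (sketch).

    Requires the ultralytics package and a model weights file.
    """
    from ultralytics import YOLO
    model = YOLO("yolo11n.pt")  # nano variant; any yolo11*.pt works
    return model.predict(image_path, conf=conf)
```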
🔗 N8N Integration
Connect to N8N using our MCP server:
Server Endpoint: http://localhost:8092
Transport: Server-Sent Events (SSE)
Health Check: http://localhost:8092/health
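For clients outside N8N, a tool invocation is a JSON-RPC 2.0 `tools/call` message as defined by the Model Context Protocol. A hedged sketch (the `call_tool` POST assumes the server accepts JSON-RPC over HTTP alongside its SSE transport, which is an assumption, not documented behavior):

```python
import json
import urllib.request

MCP_ENDPOINT = "http://localhost:8092"

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call message (MCP standard shape)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def call_tool(name: str, arguments: dict) -> dict:
    """POST the request and return the parsed response (assumed endpoint)."""
    payload = json.dumps(make_tool_call(name, arguments)).encode()
    req = urllib.request.Request(
        MCP_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```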
Example N8N Workflow
```json
{
  "mcp_connection": {
    "transport": "sse",
    "endpoint": "http://localhost:8092/sse"
  }
}
```
📁 Project Structure
```
ultralytics_mcp_server/
├── 🐳 docker-compose.yml          # Orchestration configuration
├── 🔧 Dockerfile.ultralytics      # CUDA-enabled Ultralytics container
├── 🔧 Dockerfile.mcp-connector    # Node.js MCP server container
├── 📦 src/
│   └── server.js                  # MCP server implementation
├── 🎨 main_dashboard.py           # Streamlit main interface
├── 📁 pages/                      # Streamlit multi-page app
│   ├── train.py                   # Model training interface
│   └── inference.py               # Inference interface
├── ⚡ startup.sh                  # Container initialization script
├── 📄 .dockerignore               # Build optimization
└── 📖 README.md                   # This documentation
```
🔧 Configuration
Environment Variables
`CUDA_VISIBLE_DEVICES` - GPU device selection
`STREAMLIT_PORT` - Streamlit service port (default: 8501)
`MCP_PORT` - MCP server port (default: 8092)
`TENSORBOARD_PORT` - TensorBoard port (default: 6006)
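As a sketch of how a service could consume these variables, with defaults matching the documented ports (the `service_ports` helper is hypothetical, not part of the codebase):

```python
import os

def service_ports() -> dict:
    """Read service ports from the environment, falling back to the
    documented defaults. Illustrative helper, not the platform's code."""
    return {
        "streamlit": int(os.environ.get("STREAMLIT_PORT", 8501)),
        "mcp": int(os.environ.get("MCP_PORT", 8092)),
        "tensorboard": int(os.environ.get("TENSORBOARD_PORT", 6006)),
    }
```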
Custom Configuration
Edit docker-compose.yml to customize:
Port mappings
Volume mounts
Environment variables
Resource limits
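For example, a `docker-compose.override.yml` along these lines could adjust the GPU device, ports, and memory limit without touching the main file (the service name `ultralytics-container` is taken from the log commands elsewhere in this README; the exact keys depend on your compose version):

```yaml
# docker-compose.override.yml - illustrative sketch, not shipped with the repo
services:
  ultralytics-container:
    environment:
      - CUDA_VISIBLE_DEVICES=0
      - STREAMLIT_PORT=8501
    ports:
      - "8501:8501"
    deploy:
      resources:
        limits:
          memory: 16g
```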
📚 Usage Examples
Object Detection via Streamlit
Navigate to http://localhost:8501
Upload an image or video
Select YOLO model variant and confidence threshold
Run inference and view annotated results
Training Custom Models with YOLO11 Variants
Go to Training page in Streamlit
Upload custom dataset or select built-in datasets
Choose YOLO11 Model Variant:
yolo11n (yolo11n.pt): Nano - fastest training, good for testing (2.6M parameters)
yolo11s (yolo11s.pt): Small - balanced performance (9.4M parameters)
yolo11m (yolo11m.pt): Medium - better accuracy (20.1M parameters)
yolo11l (yolo11l.pt): Large - high accuracy (25.3M parameters)
yolo11x (yolo11x.pt): X-Large - maximum accuracy (56.9M parameters)
Configure epochs, batch size, image size
Monitor real-time training progress with live GPU stats
Models automatically save to workspace
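The training steps above boil down to a standard Ultralytics call. A hedged sketch, assuming the ultralytics package and a prepared `data.yaml` (the `make_train_config` helper is illustrative, not the platform's code):

```python
def make_train_config(variant: str = "yolo11n", data: str = "data.yaml",
                      epochs: int = 100, batch: int = 16, imgsz: int = 640) -> dict:
    """Collect the training options the UI exposes into one dict."""
    return {
        "model": f"{variant}.pt",
        "data": data,
        "epochs": epochs,
        "batch": batch,
        "imgsz": imgsz,
    }

def train_custom_model(cfg: dict):
    """Run training with the standard Ultralytics API.

    Requires the ultralytics package and a valid dataset YAML.
    """
    from ultralytics import YOLO
    model = YOLO(cfg.pop("model"))
    return model.train(**cfg)  # metrics also stream to TensorBoard (port 6006)
```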
N8N Automation
Create N8N workflow
Add MCP connector node
Configure endpoint: http://localhost:8092
Use available tools for automation
🔍 Monitoring & Debugging
Container Status
```shell
docker ps
docker-compose logs ultralytics-container
docker-compose logs mcp-connector-container
```
Health Checks
```shell
# MCP Server
curl http://localhost:8092/health

# Streamlit
curl http://localhost:8501/_stcore/health

# TensorBoard
curl http://localhost:6006
```
🔄 Restart & Maintenance
Restart Services
```shell
docker-compose restart
```
Update & Rebuild
```shell
docker-compose down
docker-compose up --build -d
```
Clean Reset
```shell
docker-compose down
docker system prune -f
docker-compose up --build -d
```
🎯 Performance Optimization
GPU Memory: Automatically managed by CUDA runtime
Batch Processing: Optimized for multiple image inference
Model Caching: Pre-loaded models for faster response
Multi-threading: Concurrent request handling
🚨 Troubleshooting
Common Issues
Container Restart Loop
```shell
# Check logs
docker-compose logs ultralytics-container

# Restart with rebuild
docker-compose down
docker-compose up --build -d
```
Streamlit Not Loading
```shell
# Verify container status
docker ps

# Check that files were copied correctly
docker exec ultralytics-container ls -la /ultralytics/
```
GPU Not Detected
```shell
# Check NVIDIA drivers
nvidia-smi

# Verify CUDA inside the container
docker exec ultralytics-container nvidia-smi
```
🔧 Development
Local Development Setup
Clone the repository
Install dependencies: `npm install` (for the MCP server)
Set up a Python environment for Streamlit
Run services individually for debugging
Adding New MCP Tools
Edit `src/server.js`
Add a tool definition in the `tools` array
Implement a handler in `handleToolCall`
Test with N8N integration
🤝 Contributing
Fork the repository
Create a feature branch (`git checkout -b feature/amazing-feature`)
Commit changes (`git commit -m 'Add amazing feature'`)
Push to the branch (`git push origin feature/amazing-feature`)
Open a Pull Request
📄 License
This project is licensed under the AGPL-3.0 License - see the Ultralytics License for details.
🙏 Acknowledgments
Ultralytics - For the amazing YOLO implementation
N8N - For the workflow automation platform
Streamlit - For the beautiful web interface framework
NVIDIA - For CUDA support and GPU acceleration
📞 Support
🐛 Issues: GitHub Issues
💬 Discussions: GitHub Discussions
📧 Contact: Create an issue for support
Made with ❤️ for the AI Community
🚀 Ready to revolutionize your computer vision workflows? Start with the one-command setup above.