SensorMCP Server
A SensorMCP Model Context Protocol (MCP) Server that enables automated dataset creation and custom object detection model training through natural language interactions. This project integrates computer vision capabilities with Large Language Models using the MCP standard.
About
SensorMCP Server combines the power of foundation models (like GroundedSAM) with custom model training (YOLOv8) to create a seamless workflow for object detection. Using the Model Context Protocol, it enables LLMs to:
Automatically label images using foundation models
Create custom object detection datasets
Train specialized detection models
Download images from Unsplash for training data
The Model Context Protocol (MCP) enables seamless integration between LLMs and external tools, making this ideal for AI-powered computer vision workflows.
Features
Foundation Model Integration: Uses GroundedSAM for automatic image labeling
Custom Model Training: Fine-tune YOLOv8 models on your specific objects
Image Data Management: Download images from Unsplash or import local images
Ontology Definition: Define custom object classes through natural language
MCP Protocol: Native integration with LLM workflows and chat interfaces
Fixed Data Structure: Organized directory layout for reproducible workflows
Installation
Prerequisites
uv for package management
Python 3.13+ (uv python install 3.13)
CUDA-compatible GPU (recommended for training)
Setup
Clone the repository:
Install dependencies:
Set up environment variables (create a .env file):
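A minimal .env might contain just the Unsplash key. The variable name UNSPLASH_ACCESS_KEY is an assumption; check the project's example configuration for the exact name:

```
UNSPLASH_ACCESS_KEY=your_unsplash_access_key
```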
Usage
Running the MCP Server
For MCP integration (recommended):
For standalone web server:
MCP Configuration
Add to your MCP client configuration:
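As an illustration, a Claude Desktop-style client entry might look like the following. The command, args, and path are assumptions for a uv-managed checkout; adjust them to where you cloned the repository and how the project's entry point is named:

```json
{
  "mcpServers": {
    "sensormcp": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/sensormcp-server", "server.py"]
    }
  }
}
```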
Available MCP Tools
list_available_models() - View supported base and target models
define_ontology(objects_list) - Define object classes to detect
set_base_model(model_name) - Initialize foundation model for labeling
set_target_model(model_name) - Initialize target model for training
fetch_unsplash_images(query, max_images) - Download training images
import_images_from_folder(folder_path) - Import local images
label_images() - Auto-label images using the base model
train_model(epochs, device) - Train custom detection model
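The tools above are meant to be called in a particular order: define an ontology and models, gather images, label, then train. The following is an illustrative Python model of that workflow's state and ordering constraints, not the server's actual implementation:

```python
class PipelineState:
    """Illustrative model of the SensorMCP tool workflow (not the real server)."""

    def __init__(self):
        self.ontology = None       # set by define_ontology
        self.base_model = None     # set by set_base_model
        self.target_model = None   # set by set_target_model
        self.images = []           # filled by fetch/import tools
        self.labeled = False       # set by label_images

    def define_ontology(self, objects_list):
        # Accepts a comma-separated string such as "tiger, elephant, zebra".
        self.ontology = [o.strip() for o in objects_list.split(",")]
        return self.ontology

    def set_base_model(self, name):
        self.base_model = name

    def set_target_model(self, name):
        self.target_model = name

    def fetch_unsplash_images(self, query, max_images):
        # Stand-in for the real download; just records placeholder paths.
        self.images = [f"{query}_{i}.jpg" for i in range(max_images)]
        return len(self.images)

    def label_images(self):
        # Auto-labeling needs an ontology, a base model, and images to label.
        if not (self.ontology and self.base_model and self.images):
            raise RuntimeError("define ontology, base model, and images first")
        self.labeled = True

    def train_model(self, epochs, device):
        if not (self.labeled and self.target_model):
            raise RuntimeError("label images and set a target model first")
        return {"model": self.target_model, "epochs": epochs, "device": device}

state = PipelineState()
state.define_ontology("tiger, elephant, zebra")
state.set_base_model("grounded_sam")
state.set_target_model("yolov8n.pt")
state.fetch_unsplash_images("wildlife animals", 50)
state.label_images()
result = state.train_model(epochs=100, device=0)
```

Calling label_images() or train_model() before their prerequisites raises an error, mirroring the intended tool ordering.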
Example Workflow
Through your MCP-enabled LLM interface:
Define what to detect: Define ontology for "tiger, elephant, zebra"
Set up models: Set base model to grounded_sam, then set target model to yolov8n.pt
Get training data: Fetch 50 images from Unsplash for "wildlife animals"
Create dataset: Label all images using the base model
Train custom model: Train model for 100 epochs on device 0
Project Structure
Supported Models
Base Models (for auto-labeling)
GroundedSAM: Foundation model for object detection and segmentation
Target Models (for training)
YOLOv8n.pt: Nano - fastest inference
YOLOv8s.pt: Small - balanced speed/accuracy
YOLOv8m.pt: Medium - higher accuracy
YOLOv8l.pt: Large - high accuracy
YOLOv8x.pt: Extra Large - highest accuracy
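The variant names differ only in a one-letter size suffix. A small helper (illustrative, not part of the server) can make the mapping from weights file to trade-off explicit:

```python
# Size suffix -> (tier, rough trade-off), following the list above.
YOLOV8_VARIANTS = {
    "n": ("Nano", "fastest inference"),
    "s": ("Small", "balanced speed/accuracy"),
    "m": ("Medium", "higher accuracy"),
    "l": ("Large", "high accuracy"),
    "x": ("Extra Large", "highest accuracy"),
}

def describe_target_model(weights: str) -> str:
    """Describe a YOLOv8 weights file name such as 'yolov8n.pt'."""
    suffix = weights.removeprefix("yolov8").removesuffix(".pt")
    tier, tradeoff = YOLOV8_VARIANTS[suffix]
    return f"{tier} - {tradeoff}"
```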
API Integration
Unsplash API
To use image download functionality:
Create an account at Unsplash Developers
Create a new application
Add your access key to the .env file
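Under the hood, an Unsplash photo search is a GET request to the public search/photos endpoint, with the access key passed as client_id. A minimal request builder, sketched from the documented API rather than taken from the server's code:

```python
from urllib.parse import urlencode

UNSPLASH_SEARCH_URL = "https://api.unsplash.com/search/photos"

def build_search_url(query: str, per_page: int, access_key: str) -> str:
    """Build the Unsplash photo-search URL for a query."""
    params = urlencode({
        "query": query,
        "per_page": per_page,
        "client_id": access_key,  # the access key from your .env file
    })
    return f"{UNSPLASH_SEARCH_URL}?{params}"
```

Each photo in the JSON response carries download URLs that the server can fetch into its dataset directory.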
Development
Running Tests
Code Formatting
Requirements
See pyproject.toml for full dependency list. Key dependencies:
mcp[cli] - Model Context Protocol
autodistill - Foundation model integration
torch & torchvision - Deep learning framework
ultralytics - YOLOv8 implementation
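As a rough sketch, the corresponding dependency section of pyproject.toml would resemble the following; any version pins are assumptions, so defer to the actual file:

```toml
[project]
dependencies = [
    "mcp[cli]",
    "autodistill",
    "torch",
    "torchvision",
    "ultralytics",
]
```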
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests for new functionality
Submit a pull request
Citation
If you use this code or data in your research, please cite our paper:
License
This project is licensed under the MIT License.
Contact
For questions about the zoo dataset mentioned in development, email yq@anysign.net.