Configure what a screen should sense using natural language. Generates a sensing profile and optionally pushes it to the device.
Uses Gemini AI to interpret a natural-language sensing intent and generate
a sensing profile that maps to the available on-device ML models (BlazeFace,
AgeGender, FER+, MoveNet, YAMNet, WhisperTiny, EfficientDet, YOLOv8-nano).
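As a rough sketch, the call shape implied by the examples below is (the interface name is illustrative, not part of the tool contract):

  interface ConfigureSensingParams {
    screen_id: string;    // MongoDB ObjectId of the target screen
    intent: string;       // natural-language description of what to sense
    auto_deploy: boolean; // push the generated profile to the device immediately
  }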
WHEN TO USE:
- Setting up a new screen to sense specific things (faces, vehicles, emotions, etc.)
- Changing what a screen detects based on venue type or business needs
- Configuring custom sensing for special events or campaigns
- Translating business intent into ML model configuration
RETURNS:
- data: The generated sensing profile with:
  - profile_name, profile_type, description
  - models: Array of ML model IDs to activate
  - classes: COCO classes to detect (for object detection models)
  - thresholds: Confidence and alert thresholds
  - observation_families: What types of observations will be produced
  - capture_interval_ms, report_interval_ms: Timing configuration
  - estimated_fps_impact: CPU cost estimate
  - data_fields_produced: All data fields the profile will generate
- reasoning: Why these models/classes were chosen
- deployment_status: 'generated' | 'pushed' | 'push_failed'
- metadata: { screen_id, auto_deploy, profile_id }
- suggested_next_queries: Follow-up actions
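For illustration, a response might look like the sketch below; every concrete value (model IDs, classes, thresholds, intervals) is invented for this example and will vary with the intent:

  {
    data: {
      profile_name: "lobby_foot_traffic",  // illustrative values throughout
      profile_type: "custom",
      description: "Counts people and tracks emotional reactions in the lobby",
      models: ["BlazeFace", "FER+", "MoveNet"],
      classes: ["person"],
      thresholds: { confidence: 0.6, alert: 0.8 },
      observation_families: ["presence", "emotion"],
      capture_interval_ms: 500,
      report_interval_ms: 5000,
      estimated_fps_impact: "low",
      data_fields_produced: ["person_count", "dwell_time", "dominant_emotion"]
    },
    reasoning: "BlazeFace and FER+ cover face detection and emotion; MoveNet adds pose for traffic flow.",
    deployment_status: "generated",
    metadata: { screen_id: "507f1f77bcf86cd799439011", auto_deploy: false, profile_id: "..." },
    suggested_next_queries: ["Deploy this profile to the screen", "Preview expected observations"]
  }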
EXAMPLE:
User: "Set up the lobby screen to detect foot traffic and emotions"
configure_sensing({
  screen_id: "507f1f77bcf86cd799439011",
  intent: "Detect foot traffic patterns, count people, and measure emotional reactions to displayed content",
  auto_deploy: false
})

User: "Configure this drive-through screen for vehicle counting"
configure_sensing({
  screen_id: "507f1f77bcf86cd799439011",
  intent: "Count vehicles in drive-through lane, detect vehicle types, measure queue length",
  auto_deploy: true
})
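A minimal client-side sketch of acting on the result, assuming a hypothetical callConfigureSensing wrapper that returns the payload shape shown under RETURNS:

  const result = await callConfigureSensing({
    screen_id: "507f1f77bcf86cd799439011",
    intent: "Count vehicles in drive-through lane",
    auto_deploy: true
  });
  if (result.deployment_status === "push_failed") {
    // the profile was generated but not delivered; retry or deploy manually
    console.warn("Push failed for profile", result.metadata.profile_id);
  }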