Design, test, and validate the architecture of image/video processing and machine learning methods to detect and understand a person's affective state by analyzing face images captured by an in-vehicle camera. Specifically, this includes:
- Real-time face detection/localization and automatic annotation.
- Dynamic face localization through a sequence of images recorded over time, with spatial-temporal feature extraction to understand facial expression and motion.
- Multi-modal fusion to reason about and understand human behavior and state, particularly for the driver-drowsiness application.
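As a simple illustration of the fusion step above, the sketch below combines two per-frame cues into a single drowsiness score via late (decision-level) fusion. All names, thresholds, and weights here are illustrative assumptions, not part of the project specification; a PERCLOS-style eye-closure cue and a head-pitch cue are used only as plausible stand-ins.

```python
import numpy as np

# Hypothetical per-frame cues over a 10-frame window (illustrative values):
#   eye_openness: 1.0 = fully open, 0.0 = closed
#   head_pitch_deg: downward head tilt in degrees
eye_openness = np.array([0.9, 0.8, 0.3, 0.2, 0.1, 0.1, 0.2, 0.1, 0.3, 0.2])
head_pitch_deg = np.array([2, 3, 10, 12, 15, 14, 16, 15, 13, 14], dtype=float)

def drowsiness_score(eye_openness, head_pitch_deg,
                     perclos_thresh=0.5, pitch_thresh=10.0,
                     w_eye=0.7, w_pitch=0.3):
    """Late fusion: combine two normalized cues into one score in [0, 1]."""
    # PERCLOS-style cue: fraction of frames with eyes mostly closed
    perclos = np.mean(eye_openness < perclos_thresh)
    # Head-pose cue: fraction of frames with pronounced downward tilt
    head_down = np.mean(head_pitch_deg > pitch_thresh)
    # Weighted sum of the two cues (weights are assumed, not tuned)
    return w_eye * perclos + w_pitch * head_down

score = drowsiness_score(eye_openness, head_pitch_deg)
print(round(score, 2))  # → 0.77
```

In a real system the weights and thresholds would be learned or calibrated against annotated driving data rather than fixed by hand.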
In this project, the team at the University of Michigan (U-M), in collaboration with DENSO, will create computational methods, in particular deep learning algorithms, to understand and predict human behavior and affective state (distracted/focused, drowsy/active, etc.). The team will implement the above-mentioned algorithms in MATLAB and Python frameworks.
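One ingredient of the spatial-temporal analysis described above can be sketched with plain NumPy: frame-differencing motion energy over an image sequence. The synthetic "video" below (a bright patch shifting one pixel per frame) is purely illustrative; it stands in for facial motion and is not the project's actual data or method.

```python
import numpy as np

# Synthetic "video": 5 frames of 8x8 grayscale, with a bright patch
# that shifts one pixel per frame (an illustrative stand-in for motion).
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 2:4, t:t + 2] = 1.0

def motion_energy(frames):
    """Temporal feature: mean absolute difference between consecutive frames."""
    diffs = np.abs(np.diff(frames, axis=0))   # shape (T-1, H, W)
    return diffs.mean(axis=(1, 2))            # one motion value per transition

energy = motion_energy(frames)
print(energy.shape)  # → (4,)
```

Features of this kind, computed per face region, would feed the expression and motion understanding stage; a deep learning model would replace the hand-crafted difference with learned spatio-temporal filters.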