Determining the onset of driver's preparatory action for take-over in automated driving using multimodal data

Cited by: 2
Authors
Teshima, Takaaki [1 ]
Niitsuma, Masahiro [1 ]
Nishimura, Hidekazu [1 ]
Affiliations
[1] Keio University, Graduate School of System Design and Management, Collaboration Complex, 4-1-1 Hiyoshi, Kohoku-ku, Yokohama 223-8526, Japan
Keywords
Preparatory action; Take-over; Automated driving; Multimodal data fusion; Change-point detection
Keywords Plus
Human activity recognition; Model; Time; Situations; Drowsiness; Attention; Vehicles; System; Tasks
DOI
10.1016/j.eswa.2024.123153
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Automated driving technology has the potential to substantially reduce traffic accidents, a considerable portion of which are caused by human error. Nonetheless, until automated driving systems reach Level 5, at which the vehicle can drive itself under all road conditions, there will be situations that require driver intervention. In these situations, drivers perform actions to prepare for take-over, such as shifting their visual attention to the road, placing their hands on the steering wheel, and placing their feet on the pedals. Proper execution of these preparatory actions is critical for a safe take-over, so it is important to analyze and verify that the actions are initiated properly during take-over situations. However, capturing the actions for such analysis currently requires manual observation of video footage, which is laborious. We therefore propose a method that automatically determines the onset of a driver's preparatory action for a take-over. The method outputs a binary signal indicating the onset of the action, which can serve as an informative marker; for example, its timing can be used to verify whether a Human Machine Interface (HMI) under development effectively prompts the driver to initiate a preparatory action within the expected time frame. The method uses a multimodal fusion model to classify preparatory actions based on the driver's upper-body video, seat pressure, and eye potential at the temples. The onset of the preparatory action is then determined by applying a change-point detection technique to the time series of predicted probabilities produced by the classifier. We created a dataset of 300 take-over events collected from 30 subjects and evaluated the method using 5-fold cross-validation. The results show that the method classifies preparatory actions with an accuracy of 93.9% and determines their onset with a time error of 0.15 s.
Pages: 11
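The abstract describes a multimodal fusion model that classifies preparatory actions from the driver's upper-body video, seat pressure, and temple eye potential (EOG). Below is a minimal late-fusion sketch in PyTorch; the encoder architectures, feature dimensions, the two-class output ("no action" vs. "preparatory action"), and the name `MultimodalFusionClassifier` are illustrative assumptions, not the authors' published model.

```python
# A minimal late-fusion sketch. Encoders, dimensions, and class layout
# are assumptions for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    def __init__(self, video_dim=512, seat_dim=64, eog_dim=32, hidden=128):
        super().__init__()
        # Per-modality encoders are stand-ins; the real model would use,
        # e.g., a video CNN, a seat-pressure-map encoder, and an EOG encoder.
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.seat_enc = nn.Sequential(nn.Linear(seat_dim, hidden), nn.ReLU())
        self.eog_enc = nn.Sequential(nn.Linear(eog_dim, hidden), nn.ReLU())
        # Fusion head: concatenate modality features, predict two classes.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, video_feat, seat_feat, eog_feat):
        fused = torch.cat([
            self.video_enc(video_feat),
            self.seat_enc(seat_feat),
            self.eog_enc(eog_feat),
        ], dim=-1)
        return self.head(fused)  # logits; softmax gives class probabilities

# Usage: one fused prediction per time step of a take-over event.
model = MultimodalFusionClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 32))
prob_action = torch.softmax(logits, dim=-1)[0, 1].item()
```

Late fusion (concatenating per-modality features before a shared head) is one common way to combine sensors of very different rates and shapes; the paper's fusion scheme may differ.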
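The onset is then located by change-point detection on the per-frame probability series produced by the classifier. The sketch below uses a simple least-squares single change-point split as a stand-in; the specific change-point technique used in the paper is not detailed here, and `detect_onset` and the synthetic signal are hypothetical.

```python
# A minimal change-point sketch: find the index where the mean of the
# classifier's probability series shifts. This least-squares split is one
# simple detector, not necessarily the technique used in the paper.
import numpy as np

def detect_onset(probs: np.ndarray) -> int:
    """Return the index t that best splits probs into two constant segments."""
    n = len(probs)
    best_t, best_cost = 1, np.inf
    for t in range(1, n):  # candidate change points
        left, right = probs[:t], probs[t:]
        # Cost = within-segment sum of squared deviations from segment means.
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t

# Synthetic example: probability jumps when the preparatory action starts.
rng = np.random.default_rng(0)
probs = np.concatenate([rng.normal(0.05, 0.03, 60),   # before onset
                        rng.normal(0.95, 0.03, 40)])  # after onset
onset_frame = detect_onset(probs)
print(onset_frame)  # ~60; divide by the frame rate to get onset time in seconds
```

Converting the detected frame index to seconds via the sampling rate is how a time error such as the reported 0.15 s would be measured against a manually annotated onset.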