Determining the onset of driver's preparatory action for take-over in automated driving using multimodal data

Cited by: 2
Authors
Teshima, Takaaki [1 ]
Niitsuma, Masahiro [1 ]
Nishimura, Hidekazu [1 ]
Affiliations
[1] Keio Univ, Grad Sch Syst Design & Management, Collaborat Complex, 4-1-1 Hiyoshi, Kohoku Ku, Yokohama 2238526, Japan
Keywords
Preparatory action; Take-over; Automated driving; Multimodal data fusion; Change-point detection; Human activity recognition; Model; Time; Situations; Drowsiness; Attention; Vehicles; System; Tasks
DOI
10.1016/j.eswa.2024.123153
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automated driving technology has the potential to substantially reduce traffic accidents, a considerable portion of which are caused by human error. Nonetheless, until automated driving systems reach Level 5, at which a vehicle can drive automatically under all road conditions, there will be situations that require driver intervention. In such situations, drivers perform actions to prepare for the take-over, including shifting their visual attention to the road, placing their hands on the steering wheel, and placing their feet on the pedals. Proper execution of these preparatory actions is critical for a safe take-over, so it is important to analyze and verify that the actions are initiated appropriately during take-over situations. However, capturing preparatory actions currently requires manual observation of video footage, which is laborious. We therefore propose a method that automatically determines the onset of a driver's preparatory action for a take-over. The method outputs a binary signal indicating the onset of the action, which can serve as an informative marker; for example, its timing can be used to verify whether a Human Machine Interface (HMI) under development effectively prompts the driver to initiate a preparatory action within the expected time frame. The method uses a multimodal fusion model to classify preparatory actions based on the driver's upper-body video, seat pressure, and eye potential measured at the temples. The onset of the preparatory action is then determined by applying a change-point detection technique to the time series of predicted probabilities produced by the classifier. We created a dataset of 300 take-over events collected from 30 subjects and evaluated the method using 5-fold cross-validation. The results show that the method classifies preparatory actions with an accuracy of 93.9% and determines their onset with a time error of 0.15 s.
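The abstract does not specify which change-point technique the authors use, but the second stage of the pipeline can be illustrated with a minimal sketch: a classic least-squares single change-point detector applied to the classifier's frame-wise predicted probabilities. The function name, the 10 Hz frame rate, and the synthetic probability trace below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def single_changepoint(probs: np.ndarray) -> int:
    """Least-squares single change-point detection for a mean shift.

    Returns the split index t that minimizes the total within-segment
    squared error of probs[:t] and probs[t:], i.e. the most likely onset
    of a sustained rise in the predicted probability series.
    """
    n = len(probs)
    best_t, best_cost = 1, np.inf
    for t in range(1, n):
        left, right = probs[:t], probs[t:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Illustrative use: frame-wise "preparatory action" probabilities from a
# hypothetical multimodal classifier sampled at 10 Hz; the driver starts
# preparing around frame 40.
rng = np.random.default_rng(0)
probs = np.concatenate([
    rng.normal(0.1, 0.05, 40),   # "not preparing" segment
    rng.normal(0.9, 0.05, 60),   # "preparing" segment
]).clip(0.0, 1.0)

onset_frame = single_changepoint(probs)
print(f"estimated onset: frame {onset_frame} ({onset_frame / 10:.1f} s)")
```

Thresholding the raw probabilities frame by frame would be noisier; fitting a change point over the whole window exploits the fact that a genuine preparatory action produces a sustained, not momentary, shift in the predicted probability.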
Pages: 11