Towards Sustainable Safe Driving: A Multimodal Fusion Method for Risk Level Recognition in Distracted Driving Status

Cited by: 5
Authors
Chen, Huiqin [1 ]
Liu, Hao [1 ]
Chen, Hailong [1 ]
Huang, Jing [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Coll Mech Engn, Hangzhou 310018, Peoples R China
[2] Hunan Univ, Coll Mech & Vehicle Engn, Changsha 410082, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
distracted driving status; vision-sensor fusion transformer; multimodal information; risk level recognition; PERFORMANCE; VEHICLES;
DOI
10.3390/su15129661
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline code
08; 0830;
Abstract
Precise driving status recognition is a prerequisite for human-vehicle collaborative driving systems aimed at sustainable road safety. In this study, a simulated driving platform was built to capture multimodal information simultaneously, including vision-modal data representing driver behaviour and sensor-modal data representing vehicle motion. The multisource data are used to quantify the risk of distracted driving status at four levels, namely safe driving, slight risk, moderate risk, and severe risk, rather than to detect action categories. A multimodal fusion method called the vision-sensor fusion transformer (V-SFT) was proposed to incorporate the vision-modal data of driver behaviour and the sensor-modal data of vehicle motion. Feature concatenation was employed to aggregate the representations of the different modalities; successive internal interactions were then performed to capture spatiotemporal dependencies. Finally, the representations were clipped and mapped into the four risk-level label spaces. The proposed approach was evaluated under different modality inputs on the collected datasets and compared with several baseline methods. The results showed that V-SFT achieved the best performance, with a recognition accuracy of 92.0%. They also indicate that fusing multimodal information effectively improves driving status understanding, and that the extensibility of V-SFT is conducive to integrating additional modal data.
Pages: 22