Driver Anomaly Quantification for Intelligent Vehicles: A Contrastive Learning Approach With Representation Clustering

Cited by: 39
Authors
Hu, Zhongxu [1 ]
Xing, Yang [2 ]
Gu, Weihao [3 ]
Cao, Dongpu [4 ]
Lv, Chen [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
[2] Cranfield Univ, Ctr Autonomous & Cyber Phys Syst, Bedford MK43 0AL, England
[3] Haomo AI, Beijing, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100192, Peoples R China
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES | 2023, Vol. 8, No. 1
Keywords
Vehicles; Feature extraction; Task analysis; Training; Computational modeling; Measurement; Data models; Driver anomaly; online quantification; continuous variable; contrastive learning; representation clustering; CONVOLUTIONAL NEURAL-NETWORK; SAFE;
DOI
10.1109/TIV.2022.3163458
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Driver anomaly quantification is a fundamental capability for supporting human-centric driving systems in intelligent vehicles. Existing studies usually treat it as a classification task that yields discrete abnormality levels. Moreover, existing data-driven approaches depend on the quality of the dataset and offer limited ability to recognize unknown activities. To overcome these challenges, this paper proposes a contrastive learning approach aimed at building a model that can quantify driver anomalies with a continuous variable. In addition, a novel clustering supervised contrastive loss is proposed to optimize the distribution of the extracted representation vectors and thereby improve model performance. Compared with the typical contrastive loss, the proposed loss better clusters normal representations while separating abnormal ones. The abnormality of a driver activity is quantified by computing the distance to a set of representations of normal activities rather than being produced as the direct output of the model. Experimental results on datasets under different modes demonstrate that the proposed approach is more accurate and robust than existing ones in recognizing and quantifying unknown abnormal activities.
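To make the distance-based quantification concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes an encoder has already mapped driver clips to L2-normalised representation vectors, uses a simplified supervised contrastive loss that clusters normal embeddings while treating all other pairs as negatives (not the paper's exact clustering supervised contrastive loss), and defines the anomaly score as the cosine distance from a query representation to its nearest neighbour in a bank of normal-activity representations. Names such as clustering_contrastive_loss, anomaly_score, normal_bank, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def clustering_contrastive_loss(z, labels, temperature=0.07):
    """Simplified clustering-style supervised contrastive loss (illustrative):
    normal samples (label 0) are pulled together as positives, while all other
    pairs act as negatives in the denominator."""
    z = F.normalize(z, dim=1)                             # (N, D) unit vectors
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    normal = labels == 0
    pos_mask = normal.unsqueeze(0) & normal.unsqueeze(1)  # normal-normal pairs
    pos_mask.fill_diagonal_(False)                        # exclude self-pairs
    exp_sim = torch.exp(sim)
    exp_sim = exp_sim - torch.diag(torch.diag(exp_sim))   # drop self-similarity
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss[normal].mean()                            # average over normal anchors


def anomaly_score(z_query, normal_bank):
    """Continuous anomaly value: cosine distance from a query representation
    (D,) to its nearest neighbour in a bank (M, D) of normal representations."""
    z_query = F.normalize(z_query, dim=-1)
    bank = F.normalize(normal_bank, dim=-1)
    closest_sim = (bank @ z_query).max()                  # similarity to closest normal vector
    return (1.0 - closest_sim).item()                     # ~0 = normal, larger = more abnormal


# Toy usage with hypothetical data: 6 embeddings, first 4 normal, last 2 abnormal.
z = torch.randn(6, 128, requires_grad=True)
labels = torch.tensor([0, 0, 0, 0, 1, 1])
clustering_contrastive_loss(z, labels).backward()
score = anomaly_score(torch.randn(128), normal_bank=torch.randn(100, 128))
```

Under this sketch, a score near 0 indicates a representation close to the normal cluster and larger values indicate increasing abnormality, which mirrors the paper's idea of a continuous anomaly variable derived from representation distances rather than a classifier's discrete output.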
Pages: 37-47
Number of pages: 11