Decision making of autonomous vehicles in lane change scenarios: Deep reinforcement learning approaches with risk awareness

Cited by: 160
Authors
Li, Guofa [1 ]
Yang, Yifan [1 ]
Li, Shen [2 ]
Qu, Xingda [1 ]
Lyu, Nengchao [3 ]
Li, Shengbo Eben [4 ]
Affiliations
[1] Shenzhen Univ, Inst Human Factors & Ergon, Coll Mechatron & Control Engn, Shenzhen 518060, Peoples R China
[2] Tsinghua Univ, Dept Civil Engn, Beijing 100084, Peoples R China
[3] Wuhan Univ Technol, Intelligent Transportat Syst Res Ctr, Wuhan 430063, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, State Key Lab Automot Safety & Energy, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Driving safety; Driving risk; Autonomous vehicle; Driver assistance system; Reinforcement learning; FRAMEWORK; EFFICIENT; BEHAVIOR;
DOI
10.1016/j.trc.2021.103452
Chinese Library Classification (CLC)
U [Transportation];
Subject classification code
08; 0823;
Abstract
Driving safety is the most important element to be considered for autonomous vehicles (AVs). To ensure driving safety, we proposed a lane change decision-making framework based on deep reinforcement learning to find a risk-aware driving decision strategy with the minimum expected risk for autonomous driving. First, a probabilistic-model-based risk assessment method was proposed to assess the driving risk using position uncertainty and distance-based safety metrics. Then, a risk-aware decision-making algorithm was proposed to find a strategy with the minimum expected risk using deep reinforcement learning. Finally, the proposed methods were evaluated in CARLA in two scenarios (one with static obstacles and one with dynamically moving vehicles). The results show that the proposed methods can generate robust, safe driving strategies and achieve better driving performance than previous methods.
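To make the risk-aware reward idea described in the abstract concrete, the following is a minimal illustrative sketch rather than the authors' implementation: position uncertainty is modelled as a Gaussian over a measured longitudinal gap, a distance-based safety metric converts sampled gaps into a collision risk, and the reward trades expected risk against travel efficiency. The function names, the 10 m safe-gap threshold, and the Gaussian uncertainty model are all assumptions made for illustration.

```python
# Minimal sketch of a risk-aware reward for lane-change decision making.
# NOT the paper's implementation; thresholds and the Gaussian model are assumed.
import numpy as np

def collision_risk(gap_mean, gap_std, safe_gap=10.0, n_samples=1000):
    """Expected risk of a longitudinal gap, with position uncertainty
    modelled as a Gaussian over the measured gap (illustrative assumption)."""
    samples = np.random.normal(gap_mean, gap_std, n_samples)
    # Distance-based safety metric: risk grows as the sampled gap shrinks
    # below the assumed safe gap, saturating at 1 when the gap closes.
    risk = np.clip((safe_gap - samples) / safe_gap, 0.0, 1.0)
    return float(risk.mean())

def risk_aware_reward(gap_mean, gap_std, speed, target_speed=15.0,
                      w_risk=1.0, w_eff=0.1):
    """Reward trading off expected collision risk against travel efficiency;
    a deep RL agent maximizing this reward minimizes expected risk."""
    return (-w_risk * collision_risk(gap_mean, gap_std)
            - w_eff * abs(speed - target_speed) / target_speed)

# Example: a noisy 8 m gap at 12 m/s yields a negative, risk-dominated reward.
print(risk_aware_reward(gap_mean=8.0, gap_std=1.5, speed=12.0))
```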
Pages: 18
Related papers
64 records in total
[11] Dosovitskiy A., 2017, Conference on Robot Learning.
[12] Duan, Jingliang; Guan, Yang; Li, Shengbo Eben; Ren, Yangang; Sun, Qi; Cheng, Bo. Distributional Soft Actor-Critic: Off-Policy Reinforcement Learning for Addressing Value Estimation Errors. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(11): 6584-6598.
[13] Duan, Jingliang; Eben Li, Shengbo; Guan, Yang; Sun, Qi; Cheng, Bo. Hierarchical reinforcement learning for self-driving decision-making without reliance on labelled driving data. IET Intelligent Transport Systems, 2020, 14(5): 297-305.
[14] Geiger A., 2012, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 3354. DOI 10.1109/CVPR.2012.6248074.
[15] Glaser, Sebastien; Vanholme, Benoit; Mammar, Said; Gruyer, Dominique; Nouveliere, Lydie. Maneuver-Based Trajectory Planning for Highly Autonomous Vehicles on Real Road With Traffic and Driver Interaction. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(3): 589-606.
[16] Grigorescu, Sorin; Trasnea, Bogdan; Cocias, Tiberiu; Macesanu, Gigel. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 2020, 37(3): 362-386.
[17] Hoel, Carl-Johan; Driggs-Campbell, Katherine; Wolff, Krister; Laine, Leo; Kochenderfer, Mykel J. Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving. IEEE Transactions on Intelligent Vehicles, 2020, 5(2): 294-305.
[18] Huang, Yanjun; Ding, Haitao; Zhang, Yubiao; Wang, Hong; Cao, Dongpu; Xu, Nan; Hu, Chuan. A Motion Planning and Tracking Framework for Autonomous Vehicles Based on Artificial Potential Field Elaborated Resistance Network Approach. IEEE Transactions on Industrial Electronics, 2020, 67(2): 1376-1386.
[19] Ioffe S., 2015, Proceedings of Machine Learning Research, Vol. 37, p. 448.
[20] Kahn Gregory, 2017, arXiv preprint.