A Safety-Enhanced Reinforcement Learning-Based Decision-Making and Motion Planning Method for Left-Turning at Unsignalized Intersections for Automated Vehicles

Times Cited: 0
Authors
Zhang, Lei [1 ]
Cheng, Shuhui [2 ]
Wang, Zhenpo [2 ]
Liu, Jizheng [2 ]
Wang, Mingqiang [2 ]
Affiliations
[1] Beijing Inst Technol, Adv Technol Res Inst, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Natl Engn Res Ctr Elect Vehicles, Beijing 100081, Peoples R China
Keywords
Decision making; Safety; Planning; Turning; Hidden Markov models; Trajectory; Switches; Automated vehicles; deep reinforcement learning; partially observable Markov decision process; turning intention recognition; MODEL; PREDICTION; STRATEGY;
DOI
10.1109/TVT.2024.3424523
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Left-turning at unsignalized intersections poses significant challenges for automated vehicles. In this regard, Deep Reinforcement Learning (DRL) methods can achieve better traffic efficiency and higher success rates than rule-based methods, but they occasionally lead to collisions. This paper proposes a safety-enhanced method that integrates DRL with the Dimensionality Reduction Monte Carlo Tree Search (DRMCTS) algorithm to achieve safe trajectory planning at unsignalized intersections. First, DRMCTS is employed to solve the partially observable Markov decision process problem; through dimensionality reduction, it effectively improves computational efficiency and problem-solving performance. Then a unified framework is introduced in which DRL and the Gaussian Mixture Model Hidden Markov Model (GMM-HMM) run simultaneously in real time: DRL determines actions in the current state, while GMM-HMM identifies the turning intentions of surrounding vehicles (SVs). Under safe driving conditions, DRL makes decisions and outputs longitudinal acceleration with optimized ride comfort and traffic efficiency. When unsafe driving conditions are detected, DRMCTS is activated to generate a collision-free trajectory that enhances the ego vehicle's driving safety. Comprehensive simulations show that the proposed scheme achieves superior traffic efficiency and reduced collision rates at unsignalized intersections with multiple SVs present.
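To make the described framework concrete, the sketch below illustrates the safety-based switching logic summarized in the abstract: a DRL policy supplies longitudinal acceleration under safe conditions, a GMM-HMM-style module estimates SV turning intentions, and a fallback planner standing in for DRMCTS is invoked when the situation is judged unsafe. This is a minimal illustration assuming hypothetical class names, interfaces, and a simplified time-to-collision safety check; it is not the authors' implementation.

```python
import numpy as np

# All names and interfaces below are illustrative assumptions,
# not the paper's actual code.

class DRLPolicy:
    """Stand-in for a trained DRL agent mapping the observed state to a
    longitudinal acceleration command (comfort/efficiency oriented)."""
    def act(self, state: np.ndarray) -> float:
        # A neural-network forward pass in the real system; here a toy rule.
        return float(np.clip(-0.1 * state[0], -3.0, 2.0))

class GMMHMMIntentionRecognizer:
    """Stand-in for the GMM-HMM module inferring each surrounding vehicle's
    turning intention (left / straight / right) from its recent track."""
    def infer(self, sv_track: np.ndarray) -> str:
        heading_change = sv_track[-1, 2] - sv_track[0, 2]
        if heading_change > 0.2:
            return "left"
        if heading_change < -0.2:
            return "right"
        return "straight"

class DRMCTSPlanner:
    """Stand-in for the dimensionality-reduction MCTS planner that returns a
    collision-free trajectory when the nominal action is judged unsafe."""
    def plan(self, state, intentions):
        # The paper solves a POMDP via tree search; here we simply return a
        # conservative braking profile as a placeholder trajectory.
        return [(-2.0, 0.1 * k) for k in range(20)]

def is_safe(state, intentions, ttc_threshold=3.0) -> bool:
    """Illustrative safety check: time-to-collision against the closest SV."""
    ttc = state[1]  # assume the observation carries a precomputed TTC
    return ttc > ttc_threshold or all(i == "straight" for i in intentions)

def decide(state, sv_tracks, drl, recognizer, planner):
    intentions = [recognizer.infer(track) for track in sv_tracks]
    if is_safe(state, intentions):
        return "drl", drl.act(state)                    # nominal control
    return "drmcts", planner.plan(state, intentions)    # safety fallback

if __name__ == "__main__":
    ego_state = np.array([8.0, 2.5])                    # [speed, TTC], illustrative
    sv_tracks = [np.array([[0.0, 0.0, 0.0],
                           [1.0, 1.0, 0.3]])]           # one SV, turning left
    mode, action = decide(ego_state, sv_tracks, DRLPolicy(),
                          GMMHMMIntentionRecognizer(), DRMCTSPlanner())
    print(mode, action)
```

In this toy run the low time-to-collision together with a detected left-turn intention triggers the fallback planner, mirroring the abstract's description of DRMCTS taking over only when unsafe conditions are detected.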
Pages: 16375-16388
Number of Pages: 14