Autonomous vehicle extreme control for emergency collision avoidance via reachability-guided reinforcement learning

Cited by: 4
Authors
Zhao, Shiyue [1 ]
Zhang, Junzhi [1 ,2 ]
He, Chengkun [1 ]
Ji, Yuan [3 ]
Huang, Heye [4 ]
Hou, Xiaohui [1 ]
Affiliations
[1] Tsinghua Univ, Sch Vehicle & Mobil, Beijing, Peoples R China
[2] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing, Peoples R China
[3] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
[4] Univ Wisconsin Madison, Dept Civil & Environm Engn, Madison, WI 53706 USA
Keywords
Autonomous vehicles; Collision avoidance; Extreme control; Min-BRT; Reachability-guided RL
DOI
10.1016/j.aei.2024.102801
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The emergency collision avoidance capabilities of autonomous vehicles (AVs) are crucial for enhancing their active safety performance, particularly in extreme scenarios where standard methods fall short. This study introduces an Extreme Maneuver Controller (EMC) for AVs, utilizing reachability-guided reinforcement learning (RL) to address these challenging situations. By applying pseudospectral methods, we solve the minimum backward reachable tube (Min-BRT) to identify regions where conventional avoidance maneuvers are infeasible, establishing a theoretical basis for triggering extreme maneuvers. A novel controller, employing reachability-guided RL, enables vehicles to execute extreme maneuvers to escape these critical regions. During training, the value function derived from the Min-BRT solution informs the initialization of the Critic networks, enhancing training efficiency. Experimental results with actual vehicles in real-world scenarios validate that the proposed policy effectively executes beyond-the-limit maneuvers, mitigating collision risks under emergency conditions. Furthermore, these extreme maneuvers are executed with minimal deviation from the original driving objectives, ensuring a smooth and stable transition upon completion of extreme maneuvers.
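The abstract's idea of warm-starting the Critic from the Min-BRT solution can be sketched as follows: fit the critic network to a precomputed value table before RL training begins, so that early value estimates already encode which states lie inside the unavoidable-collision tube. This is a minimal illustration only; the 1-D state grid, the placeholder signed-distance-style value table, and the tiny NumPy critic are assumptions for demonstration, not the paper's actual vehicle dynamics, pseudospectral solver, or network architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class TinyCritic:
    """Two-layer MLP critic V(s), trained by full-batch gradient descent
    to regress a supplied value table (supervised pretraining)."""

    def __init__(self, state_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, S):
        # Cache the hidden activations for the backward pass.
        self.h = relu(S @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def fit(self, S, V, lr=1e-2, epochs=1000):
        n = len(S)
        target = V.reshape(-1, 1)
        for _ in range(epochs):
            err = self.forward(S) - target          # MSE gradient term
            gW2 = self.h.T @ err / n
            gb2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * (self.h > 0)   # backprop through ReLU
            gW1 = S.T @ dh / n
            gb1 = dh.mean(axis=0)
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
            self.W2 -= lr * gW2; self.b2 -= lr * gb2

# Placeholder "Min-BRT" value table on a 1-D state grid: negative values
# mark states inside the tube where conventional avoidance is infeasible.
# (In the paper this table would come from the pseudospectral Min-BRT solve.)
s = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
v_brt = np.abs(s).ravel() - 1.0

critic = TinyCritic(state_dim=1)
mse_before = float(np.mean((critic.forward(s).ravel() - v_brt) ** 2))
critic.fit(s, v_brt)
mse_after = float(np.mean((critic.forward(s).ravel() - v_brt) ** 2))
```

After pretraining, the critic's weights would be handed to the RL algorithm (e.g., an actor-critic method) as the initialization, which is the efficiency gain the abstract attributes to reachability guidance.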
Pages: 16