Highway Decision-Making and Motion Planning for Autonomous Driving via Soft Actor-Critic

Cited by: 76
Authors
Tang, Xiaolin [1 ]
Huang, Bing [1 ]
Liu, Teng [2 ]
Lin, Xianke [3 ]
Affiliations
[1] Chongqing Univ, Coll Mech & Vehicle Engn, Chongqing 400044, Peoples R China
[2] Univ Waterloo, Dept Mech & Mechatron Engn, Waterloo, ON N2L 3G1, Canada
[3] Ontario Tech Univ, Dept Automot & Mechatron Engn, Oshawa, ON L1G 0C5, Canada
Funding
National Natural Science Foundation of China;
Keywords
Planning; Road transportation; Decision making; Autonomous vehicles; Aerospace electronics; Safety; Vehicle dynamics; Decision-making and planning; autonomous driving; highway driving scenario; continuous action space; deep reinforcement learning; soft actor-critic;
DOI
10.1109/TVT.2022.3151651
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
In this study, a decision-making and motion planning controller with a continuous action space is constructed for the highway driving scenario based on deep reinforcement learning. The goal of the decision-making and planning problem is to achieve safe, efficient, and comfortable operation of automated vehicles. In the driving scenario, the surrounding vehicles are controlled by the intelligent driver model (IDM) and the MOBIL lane-change model (Minimizing Overall Braking Induced by Lane changes), which enables them to react to the environment and mimic vehicle interactions on the highway. Given the uncertainties in the driving conditions, a specific deep reinforcement learning technique, soft actor-critic, is used to solve the decision-making and planning problem with continuous action space. Simulation results show that the proposed method solves the decision-making and motion planning problem in an interactive traffic environment, carrying out safe lane-change maneuvers and cruising at high speed. In addition, two control policies are developed with different weights on safety, efficiency, and comfort.
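The longitudinal behavior of the surrounding vehicles described in the abstract follows the standard intelligent driver model (IDM). A minimal sketch of that car-following law is given below; the function name and parameter values are illustrative defaults, not the settings used in the paper.

```python
# Illustrative sketch of the Intelligent Driver Model (IDM) governing
# the surrounding vehicles' longitudinal motion. Parameter values are
# common textbook defaults, not the paper's configuration.

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # desired time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=2.0,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum jam distance (m)
                     delta=4.0): # acceleration exponent
    """Longitudinal acceleration of a follower at speed v, behind a
    leader at speed v_lead, with bumper-to-bumper gap (m)."""
    dv = v - v_lead  # approach rate toward the leader
    # Desired dynamic gap: jam distance + headway term + braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a_max * b) ** 0.5))
    # Free-road acceleration reduced by the interaction term.
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

For example, a follower standing still far behind a fast leader accelerates at nearly `a_max`, while a follower closing rapidly on a nearby leader brakes hard; the MOBIL model then decides lane changes by comparing such accelerations before and after a hypothetical change.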
Pages: 4706-4717
Page count: 12