Parallel Planning: A New Motion Planning Framework for Autonomous Driving

Cited by: 108
Authors
Chen, Long [1 ]
Hu, Xuemin [2 ]
Tian, Wei [3 ]
Wang, Hong [4 ]
Cao, Dongpu [4 ]
Wang, Fei-Yue [5 ,6 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510275, Guangdong, Peoples R China
[2] Hubei Univ, Sch Comp Sci & Informat Engn, Wuhan 430062, Hubei, Peoples R China
[3] Karlsruhe Inst Technol, Inst Measurement & Control Syst, D-76131 Karlsruhe, Germany
[4] Univ Waterloo, Dept Mech & Mechatron Engn, 200 Univ Ave West, Waterloo, ON N2L 3G1, Canada
[5] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
[6] Natl Univ Def Technol, Mil Computat Expt & Parallel Syst Technol, Res Ctr, Changsha 410073, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; artificial traffic scene; deep learning; emergencies; motion planning; parallel planning; MANAGEMENT; ROAD;
DOI
10.1109/JAS.2018.7511186
CLC Classification
TP [Automation technology, computer technology];
Discipline Code
0812 ;
Abstract
Motion planning is one of the most significant technologies for autonomous driving. To enable motion planning models to learn from the environment and to deal with emergency situations, a new motion planning framework called "parallel planning" is proposed in this paper. To generate sufficient and varied training samples, artificial traffic scenes are first constructed based on knowledge extracted from real traffic. A deep planning model that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) module is developed to make planning decisions in an end-to-end mode. This model can learn from both real and artificial traffic scenes and imitate the driving style of human drivers. Moreover, a parallel deep reinforcement learning approach is also presented to improve the robustness of the planning model and reduce the error rate. To handle emergency situations, a hybrid generative model comprising a variational auto-encoder (VAE) and a generative adversarial network (GAN) is utilized to learn from virtual emergencies generated in artificial traffic scenes. While an autonomous vehicle is moving, the hybrid generative model generates multiple video clips in parallel, which correspond to different potential emergency scenarios. Simultaneously, the deep planning model makes planning decisions for both virtual and current real scenes. The final planning decision is determined by analysis of real observations. Leveraging the parallel planning approach, the planner is able to make rational decisions without a heavy calculation burden when an emergency occurs.
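The abstract's core component is a CNN+LSTM network that maps a sequence of camera frames to a planning decision end-to-end. The sketch below is not the authors' code: the layer sizes, input resolution, sequence length, and the two-dimensional output head (e.g. steering and speed) are illustrative assumptions, shown only to make the CNN-then-LSTM structure concrete.

```python
import torch
import torch.nn as nn

class DeepPlanner(nn.Module):
    """Hedged sketch of a CNN+LSTM end-to-end planner (assumed architecture)."""

    def __init__(self, hidden_size=128):
        super().__init__()
        # CNN encoder: extracts spatial features from each individual frame
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # LSTM: models temporal dependencies across the frame sequence
        self.lstm = nn.LSTM(32 * 4 * 4, hidden_size, batch_first=True)
        # Output head: a planning decision per sequence (assumed: steering, speed)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) -- fold time into the batch for the CNN
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        # Decision is read from the last time step's hidden state
        return self.head(out[:, -1])

planner = DeepPlanner()
decision = planner(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(decision.shape)  # torch.Size([2, 2])
```

In the paper's parallel-planning setting, the same network would be run on both real observations and the VAE/GAN-generated virtual clips, with the final decision selected against real observations.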
Pages: 236-246 (11 pages)