Learning Dynamic Graph for Overtaking Strategy in Autonomous Driving

Cited by: 7
Authors
Hu, Xuemin [1 ]
Liu, Yanfang [1 ]
Tang, Bo [2 ]
Yan, Junchi [3 ]
Chen, Long [4 ,5 ]
Affiliations
[1] Hubei Univ, Sch Artificial Intelligence, Wuhan 430062, Hubei, Peoples R China
[2] Worcester Polytech Inst, Dept Elect & Comp Engn, Worcester, MA 01609 USA
[3] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[4] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[5] Waytous Inc, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; graph convolutional network; trainable adjacency matrix; overtaking; dynamic graph;
DOI
10.1109/TITS.2023.3287223
Chinese Library Classification
TU [Architectural Science];
Discipline code
0813 ;
Abstract
Automatic overtaking is a challenging task for self-driving vehicles. Traditional rule-based overtaking methods in autonomous driving rely heavily on many predefined rules and are difficult to apply in complex driving scenarios. Learning-based methods usually use convolutional networks, recurrent networks, multilayer perceptrons, etc., to extract features from environments, but they fail to effectively represent geometric and interactive information among traffic participants. Classic graph convolutional networks (GCNs) can represent graph-structural information, but when applied to autonomous driving they are limited to static relationship representation because of their fixed adjacency matrix. In this paper, we propose a novel dynamic graph learning method based on a graph convolutional network with a trainable adjacency matrix (TAM-GCN) to enable the learning of dynamic relationships among different nodes in an ever-changing driving scene. In addition, we develop a planning method for the overtaking strategy in autonomous driving, where the proposed TAM-GCN is used to extract spatial graph-structural features, select an appropriate overtaking time, and generate efficient overtaking actions. The proposed model is trained using imitation learning. We conduct comprehensive closed-loop and open-loop experiments in the CARLA simulator and compare our method with state-of-the-art methods. Experimental results demonstrate that the proposed method achieves better accuracy, safety, and overtaking performance than existing methods.
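The abstract's core idea, replacing a GCN's fixed adjacency matrix with learned edge weights, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual architecture: the class name `TAMGraphConv`, the row-softmax normalization, and all dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class TAMGraphConv(nn.Module):
    """One graph-convolution layer with a trainable adjacency matrix.

    Instead of a fixed, precomputed adjacency, the adjacency logits are
    learnable parameters, so edge weights between traffic participants
    (nodes) can adapt during training as the driving scene changes.
    """

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency logits; softmax makes each row a
        # normalized weight distribution over neighbor nodes.
        self.adj_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim)
        adj = torch.softmax(self.adj_logits, dim=-1)  # row-stochastic adjacency
        # Aggregate neighbor features with the learned edge weights.
        return torch.relu(adj @ self.linear(x))


# Toy usage: 5 traffic participants, 8 input features each.
layer = TAMGraphConv(num_nodes=5, in_dim=8, out_dim=16)
out = layer(torch.randn(2, 5, 8))  # (batch=2, nodes=5, out_dim=16)
```

Because `adj_logits` is an `nn.Parameter`, gradients from the imitation-learning loss flow into the adjacency itself, which is what allows the graph structure to be "dynamic" rather than fixed at construction time.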
Pages: 11921-11933
Page count: 13
Cited references
44 records
[1]  
[Anonymous], 2017, Electron. Imaging, DOI 10.2352/
[2]  
Bojarski Mariusz, 2016, arXiv
[3]   Deep representation learning for human motion prediction and classification [J].
Butepage, Judith ;
Black, Michael J. ;
Kragic, Danica ;
Kjellstrom, Hedvig .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :1591-1599
[4]  
Chen CG, 2019, IEEE INT CONF ROBOT, P6015, DOI [10.1109/icra.2019.8794134, 10.1109/ICRA.2019.8794134]
[5]  
Chen JY, 2019, IEEE INT C INT ROBOT, P2884, DOI [10.1109/iros40897.2019.8968225, 10.1109/IROS40897.2019.8968225]
[6]   Milestones in Autonomous Driving and Intelligent Vehicles: Survey of Surveys [J].
Chen, Long ;
Li, Yuchen ;
Huang, Chao ;
Li, Bai ;
Xing, Yang ;
Tian, Daxin ;
Li, Li ;
Hu, Zhongxu ;
Na, Xiaoxiang ;
Li, Zixuan ;
Teng, Siyu ;
Lv, Chen ;
Wang, Jinjun ;
Cao, Dongpu ;
Zheng, Nanning ;
Wang, Fei-Yue .
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (02) :1046-1056
[7]   Parallel Driving OS: A Ubiquitous Operating System for Autonomous Driving in CPSS [J].
Chen, Long ;
Zhang, Yunqing ;
Tian, Bin ;
Ai, Yunfeng ;
Cao, Dongpu ;
Wang, Fei-Yue .
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2022, 7 (04) :886-895
[8]   Conditional DQN-Based Motion Planning With Fuzzy Logic for Autonomous Driving [J].
Chen, Long ;
Hu, Xuemin ;
Tang, Bo ;
Cheng, Yu .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (04) :2966-2977
[9]   Real-Time Trajectory Planning for Autonomous Driving with Gaussian Process and Incremental Refinement [J].
Cheng, Jie ;
Chen, Yingbing ;
Zhang, Qingwen ;
Gan, Lu ;
Liu, Chengju ;
Liu, Ming .
2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, :8999-9005
[10]  
Chitta K., 2021, ARXIV