Yaw-guided end-to-end imitation learning for autonomous driving in urban environments

Cited: 0
Authors
Xu, Qingchao [1 ]
Yang, Xingfu [1 ]
Zhang, Shilong [1 ]
Liu, Yandong [1 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian 116622, Peoples R China
Keywords
End-to-end; Imitation learning; Autonomous driving; Yaw guidance;
DOI
10.1007/s13042-025-02751-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing imitation learning methods, such as CIL, face significant limitations in data utilization and generalization ability when addressing the road-option problem in urban environments. These methods often struggle with insufficient parameter optimization in branch networks and fail to adapt to dynamic and complex scenarios, such as intersections with dense traffic. To overcome these challenges, we propose Yaw-guided Imitation Learning with ResNet34 Attention (YILRatt), a novel end-to-end autonomous driving framework that leverages yaw-angle guidance and an attention mechanism to enhance sample efficiency and adaptability. YILRatt utilizes yaw information derived from navigation-map trajectories, eliminating the need for HD maps and enabling fully end-to-end operation with consumer-grade GPS receivers. The integration of ResNet34 and the attention mechanism ensures accurate perception and provides interpretability through attention heatmaps, which reveal causal relationships between decision-making and scene perception. Experimental results in the CARLA 0.9.11 simulator, on the improved CoRL2017 and NoCrash benchmarks, show that YILRatt achieves a 26.27% higher success rate than CILRS. The improvement is particularly evident in dense-traffic scenarios, where the attention mechanism effectively captures dynamic obstacles and enhances navigation performance. By addressing the limitations of existing methods, YILRatt offers a robust and interpretable solution for autonomous driving in complex urban environments.
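The abstract states that the yaw guidance is derived from navigation-map trajectories rather than HD maps. The paper's actual pipeline is not reproduced in this record; as a minimal illustrative sketch (function and variable names are our own, not the authors'), the per-segment yaw along a sparse GPS waypoint route can be obtained with `atan2` over consecutive waypoint pairs:

```python
import math

def yaw_guidance(waypoints):
    """Compute per-segment yaw angles (radians) along a sparse
    navigation trajectory, e.g. consumer-grade GPS waypoints.

    waypoints: list of (x, y) positions in a planar map frame.
    Returns one yaw per segment: the heading from each waypoint
    to the next, measured counter-clockwise from the +x axis.
    """
    yaws = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        yaws.append(math.atan2(y1 - y0, x1 - x0))
    return yaws

# Example: a route that heads east, then turns north at an intersection.
route = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
print(yaw_guidance(route))  # east segment -> 0.0, north segment -> pi/2
```

In a framework like the one described, such a yaw signal would replace the discrete high-level road-option command of CIL-style branch networks as the conditioning input to the policy; how it is fused with the ResNet34 image features is specific to the paper.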
Pages: 13
Related papers
26 items
[1]  
Bojarski M, 2016, arXiv
[2]  
Cai P, 2020, Learning scalable self-driving policies for generic traffic scenarios
[3]  
Chen D, 2022, arXiv
[4]   Exploring Behavioral Patterns of Lane Change Maneuvers for Human-Like Autonomous Driving [J].
Chen, Yaoyu ;
Li, Guofa ;
Li, Shen ;
Wang, Wenjun ;
Li, Shengbo Eben ;
Cheng, Bo .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) :14322-14335
[5]   Exploring the Limitations of Behavior Cloning for Autonomous Driving [J].
Codevilla, Felipe ;
Santana, Eder ;
Lopez, Antonio M. ;
Gaidon, Adrien .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :9328-9337
[6]  
Codevilla F, 2018, IEEE INT CONF ROBOT, P4693
[7]  
Hawke J, 2020, IEEE INT CONF ROBOT, P251, DOI [10.1109/icra40945.2020.9197408, 10.1109/ICRA40945.2020.9197408]
[8]   Learning Accurate and Human-Like Driving using Semantic Maps and Attention [J].
Hecker, Simon ;
Dai, Dengxin ;
Liniger, Alexander ;
Hahner, Martin ;
Van Gool, Luc .
2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, :2346-2353
[9]   End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners [J].
Hecker, Simon ;
Dai, Dengxin ;
Van Gool, Luc .
COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 :449-468
[10]   Multi-Modal Sensor Fusion-Based Deep Neural Network for End-to-End Autonomous Driving With Scene Understanding [J].
Huang, Zhiyu ;
Lv, Chen ;
Xing, Yang ;
Wu, Jingda .
IEEE SENSORS JOURNAL, 2021, 21 (10) :11781-11790