An Autonomous Driving Approach Based on Trajectory Learning Using Deep Neural Networks

Cited by: 0
Authors
Dan Wang
Canye Wang
Yulong Wang
Hang Wang
Feng Pei
Institutions
[1] GAC R&D Center; State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body
[2] Hunan University
Source
International Journal of Automotive Technology | 2021, Vol. 22
Keywords
Autonomous driving; Trajectory learning; CNN_Raw-RNN; Pilot and copilot
DOI
Not available
Abstract
Autonomous driving approaches today are based mainly either on modular perception-planning-action pipelines or on the End2End paradigm, which directly maps raw sensor data to vehicle control actions. The End2End strategy is promising and appealing because it avoids complex module design and cumbersome data labeling, but it lacks a degree of interpretability, safety, and practicability. We therefore propose an autonomous driving approach based on trajectory learning using deep neural networks. Compared with the End2End algorithm, the trajectory learning algorithm is found to perform better in autonomous driving. For trajectory learning, we establish the CNN_Raw-RNN network structure, which is verified to be more effective than the original CNN_LSTM structure. In addition, we propose an autonomous driving architecture that combines a pilot and a copilot: the pilot predicts trajectories via imitation learning on labeled driving trajectories, while the copilot is a safety module that verifies the validity of each predicted trajectory using the results of a semantic segmentation auxiliary task. The proposed architecture is verified on a real car over 40 km of urban roads without manual intervention.
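The pilot/copilot division described above can be illustrated with a minimal sketch. This is not the authors' code: the names (`copilot_check`, `DRIVABLE`) and the bird's-eye grid representation are illustrative assumptions. The idea is that the pilot proposes a trajectory as waypoints, and the copilot accepts it only if every waypoint falls on cells the semantic segmentation labels as drivable.

```python
# Hedged sketch of a copilot safety check (assumed representation, not
# the paper's implementation): the trajectory is a list of (row, col)
# waypoints in a bird's-eye grid, and seg_mask is a 2-D list of
# segmentation class IDs produced by the auxiliary task.

DRIVABLE = 1  # class ID assumed to mean "road" in this toy example

def copilot_check(trajectory, seg_mask):
    """Return True if every waypoint lies on a drivable cell of the mask."""
    rows, cols = len(seg_mask), len(seg_mask[0])
    for r, c in trajectory:
        if not (0 <= r < rows and 0 <= c < cols):
            return False  # waypoint leaves the perceived area
        if seg_mask[r][c] != DRIVABLE:
            return False  # waypoint lands on a non-drivable class
    return True

# Toy 4x4 mask: only column 1 is road.
mask = [[1 if c == 1 else 0 for c in range(4)] for r in range(4)]

print(copilot_check([(0, 1), (1, 1), (2, 1)], mask))  # True: stays on road
print(copilot_check([(0, 1), (1, 2)], mask))          # False: leaves road
```

In the paper's architecture, a rejected trajectory would trigger the safety fallback rather than being sent to the vehicle controller; the check here only shows the verification step itself.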
Pages: 1517–1528 (11 pages)