Deep reinforcement learning with predictive auxiliary task for autonomous train collision avoidance

Cited by: 0
Authors
Plissonneau, Antoine [1 ,2 ]
Jourdan, Luca [1 ]
Trentesaux, Damien [2 ]
Abdi, Lotfi [1 ]
Sallak, Mohamed [1 ,3 ]
Bekrar, Abdelghani [2 ]
Quost, Benjamin [1 ,3 ]
Schoen, Walter [1 ,3 ]
Affiliations
[1] Railenium, Valenciennes, France
[2] Univ Polytech Hauts De France, CNRS, LAMIH, UMR 8201, F-59313 Valenciennes, France
[3] Univ Technol Compiegne, CNRS, Heudiasyc Heurist & Diagnost Syst Complexes, CS 60 319, F-60203 Compiegne, France
Keywords
Autonomous train; Collision avoidance; Deep reinforcement learning; Auxiliary task; Interpretability; NEURAL-NETWORKS; NAVIGATION; LEVEL
DOI
10.1016/j.jrtpm.2024.100453
Chinese Library Classification
U [Transportation]
Subject Classification Codes
08; 0823
Abstract
This paper contributes a deep reinforcement learning (DRL) based method for autonomous train collision avoidance. While DRL applied to collision avoidance for autonomous vehicles has shown promising results compared to traditional methods, train-like vehicles are not currently covered. In addition, DRL applied to collision avoidance suffers from sparse rewards, which can lead to poor convergence and long training times. To overcome these limitations, this paper proposes a method for training a reinforcement learning (RL) agent for collision avoidance using local obstacle information mapped into occupancy grids. The method also integrates a network architecture with a predictive auxiliary task, future state prediction, which encourages the intermediate representation to be predictive of obstacle trajectories. A comparison study conducted on multiple simulated scenarios demonstrates that the trained policy outperforms other deep-learning-based policies as well as human driving in terms of both safety and efficiency. As a first step toward the certification of a DRL-based method, this paper proposes to approximate the policy learned by the RL agent with an interpretable decision tree. Although this approximation results in a loss of performance, it enables a safety analysis of the learned function and thus paves the way to using the strengths of RL in certifiable algorithms. As this work pioneers the use of RL for collision avoidance of rail-guided vehicles, and to facilitate future work by other engineers and researchers, an RL-ready simulator is provided with this paper.
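The architecture described in the abstract, a shared encoder over an occupancy grid feeding both a policy head and an auxiliary head that predicts the future state, can be sketched as follows. All dimensions, weight shapes, and the loss combination here are illustrative assumptions for exposition; the paper's actual network and DRL algorithm are not specified in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 16x16 local occupancy grid and 4 discrete actions.
GRID = 16 * 16
HID = 32
ACTIONS = 4

# Shared encoder producing the intermediate representation.
W_enc = rng.normal(0.0, 0.1, (GRID, HID))
# Policy head: action values over the discrete actions.
W_pi = rng.normal(0.0, 0.1, (HID, ACTIONS))
# Auxiliary head: predicts the next occupancy grid from the same features.
W_aux = rng.normal(0.0, 0.1, (HID, GRID))

def forward(grid_flat):
    """Shared features feed both the policy and the predictive auxiliary task."""
    h = np.tanh(grid_flat @ W_enc)      # shared intermediate representation
    q = h @ W_pi                        # action values for the RL objective
    next_grid_pred = h @ W_aux          # auxiliary future-state prediction
    return q, next_grid_pred

# A sparse binary occupancy grid standing in for local obstacle information.
grid = (rng.random(GRID) > 0.9).astype(float)
q, pred = forward(grid)

# Training would combine the RL loss with this auxiliary prediction loss,
# so gradients from future-state prediction also shape the shared encoder
# (here the current grid stands in for the true next grid).
aux_loss = np.mean((pred - grid) ** 2)
```

The point of the auxiliary term is that even when the RL reward is sparse, the prediction loss provides a dense training signal that pushes the shared representation toward encoding obstacle dynamics.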
Pages: 20
Related Papers
50 items in total
[41] Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning [J]. Na, Seongin; Niu, Hanlin; Lennox, Barry; Arvin, Farshad. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71(03): 2511-2526
[42] Aircraft collision avoidance modeling and optimization using deep reinforcement learning [J]. Park, K.-W.; Kim, J.-H. Journal of Institute of Control, Robotics and Systems, 2021, 27(09): 652-659
[43] Unexpected Collision Avoidance Driving Strategy Using Deep Reinforcement Learning [J]. Kim, Myounghoe; Lee, Seongwon; Lim, Jaehyun; Choi, Jongeun; Kang, Seong Gu. IEEE ACCESS, 2020, 8: 17243-17252
[44] Auxiliary Task-Based Deep Reinforcement Learning for Quantum Control [J]. Zhou, Shumin; Ma, Hailan; Kuang, Sen; Dong, Daoyi. IEEE TRANSACTIONS ON CYBERNETICS, 2025, 55(02): 712-725
[45] Obstacle avoidance planning of autonomous vehicles using deep reinforcement learning [J]. Qian, Yubin; Feng, Song; Hu, Wenhao; Wang, Wanqiu. ADVANCES IN MECHANICAL ENGINEERING, 2022, 14(12)
[46] Deep reinforcement learning with dynamic window approach based collision avoidance path planning for maritime autonomous surface ships [J]. Wu, Chuanbo; Yu, Wangneng; Li, Guangze; Liao, Weiqiang. OCEAN ENGINEERING, 2023, 284
[47] Collision avoidance for a small drone with a monocular camera using deep reinforcement learning in an indoor environment [J]. Kim, M.; Kim, J.; Jung, M.; Oh, H. Journal of Institute of Control, Robotics and Systems, 2020, 26(06): 399-411
[48] Optimizing Multi-Vessel Collision Avoidance Decision Making for Autonomous Surface Vessels: A COLREGs-Compliant Deep Reinforcement Learning Approach [J]. Xie, Weidong; Gang, Longhui; Zhang, Mingheng; Liu, Tong; Lan, Zhixun. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12(03)
[49] Collision-avoidance under COLREGS for unmanned surface vehicles via deep reinforcement learning [J]. Ma, Yong; Zhao, Yujiao; Wang, Yulong; Gan, Langxiong; Zheng, Yuanzhou. MARITIME POLICY & MANAGEMENT, 2020, 47(05): 665-686
[50] Multi-robot Target Encirclement Control with Collision Avoidance via Deep Reinforcement Learning [J]. Ma, Junchong; Lu, Huimin; Xiao, Junhao; Zeng, Zhiwen; Zheng, Zhiqiang. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 99(02): 371-386