Automatic Tracking Control Strategy of Autonomous Trains Considering Speed Restrictions: Using the Improved Offline Deep Reinforcement Learning Method

Cited by: 1
Authors
Liu, Wangyang [1 ]
Feng, Qingsheng [1 ]
Xiao, Shuai [1 ]
Li, Hong [2 ]
Affiliations
[1] Dalian Jiaotong Univ, Sch Automat & Elect Engn, Dalian 116028, Peoples R China
[2] Dalian Jiaotong Univ, Sch Software, Dalian 116028, Peoples R China
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Reinforcement learning; Safety; Rail transportation; Target tracking; Collision avoidance; Train tracking; Automatic driving; Offline reinforcement learning; Logic gates; Training; Framework
DOI
10.1109/ACCESS.2024.3405961
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Previous research on the automatic control of high-speed trains in speed-restricted sections is insufficient. This article proposes a new offline reinforcement learning strategy for the automatic tracking control of autonomous trains. First, the operating speed and the deceleration starting point were determined for different speed-restriction scenarios. Then, a tracking controller based on an improved offline conservative Q-learning (CQL) algorithm was designed to avoid frequent interaction between the train and its environment. An appropriate policy was selected to implement the CQL algorithm, the data samples were reclassified to increase sample concentration, and the value and policy network structures were redesigned. The state and action spaces of the tracking train were constrained, and the dimension of the state space was increased. A multi-objective reward function was designed to distinguish the tracking process of trains in different sections. The simulation results show that the proposed automatic control algorithm for high-speed railway tracking intervals outperforms traditional online reinforcement learning methods in terms of safety, comfort, and convergence efficiency.
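The core idea named in the abstract, conservative Q-learning, augments the standard TD update with a penalty that lowers Q-values of actions absent from the logged data, so the policy stays close to the offline dataset. A minimal tabular sketch of that penalty follows; the function name, hyperparameters, and the tabular setting are illustrative assumptions, not details from the paper, which uses value and policy networks.

```python
import numpy as np

def cql_update(Q, s, a, r, s_next, alpha=1.0, gamma=0.99, lr=0.1):
    """One conservative Q-learning step on a tabular Q (states x actions).

    Combines the usual TD step with the CQL penalty: the gradient of
    logsumexp(Q[s]) - Q[s, a] is softmax(Q[s]) minus an indicator on the
    logged action, so unseen actions are pushed down and the logged
    action is pushed up.
    """
    # standard TD error toward the greedy bootstrap target
    td_target = r + gamma * np.max(Q[s_next])
    td_error = td_target - Q[s, a]

    # conservative penalty gradient: softmax over actions minus
    # the indicator of the action actually taken in the dataset
    soft = np.exp(Q[s] - Q[s].max())
    soft /= soft.sum()
    grad_pen = soft.copy()
    grad_pen[a] -= 1.0

    Q[s] -= lr * alpha * grad_pen   # penalize out-of-distribution actions
    Q[s, a] += lr * td_error        # usual TD correction on the logged pair
    return Q
```

After a single update from a zero-initialized table, the logged action's value exceeds the unseen action's, which is exactly the conservatism the offline setting relies on.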
Pages: 75426-75441
Page count: 16