A stochastic track maintenance scheduling model based on deep reinforcement learning approaches

Cited by: 9
Authors
Lee, Jun S. [1 ]
Yeo, In-Ho [1 ]
Bae, Younghoon [1 ]
Affiliations
[1] Korea Railroad Res Inst, Uiwang Si, South Korea
Keywords
Railway maintenance; Stochastic deterioration model; Deep reinforcement learning; Optimal scheduling; RAILWAY; OPTIMIZATION;
DOI
10.1016/j.ress.2023.109709
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
A data-driven railway track maintenance scheduling framework based on a stochastic track deterioration model and deep reinforcement learning approaches is proposed. Various track conditions, such as track geometry and the support capacity of the infrastructure, are considered in estimating the track deterioration rate, and the resulting track quality index is used to predict the state of each track segment. Further, additional field-specific constraints, including the number of tampings and the latest maintenance time of the ballasted track, are incorporated to reflect field conditions as accurately as possible. From these conditions, the optimal maintenance action for each track segment is determined based on the combined constraints of cost and ride comfort. In the present study, two reinforcement learning (RL) models, namely the Duel Deep Q Network (DuDQN) and Asynchronous Advantage Actor Critic (A3C) models, were employed to establish a decision support system for track maintenance, and the models' advantages and disadvantages were compared. Field application of the models was conducted based on field maintenance data, and the DuDQN model was found to be more suitable in our case. The optimal number of tampings before renewal was determined from the maintenance costs and field conditions, and the cost effect of ride comfort was investigated using the proposed deep RL model. Finally, possible improvements to the models were explored and are briefly outlined herein.
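The abstract frames maintenance scheduling as a sequential decision problem: a stochastically deteriorating track quality index (TQI) per segment, maintenance actions (tamping with limited effect, renewal with full restoration), and a reward trading maintenance cost against ride comfort. A minimal sketch of that formulation is below; it uses tabular Q-learning on a toy MDP as a stand-in for the paper's DuDQN/A3C agents, and all numbers (state count, costs, comfort weight, deterioration increments) are hypothetical, not taken from the paper.

```python
import random

# Toy stand-in for the paper's setup: tabular Q-learning instead of DuDQN/A3C,
# with invented costs and deterioration dynamics. Illustrative only.

N_STATES = 10          # discretised track quality index: 0 (best) .. 9 (worst)
ACTIONS = [0, 1, 2]    # 0 = do nothing, 1 = tamping, 2 = renewal
TAMP_COST, RENEW_COST, COMFORT_W = 1.0, 8.0, 0.5

def step(state, action, rng):
    """Stochastic deterioration with maintenance actions."""
    if action == 2:                                   # renewal: full restoration
        nxt, cost = 0, RENEW_COST
    elif action == 1:                                 # tamping: partial restoration
        nxt, cost = max(0, state - 4), TAMP_COST
    else:                                             # random degradation step
        nxt, cost = min(N_STATES - 1, state + rng.choice([0, 1, 1, 2])), 0.0
    # reward trades maintenance cost against a ride-comfort penalty
    return nxt, -cost - COMFORT_W * nxt

def train(episodes=2000, horizon=50, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)                   # random initial condition
        for _ in range(horizon):
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda a: q[s][a])
            s2, r = step(s, a, rng)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])  # TD update
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: row[a]) for row in q]
# expected pattern: no maintenance for good track, tamping/renewal as TQI worsens
```

A deep RL agent such as DuDQN replaces the Q-table with a neural network so that richer, continuous state descriptions (geometry measurements, support capacity, tamping history) can be used, but the state-action-reward structure is the same.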
Pages: 12
Related papers
(50 records total)
  • [41] Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks
    Qi, Fan
    Li Zhuo
    Chen Xin
    2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 835 - 840
  • [42] Deep Reinforcement Learning-Based Workload Scheduling for Edge Computing
    Tao Zheng
    Jian Wan
    Jilin Zhang
    Congfeng Jiang
    Journal of Cloud Computing, 11
  • [43] Deep Reinforcement Learning-Based Workload Scheduling for Edge Computing
    Zheng, Tao
    Wan, Jian
    Zhang, Jilin
    Jiang, Congfeng
    JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2022, 11 (01):
  • [44] Dynamic flexible job shop scheduling based on deep reinforcement learning
    Yang, Dan
    Shu, Xiantao
    Yu, Zhen
    Lu, Guangtao
    Ji, Songlin
    Wang, Jiabing
    He, Kongde
    PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART B-JOURNAL OF ENGINEERING MANUFACTURE, 2024,
  • [45] Deep Reinforcement Learning for Scheduling in Cellular Networks
    Wang, Jian
    Xu, Chen
    Huangfu, Yourui
    Li, Rong
    Ge, Yiqun
    Wang, Jun
    2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019,
  • [46] Deep Reinforcement Learning for Job Scheduling on Cluster
    Yao, Zhenjie
    Chen, Lan
    Zhang, He
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 613 - 624
  • [47] Multi-User Delay-Constrained Scheduling With Deep Recurrent Reinforcement Learning
    Hu, Pihe
    Chen, Yu
    Pan, Ling
    Fang, Zhixuan
    Xiao, Fu
    Huang, Longbo
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (03) : 2344 - 2359
  • [48] Model-Free Real-Time EV Charging Scheduling Based on Deep Reinforcement Learning
    Wan, Zhiqiang
    Li, Hepeng
    He, Haibo
    Prokhorov, Danil
    IEEE TRANSACTIONS ON SMART GRID, 2019, 10 (05) : 5246 - 5257
  • [49] Proposing a model based on deep reinforcement learning for real-time scheduling of collaborative customization remanufacturing
    Yazdanparast, Seyed Ali
    Zegordi, Seyed Hessameddin
    Khatibi, Toktam
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2025, 94
  • [50] FiDRL: Flexible Invocation-Based Deep Reinforcement Learning for DVFS Scheduling in Embedded Systems
    Li, Jingjin
    Jiang, Weixiong
    He, Yuting
    Yang, Qingyu
    Gao, Anqi
    Ha, Yajun
    Ozcan, Ender
    Bai, Ruibin
    Cui, Tianxiang
    Yu, Heng
    IEEE TRANSACTIONS ON COMPUTERS, 2025, 74 (01) : 71 - 85