Teleconsultation dynamic scheduling with a deep reinforcement learning approach

Times Cited: 0
Authors
Chen, Wenjia [1 ]
Li, Jinlin [2 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Sch Econ & Management, Beijing 100192, Peoples R China
[2] Beijing Inst Technol, Sch Management & Econ, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Teleconsultation scheduling; Markov decision process (MDP); Deep reinforcement learning; Deep Q-network (DQN); TELEMEDICINE; MODEL; OPTIMIZATION; UNCERTAINTY; DEMAND;
DOI
10.1016/j.artmed.2024.102806
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this study, the start times of teleconsultations are optimized for the clinical departments of class A tertiary hospitals to improve service quality and efficiency. For this purpose, a general teleconsultation scheduling model is first formulated. In the formulation, the number of services (NS) is one of the objectives because of demand intermittency and service mobility: demand intermittency means that demand is zero in several periods, and service mobility means that specialists move between their clinical departments and the National Telemedicine Center of China to provide the service. For problem solving, the general model is converted into a Markov decision process (MDP) by carefully defining the state, action, and reward. To solve the MDP, deep reinforcement learning (DRL) is applied to overcome the problem of inaccurate transition probabilities. To reduce the dimension of the state-action space, a semi-fixed policy is developed and combined with the deep Q-network (DQN) to construct the DQN with a semi-fixed policy (DQN-S) algorithm. For efficient fitting, an early-stop strategy is applied in DQN-S training. To verify the effectiveness of the proposed scheduling model and the solution method DQN-S, scheduling experiments are carried out on actual data of teleconsultation demand arrivals and service arrangements. The results show that DQN-S can improve the quality and efficiency of teleconsultations, reducing the average demand waiting time by 9%-41%, the number of services by 3%-42%, and the total cost of services by 3%-33%.
Pages: 18
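
The abstract outlines the solution pipeline (an MDP with explicitly defined state, action, and reward; a DQN whose action choice is restricted by a semi-fixed policy; and an early-stop rule during training) without giving implementation details. The PyTorch sketch below is purely illustrative and is not the authors' DQN-S: the network architecture, the reading of the semi-fixed policy as a feasibility mask over candidate start times, and the EarlyStop patience rule are assumptions introduced here.

    import random
    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        """Q-value approximator: maps a scheduling state to Q-values over candidate start times."""
        def __init__(self, state_dim, n_actions, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    def select_action(qnet, state, feasible_mask, epsilon):
        """Epsilon-greedy choice restricted to feasible actions.
        The boolean mask stands in for the paper's semi-fixed policy, assumed here to
        prune start times that violate specialist availability, so the DQN only
        chooses among the remaining (much smaller) action set."""
        feasible = torch.nonzero(feasible_mask, as_tuple=False).flatten()
        if random.random() < epsilon:
            return feasible[random.randrange(len(feasible))].item()
        with torch.no_grad():
            q = qnet(state.unsqueeze(0)).squeeze(0)
        q[~feasible_mask] = -float("inf")   # never pick an infeasible start time
        return int(torch.argmax(q).item())

    def train_step(qnet, target_net, optimizer, batch, gamma=0.99):
        """One DQN update on a minibatch of (s, a, r, s', done) transitions."""
        s, a, r, s2, done = batch
        q_sa = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    class EarlyStop:
        """Stop training once the evaluation reward has not improved for `patience` checks
        (an assumed form of the early-stop strategy mentioned in the abstract)."""
        def __init__(self, patience=10):
            self.patience, self.best, self.bad = patience, -float("inf"), 0

        def step(self, eval_reward):
            if eval_reward > self.best:
                self.best, self.bad = eval_reward, 0
            else:
                self.bad += 1
            return self.bad >= self.patience

In a full training loop, select_action would generate transitions from a simulator of teleconsultation demand arrivals, train_step would be called on minibatches drawn from a replay buffer, and EarlyStop would terminate training when the evaluation reward plateaus; all of those surrounding components are standard DQN machinery rather than details taken from the paper.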