Teleconsultation dynamic scheduling with a deep reinforcement learning approach

Cited by: 0
Authors
Chen, Wenjia [1 ]
Li, Jinlin [2 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Sch Econ & Management, Beijing 100192, Peoples R China
[2] Beijing Inst Technol, Sch Management & Econ, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Teleconsultation scheduling; Markov decision process (MDP); Deep reinforcement learning; Deep Q-network (DQN); TELEMEDICINE; MODEL; OPTIMIZATION; UNCERTAINTY; DEMAND;
DOI
10.1016/j.artmed.2024.102806
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this study, the start times of teleconsultations are optimized for the clinical departments of class A tertiary hospitals to improve service quality and efficiency. For this purpose, a general teleconsultation scheduling model is first formulated. In the formulation, the number of services (NS) is one of the objectives because of demand intermittency and service mobility. Demand intermittency means that demand is zero in several periods. Service mobility means that specialists move between clinical departments and the National Telemedicine Center of China to provide the service. For problem solving, the general model is converted into a Markov decision process (MDP) by carefully defining the state, action, and reward. To solve the MDP, deep reinforcement learning (DRL) is applied to overcome the problem of inaccurate transition probabilities. To reduce the dimensions of the state-action space, a semi-fixed policy is developed and combined with the deep Q-network (DQN) to construct a DQN with a semi-fixed policy (DQN-S) algorithm. For efficient fitting, an early-stop strategy is applied in DQN-S training. To verify the effectiveness of the proposed scheduling model and the solution method DQN-S, scheduling experiments are carried out on actual data of teleconsultation demand arrivals and service arrangements. The results show that DQN-S can improve the quality and efficiency of teleconsultations, reducing the average demand waiting time by 9%-41%, the number of services by 3%-42%, and the total cost of services by 3%-33%.
Pages: 18
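
The abstract gives no implementation details, so the following is only a minimal, hypothetical Python sketch of the kind of method it describes: a DQN whose epsilon-greedy policy is restricted to a state-dependent admissible action set (one reading of the "semi-fixed policy" used to shrink the state-action space), trained with an early-stop rule that halts once the loss stops improving. The environment, state and action encodings, the admissibility rule, and all hyperparameters are assumptions, not the authors' code.

```python
# Hypothetical sketch of a DQN with a "semi-fixed" (restricted-action) policy and
# early stopping, loosely following the abstract. All names, sizes, and rules below
# are assumptions for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 8          # assumed sizes of the scheduling MDP
GAMMA, EPS, PATIENCE = 0.99, 0.1, 20  # assumed discount, exploration rate, early-stop patience

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)         # replay buffer of (s, a, r, s', done) tuples

def admissible_actions(state):
    """Semi-fixed policy: prune the action set per state to shrink the
    state-action space (placeholder rule; the paper's rule is not given)."""
    return list(range(N_ACTIONS // 2)) if state[0] > 0 else list(range(N_ACTIONS))

def select_action(state):
    """Epsilon-greedy choice restricted to the admissible actions."""
    acts = admissible_actions(state)
    if random.random() < EPS:
        return random.choice(acts)
    with torch.no_grad():
        q = q_net(torch.tensor(state, dtype=torch.float32))
    return max(acts, key=lambda a: q[a].item())

def train_step(batch_size=32):
    """One TD(0) update on a minibatch sampled from the replay buffer."""
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q_sa = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + GAMMA * q_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Early stop: halt training once the loss has not improved for PATIENCE updates.
best, stale = float("inf"), 0
# while episodes remain and len(replay) >= 32:
#     loss = train_step()
#     best, stale = (loss, 0) if loss < best - 1e-4 else (best, stale + 1)
#     if stale >= PATIENCE:
#         break
```

In an actual scheduler, admissible_actions would encode the semi-fixed rules for which start-time slots may be chosen in a given state, and the commented loop would interact with a simulator driven by the demand-arrival and service-arrangement data mentioned in the abstract.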