Teleconsultation dynamic scheduling with a deep reinforcement learning approach

Cited by: 0
|
Authors
Chen, Wenjia [1 ]
Li, Jinlin [2 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Sch Econ & Management, Beijing 100192, Peoples R China
[2] Beijing Inst Technol, Sch Management & Econ, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Teleconsultation scheduling; Markov decision process (MDP); Deep reinforcement learning; Deep Q-network (DQN); TELEMEDICINE; MODEL; OPTIMIZATION; UNCERTAINTY; DEMAND;
DOI
10.1016/j.artmed.2024.102806
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this study, the start times of teleconsultations are optimized for the clinical departments of Class A tertiary hospitals to improve service quality and efficiency. For this purpose, a general teleconsultation scheduling model is first formulated. In the formulation, the number of services (NS) is one of the objectives because of demand intermittency and service mobility. Demand intermittency means that demand is zero in several periods; service mobility means that specialists move between clinical departments and the National Telemedicine Center of China to provide the service. For problem solving, the general model is converted into a Markov decision process (MDP) by carefully defining the state, action, and reward. To solve the MDP, deep reinforcement learning (DRL) is applied to overcome the problem of inaccurate transition probabilities. To reduce the dimensionality of the state-action space, a semi-fixed policy is developed and applied to the deep Q-network (DQN), yielding an algorithm called DQN with a semi-fixed policy (DQN-S). For efficient fitting, an early-stop strategy is applied in DQN-S training. To verify the effectiveness of the proposed scheduling model and the solution method DQN-S, scheduling experiments are carried out on actual data of teleconsultation demand arrivals and service arrangements. The results show that DQN-S improves the quality and efficiency of teleconsultations, reducing the average demand waiting time by 9%-41%, the number of services by 3%-42%, and the total cost of services by 3%-33%.
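The paper's DQN-S algorithm and its exact state, action, and reward definitions are not reproduced in this record. As a rough, self-contained illustration of the MDP framing the abstract describes (intermittent demand, a fixed cost per service start capturing specialist mobility, and a per-period waiting penalty), a tabular Q-learning sketch on a toy version of the problem might look like the following. All parameters, the two-action space, and the batch-service assumption are invented for illustration only:

```python
import random
from collections import defaultdict

# Toy teleconsultation scheduling MDP (illustrative; the paper's actual
# state, action, and reward definitions are far richer).
# State: (period, queued demands). Action: 0 = wait, 1 = start a service
# (a specialist travels to the department and serves the whole queue).
# The reward penalizes both waiting demands and each service start,
# mirroring the joint waiting-time / number-of-services objective.

HORIZON = 8          # decision periods per episode
ARRIVAL_P = 0.5      # intermittent demand: zero arrivals in many periods
WAIT_COST = 1.0      # penalty per queued demand per period
SERVICE_COST = 2.0   # fixed penalty per service start (specialist mobility)

def step(state, action, rng):
    period, queue = state
    reward = 0.0
    if action == 1 and queue > 0:
        reward -= SERVICE_COST        # one more service started
        queue = 0                     # the batch serves all queued demands
    reward -= WAIT_COST * queue       # remaining demands keep waiting
    queue += rng.random() < ARRIVAL_P # intermittent arrival (0 or 1 demand)
    return (period + 1, queue), reward

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)            # tabular stand-in for the Q-network
    for _ in range(episodes):
        state = (0, 0)
        while state[0] < HORIZON:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[(state, x)])
            nxt, r = step(state, a, rng)
            target = r
            if nxt[0] < HORIZON:      # bootstrap unless the episode ends
                target += gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
            Q[(state, a)] += alpha * (target - Q[(state, a)])
            state = nxt
    return Q

Q = train()
```

Even this toy version exhibits the trade-off the abstract highlights: because a service start has a fixed cost regardless of batch size, the learned policy tends to batch demands rather than dispatch a specialist for every arrival, which is exactly why the number of services appears as an objective alongside waiting time.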
Pages: 18