Deep reinforcement learning towards real-world dynamic thermal management of data centers

Cited by: 17
Authors
Zhang, Qingang [1 ]
Zeng, Wei [1 ]
Lin, Qinjie [1 ]
Chng, Chin-Boon [1 ]
Chui, Chee-Kong [1 ]
Lee, Poh-Seng [1 ]
Affiliations
[1] Natl Univ Singapore, Dept Mech Engn, Singapore 117575, Singapore
Keywords
Data Center; Dynamic Thermal Management; Deep Reinforcement Learning; Machine Learning; Power Consumption; HVAC Control; Systems; Model
DOI
10.1016/j.apenergy.2022.120561
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Codes
0807; 0820;
Abstract
Deep Reinforcement Learning has been increasingly researched for Dynamic Thermal Management in Data Centers. However, existing works typically evaluate the performance of algorithms on a specific task, utilizing models or data trajectories without discussing in detail their implementation feasibility or their ability to handle diverse work scenarios. This gap limits the real-world deployment of Deep Reinforcement Learning. To this end, this paper comprehensively evaluates the strengths and limitations of state-of-the-art algorithms through analytical and numerical studies. The analysis covers four dimensions: algorithms, tasks, system dynamics, and knowledge transfer. As an inherent property, sensitivity to algorithm settings is first evaluated in a simulated data center model. The ability to deal with various tasks and the sensitivity to reward functions are subsequently studied. The trade-off between constraint satisfaction and power savings is identified through ablation experiments. Next, performance under different work scenarios is investigated, including various equipment, workload schedules, locations, and power densities. Finally, the transferability of algorithms across tasks and scenarios is evaluated. The results show that actor-critic, off-policy, and model-based algorithms outperform others in optimality, robustness, and transferability. They can reduce violations and achieve around 8.84% power savings in some scenarios compared to the default controller. However, deploying these algorithms in real-world systems is challenging because they are sensitive to specific hyperparameters, reward functions, and work scenarios. Constraint violations and sample efficiency remain unsatisfactory. This paper presents our well-structured investigations, new findings, and challenges when deploying deep reinforcement learning in Data Centers.
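The trade-off between constraint violations and power savings that the abstract highlights is typically encoded in the agent's reward function as a weighted penalty on temperature excursions. A minimal sketch of such a reward is below; the function name, the temperature limit, and the penalty weight are illustrative assumptions, not values taken from the paper:

```python
def dtm_reward(power_kw, rack_temps_c, temp_limit_c=27.0, penalty_weight=10.0):
    """Hypothetical DTM reward: minimize facility power while penalizing
    rack inlet temperatures that exceed a thermal constraint.

    power_kw       -- total cooling + IT power draw at this step (kW)
    rack_temps_c   -- iterable of rack inlet temperatures (deg C)
    temp_limit_c   -- assumed thermal constraint (illustrative)
    penalty_weight -- controls the constraint-vs-savings trade-off
    """
    # Sum of per-rack constraint violations, zero when all racks are safe
    violation = sum(max(0.0, t - temp_limit_c) for t in rack_temps_c)
    # Larger penalty_weight favors safety; smaller favors power savings
    return -power_kw - penalty_weight * violation
```

Sensitivity to `penalty_weight` is one concrete instance of the reward-function sensitivity the study evaluates: an overly small weight trades violations for savings, while an overly large one yields conservative, energy-hungry policies.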
Pages: 24