Research on overall energy consumption optimization method for data center based on deep reinforcement learning

Cited by: 1
Authors
Wang Simin [1 ,2 ]
Qin Lulu [1 ]
Ma Chunmiao [1 ]
Wu Weiguo [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[2] Xian Polytech Univ, Xian, Peoples R China
Keywords
Energy consumption; data center; job scheduling; cooling system; deep reinforcement learning; multi-agent system; DVFS;
DOI
10.3233/JIFS-223769
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
With the rapid development of cloud computing, large-scale data centers are becoming more numerous, which makes their energy management increasingly complex. Achieving better energy savings requires solving the problems of concurrent management and interdependence among IT, cooling, storage, and network equipment. Reinforcement learning, which learns by interacting with the environment, is well suited to autonomous data center management. In this paper, an overall energy consumption optimization method for data centers based on deep reinforcement learning is proposed to achieve collaborative energy saving between task scheduling and cooling equipment. A new multi-agent architecture separates the training process from the execution process, simplifying interaction during system operation and improving operational performance. In the deep learning stage, a hybrid deep Q-network algorithm is proposed to optimize the joint action value function of the data center and obtain the optimal strategy. Experiments show that, compared with other reinforcement learning methods, our method not only reduces the energy consumption of the data center but also reduces the frequency of hot spots.
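The paper's hybrid deep Q-network is not reproduced in this record. As an illustration of the joint action-value idea the abstract describes (one value function over the combined choices of a job-scheduling agent and a cooling agent), the following is a minimal tabular Q-learning sketch. All state names, actions, and the reward model are hypothetical stand-ins, not the paper's formulation.

```python
import random

# Toy sketch (not the paper's algorithm): the "scheduler" and "cooling"
# agents share one joint action-value table, mirroring the idea of
# optimizing a joint action value function for the whole data center.
STATES = ["low_load", "high_load"]      # coarse data-center load (illustrative)
SCHED_ACTIONS = ["pack", "spread"]      # hypothetical job placement choices
COOL_ACTIONS = ["eco", "boost"]         # hypothetical cooling intensity

def reward(state, sched, cool):
    # Hypothetical reward: negative energy cost plus a hotspot penalty
    # when jobs are packed densely under high load with weak cooling.
    energy = {"eco": 1.0, "boost": 2.0}[cool]
    hotspot = 3.0 if (state == "high_load" and sched == "pack" and cool == "eco") else 0.0
    return -(energy + hotspot)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # One Q-table over the JOINT action space of both agents.
    q = {(s, a, c): 0.0 for s in STATES for a in SCHED_ACTIONS for c in COOL_ACTIONS}
    state = "low_load"
    for _ in range(episodes):
        if rng.random() < eps:  # epsilon-greedy exploration
            act = (rng.choice(SCHED_ACTIONS), rng.choice(COOL_ACTIONS))
        else:
            act = max(((a, c) for a in SCHED_ACTIONS for c in COOL_ACTIONS),
                      key=lambda ac: q[(state, *ac)])
        r = reward(state, *act)
        next_state = rng.choice(STATES)  # toy random load transition
        best_next = max(q[(next_state, a, c)]
                        for a in SCHED_ACTIONS for c in COOL_ACTIONS)
        # Standard Q-learning update on the joint action value.
        q[(state, *act)] += alpha * (r + gamma * best_next - q[(state, *act)])
        state = next_state
    return q

if __name__ == "__main__":
    q = train()
    for s in STATES:
        best = max(((a, c) for a in SCHED_ACTIONS for c in COOL_ACTIONS),
                   key=lambda ac: q[(s, *ac)])
        print(s, "->", best)
```

Under this toy reward, the learned joint policy avoids the hotspot-inducing combination (dense packing with weak cooling under high load), loosely analogous to the paper's goal of jointly reducing energy use and hotspot frequency. The paper itself uses deep networks rather than a table, and a far richer state and action space.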
Pages: 7333-7349
Page count: 17
Related Papers (50 total)
  • [21] A Deep Learning Method for Monitoring Vehicle Energy Consumption with GPS Data
    Ko, Kwangho
    Lee, Tongwon
    Jeong, Seunghyun
    SUSTAINABILITY, 2021, 13 (20)
  • [22] Optimization method of CNC milling parameters based on deep reinforcement learning
    Deng Q.-L.
    Lu J.
    Chen Y.-H.
    Feng J.
    Liao X.-P.
    Ma J.-Y.
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2022, 56 (11): 2145-2155
  • [23] Research and Application of Predictive Control Method Based on Deep Reinforcement Learning for HVAC Systems
    Fu, Chenhui
    Zhang, Yunhua
    IEEE ACCESS, 2021, 9: 130845-130852
  • [24] Energy Management and Optimization of Multi-energy Grid Based on Deep Reinforcement Learning
    Liu J.
    Chen J.
    Wang X.
    Zeng J.
    Huang Q.
    Dianwang Jishu/Power System Technology, 2020, 44 (10): 3794-3803
  • [25] Research on Robot Intelligent Control Method Based on Deep Reinforcement Learning
    Rao, Shu
    2022 6TH INTERNATIONAL SYMPOSIUM ON COMPUTER SCIENCE AND INTELLIGENT CONTROL, ISCSIC, 2022: 221-225
  • [26] A Typhoon Center Location Method on Satellite Images Based on Deep Reinforcement Learning
    Wang, Ping
    Yang, Xin
    Ji, Zhong
    Hou, Jinyi
    Wang, Cong
    Chen, Haoyi
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021: 7046-7053
  • [27] Machine Learning-based Energy Consumption Model for Data Center
    Qiao, Lin
    Yu, Yuanqi
    Wang, Qun
    Zhang, Yu
    Song, Yueming
    Yu, Xiaosheng
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023: 3051-3055
  • [28] Balancing Energy Consumption and Thermal Comfort with Deep Reinforcement Learning
    Cicirelli, Franco
    Guerrieri, Antonio
    Mastroianni, Carlo
    Scarcello, Luigi
    Spezzano, Giandomenico
    Vinci, Andrea
    PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2021: 295-300
  • [29] An Optimization Method for Non-IID Federated Learning Based on Deep Reinforcement Learning
    Meng, Xutao
    Li, Yong
    Lu, Jianchao
    Ren, Xianglin
    SENSORS, 2023, 23 (22)
  • [30] An Integrated Energy Service Channel Optimization Mechanism Based on Deep Reinforcement Learning
    Ma Q.-L.
    Yu P.
    Wu J.-H.
    Xiong A.
    Yan Y.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2020, 43 (02): 87-93