Reinforcement Learning Testbed for Power-Consumption Optimization

Cited by: 58
Authors
Moriyama, Takao [1 ]
De Magistris, Giovanni [1 ]
Tatsubori, Michiaki [1 ]
Pham, Tu-Hoa [1 ]
Munawar, Asim [1 ]
Tachibana, Ryuki [1 ]
Affiliations
[1] IBM Res Tokyo, Tokyo, Japan
Source
METHODS AND APPLICATIONS FOR MODELING AND SIMULATION OF COMPLEX SYSTEMS | 2018 / Volume 946
Keywords
Reinforcement learning; Power consumption; Data center;
DOI
10.1007/978-981-13-2853-4_4
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Common approaches to controlling a data-center cooling system rely on approximate system/environment models built on knowledge of mechanical cooling and electrical and thermal management. These models are difficult to design and often lead to suboptimal or unstable performance. In this paper, we show how deep reinforcement learning techniques can be used to control the cooling system of a simulated data center. In contrast to common control algorithms, those based on reinforcement learning can optimize a system's performance automatically, without the need for explicit model knowledge; only a reward signal needs to be designed. We evaluated the proposed algorithm on the open-source simulation platform EnergyPlus. The experimental results indicate a 22% improvement compared to a model-based control algorithm built into EnergyPlus. To encourage reproduction of our work as well as future research, we have also publicly released an open-source EnergyPlus wrapper interface (https://github.com/IBM/rl-testbed-for-energyplus) directly compatible with existing reinforcement learning frameworks.
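The released wrapper exposes the simulated data-center cooling system as a reinforcement-learning environment. The following is a minimal sketch of how such a Gym-style environment might be driven by an agent; the environment id "EnergyPlus-v0", the observation/action semantics, and the use of a random policy are illustrative assumptions and are not taken from the paper.

# Minimal sketch: driving a Gym-style EnergyPlus environment with a random policy.
# The environment id and observation/action semantics below are assumptions for
# illustration; consult the released wrapper for the actual interface.
import gym

env = gym.make("EnergyPlus-v0")    # assumed id registered by the EnergyPlus wrapper
obs = env.reset()                  # e.g. zone temperatures, outdoor temperature, power
episode_reward = 0.0
done = False
while not done:
    # A trained agent (e.g. a policy from an existing RL framework) would replace
    # this random policy.
    action = env.action_space.sample()           # e.g. cooling setpoints, fan flow rates
    obs, reward, done, info = env.step(action)   # reward trades off power use and temperature
    episode_reward += reward
env.close()
print("Episode reward:", episode_reward)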
Pages: 45-59
Page count: 15