Real-Time Demand Response Management for Controlling Load Using Deep Reinforcement Learning

Times Cited: 0
Authors
Zhao, Yongjiang [1 ]
Yoo, Jae Hung [1 ]
Lim, Chang Gyoon [1 ]
Affiliations
[1] Chonnam Natl Univ, Dept Comp Engn, Yeosu 59626, South Korea
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2022, Vol. 73, Issue 3
Keywords
Demand response; controlling load; SAC; CityLearn
DOI
10.32604/cmc.2022.027443
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
With rapid economic growth and rising living standards, electricity has become an indispensable energy source, making the stability of the grid power supply and the conservation of electricity critical. Two problems stand out: 1) peak power consumption threatens the power grid, and reinforcing the power distribution infrastructure incurs high maintenance costs; 2) users' electricity schedules are often unreasonable due to personal behavior, which wastes electricity. Load control, a vital part of incentive-based demand response (DR), can achieve rapid response and improve demand-side resilience. Under manually formulated rules, selected devices are adjusted during peak consumption periods, but such rule-based methods are difficult to optimize. This paper uses Soft Actor-Critic (SAC) as the control algorithm to optimize the load-control strategy. The results show that coordinated SAC-based load control in CityLearn reduces both peak load demand and operating costs while keeping voltage within safe limits.
Pages: 5671-5686
Number of Pages: 16
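
To make the described approach concrete, the following is a minimal, illustrative sketch of training a Soft Actor-Critic agent on a Gym-style continuous-control environment, standing in for the CityLearn load-control setting. The stable-baselines3 library, the Pendulum-v1 placeholder environment, and all hyperparameters are assumptions for illustration only, not the paper's actual configuration.

```python
# Minimal sketch: off-policy SAC training loop via stable-baselines3.
# Assumes a single-agent, Gym-compatible environment; the paper's exact
# CityLearn schema, observation/action spaces, and reward shaping are
# not reproduced here.
import gymnasium as gym
from stable_baselines3 import SAC

# Placeholder continuous-control environment; substitute the actual
# CityLearn load-control environment used in the paper.
env = gym.make("Pendulum-v1")

model = SAC(
    "MlpPolicy",          # feed-forward actor and critic networks
    env,
    learning_rate=3e-4,   # common SAC default, not taken from the paper
    buffer_size=100_000,  # replay buffer for off-policy updates
    verbose=1,
)
model.learn(total_timesteps=50_000)

# Deterministic rollout with the learned policy.
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

In the paper's setting, the environment would be CityLearn, the actions would adjust controllable loads, and the reward would penalize peak demand, operating cost, and voltage excursions, as summarized in the abstract.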