Real-Time Demand Response Management for Controlling Load Using Deep Reinforcement Learning

Cited by: 0
Authors
Zhao, Yongjiang [1 ]
Yoo, Jae Hung [1 ]
Lim, Chang Gyoon [1 ]
Affiliations
[1] Chonnam Natl Univ, Dept Comp Engn, Yeosu 59626, South Korea
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2022, Vol. 73, No. 3
Keywords
Demand response; controlling load; SAC; CityLearn
DOI
10.32604/cmc.2022.027443
CLC number
TP [Automation technology and computer technology]
Discipline code
0812
Abstract
With rapid economic growth and rising living standards, electricity has become an indispensable energy source, so a stable grid power supply and the conservation of electricity are critical. Two problems stand out: 1) peak power consumption threatens the power grid, and reinforcing the power distribution infrastructure incurs high maintenance costs; 2) users' electricity schedules are often unreasonable because of personal behavior, which wastes electricity. Controlling load, a vital part of incentive-based demand response (DR), enables rapid response and improves demand-side resilience. When load is maintained by manually formulated rules, only selected devices are adjusted during peak consumption, and such rule-based methods are difficult to optimize. This paper uses Soft Actor-Critic (SAC) as the control algorithm to optimize the control strategy. The results show that coordinating load control with SAC in CityLearn reduces both peak load demand and operating costs while keeping voltage within safe limits.
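To make the setup concrete, the following is a minimal sketch (not the authors' code) of training a Soft Actor-Critic agent with the off-the-shelf stable-baselines3 implementation. Because this record does not specify the CityLearn version or building schema used in the paper, the standard Pendulum-v1 task stands in as a placeholder continuous-control environment; a CityLearn environment wrapped to expose a flat Box observation/action space would be substituted in the paper's setting:

import gymnasium as gym
from stable_baselines3 import SAC

# Placeholder continuous-control environment; in the paper's setting this would
# be a CityLearn building/load environment exposing flat Box spaces (assumption).
env = gym.make("Pendulum-v1")

# Off-policy SAC with stable-baselines3 defaults (automatic entropy tuning).
model = SAC("MlpPolicy", env, learning_rate=3e-4, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=10_000)

# Roll out the learned policy deterministically.
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()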
Pages: 5671-5686
Number of pages: 16