A deep reinforcement learning-based approach for the residential appliances scheduling

Cited by: 9
Authors
Li, Sichen [1 ]
Cao, Di [1 ]
Huang, Qi [1 ,2 ]
Zhang, Zhenyuan [1 ]
Chen, Zhe [3 ]
Blaabjerg, Frede [3 ]
Hu, Weihao [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu, Peoples R China
[2] Chengdu Univ Technol, Sch Energy, Chengdu, Peoples R China
[3] Aalborg Univ, Dept Energy Technol, Aalborg, Denmark
Keywords
Demand response; Residential appliances scheduling; Deep reinforcement learning; LEVEL CONTROL; MANAGEMENT;
DOI
10.1016/j.egyr.2022.02.181
Chinese Library Classification (CLC)
TE [Petroleum and natural gas industry]; TK [Energy and power engineering];
Discipline codes
0807; 0820;
Abstract
This paper investigates the optimal real-time scheduling of residential appliances for an individual owner participating in a demand response (DR) program. The proposed method is novel in that it casts the optimization problem into an intelligent deep reinforcement learning (DRL) framework, which avoids solving a specific optimization model directly under the dynamic operating conditions induced by outdoor temperature, electricity price, and resident behavior. We consider the scheduling of power-shiftable, time-shiftable, and deferrable appliances to jointly optimize the resident's profit and satisfaction rate. The optimization problem is first modeled as a Markov decision process and then solved by a model-free, entropy-based DRL algorithm. Unlike traditional model-based methods, which rely on accurate knowledge of parameters and physical models that are difficult to obtain in practice, the proposed method develops near-optimal real-time control behavior by interacting with the environment and learning from data, thereby avoiding the errors caused by the simplifications and assumptions made when building a physical model. Owing to the introduction of the entropy term, the proposed scheduling algorithm also achieves a better tradeoff between profit and satisfaction rate than a deterministic DRL algorithm. Simulation results using real-world data demonstrate the effectiveness of the proposed method. (c) 2022 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the 2nd International Conference on Power Engineering, ICPE 2021.
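The entropy term the abstract refers to can be illustrated with a minimal sketch of an entropy-regularized (maximum-entropy) objective over a discrete action set, in the spirit of soft actor-critic. The Q-values and the interpretation of actions as power levels of a power-shiftable appliance are hypothetical illustrations, not taken from the paper:

```python
import math

def soft_policy(q_values, alpha):
    """Boltzmann (max-entropy) policy: pi(a) proportional to exp(Q(a)/alpha)."""
    scaled = [q / alpha for q in q_values]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def soft_objective(q_values, alpha):
    """Entropy-regularized objective J(pi) = E_pi[Q] + alpha * H(pi)."""
    pi = soft_policy(q_values, alpha)
    expected_q = sum(p * q for p, q in zip(pi, q_values))
    entropy = -sum(p * math.log(p) for p in pi if p > 0.0)
    return expected_q + alpha * entropy, pi

# Hypothetical Q-values for three discrete power levels of one appliance
# (e.g., profit minus discomfort penalty for each setting).
q = [1.0, 0.9, 0.2]

for alpha in (0.05, 0.5):
    j, pi = soft_objective(q, alpha)
    print(f"alpha={alpha}: policy={[round(p, 3) for p in pi]}, J={round(j, 3)}")
```

A small temperature `alpha` recovers a near-greedy (deterministic) policy, while a larger `alpha` spreads probability over near-optimal actions, which is the mechanism behind the better profit/satisfaction tradeoff the abstract claims over a deterministic DRL algorithm.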
Pages: 1034-1042
Page count: 9
References
24 items in total
  • [1] Ahrarinouri M, 2020, IEEE T IND INF
  • [2] Optimal Smart Home Energy Management Considering Energy Saving and a Comfortable Lifestyle
    Anvari-Moghaddam, Amjad
    Monsef, Hassan
    Rahimi-Kian, Ashkan
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2015, 6 (01) : 324 - 332
  • [3] Tapping the energy storage potential in electric loads to deliver load following and regulation, with application to wind energy
    Callaway, Duncan S.
    [J]. ENERGY CONVERSION AND MANAGEMENT, 2009, 50 (05) : 1389 - 1400
  • [4] Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review
    Cao, Di
    Hu, Weihao
    Zhao, Junbo
    Zhang, Guozhou
    Zhang, Bin
    Liu, Zhou
    Chen, Zhe
    Blaabjerg, Frede
    [J]. JOURNAL OF MODERN POWER SYSTEMS AND CLEAN ENERGY, 2020, 8 (06) : 1029 - 1042
  • [5] A Multi-Agent Deep Reinforcement Learning Based Voltage Regulation Using Coordinated PV Inverters
    Cao, Di
    Hu, Weihao
    Zhao, Junbo
    Huang, Qi
    Chen, Zhe
    Blaabjerg, Frede
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2020, 35 (05) : 4120 - 4123
  • [6] Fujimoto S, 2018, PR MACH LEARN RES, V80
  • [7] Haarnoja T, 2018, PR MACH LEARN RES, V80
  • [8] A self-learning scheme for residential energy system control and management
    Huang, Ting
    Liu, Derong
    [J]. NEURAL COMPUTING & APPLICATIONS, 2013, 22 (02) : 259 - 269
  • [9] Chance Constrained Optimization in a Home Energy Management System
    Huang, Yantai
    Wang, Lei
    Guo, Weian
    Kang, Qi
    Wu, Qidi
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2018, 9 (01) : 252 - 260
  • [10] Low-Level Control of a Quadrotor With Deep Model-Based Reinforcement Learning
    Lambert, Nathan O.
    Drew, Daniel S.
    Yaconelli, Joseph
    Levine, Sergey
    Calandra, Roberto
    Pister, Kristofer S. J.
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (04) : 4224 - 4230