Innovative energy solutions: Evaluating reinforcement learning algorithms for battery storage optimization in residential settings

Cited by: 0
Authors
Dou, Zhenlan [1 ]
Zhang, Chunyan [1 ]
Li, Junqiang [2 ]
Li, Dezhi [3 ]
Wang, Miao [3 ]
Sun, Lue [3 ]
Wang, Yong [2 ]
Affiliations
[1] State Grid Shanghai Municipal Elect Power Co, Shanghai 200122, Peoples R China
[2] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Peoples R China
[3] China Elect Power Res Inst, Beijing Key Lab Demand Side Multienergy Carriers O, Beijing 100192, Peoples R China
Keywords
Reinforcement learning; Optimal control; Operation scheduling; Building energy management; Energy storage; Solar PV system; SYSTEM; MANAGEMENT; OPERATION; BEHAVIOR; BIOMASS
DOI
10.1016/j.psep.2024.09.123
Chinese Library Classification
X [Environmental science; safety science];
Subject classification code
08; 0830;
Abstract
The implementation of battery energy storage systems (BESS) and the efficient optimization of their scheduling are crucial research challenges in managing the intermittency and volatility of solar photovoltaic (PV) systems. Nevertheless, a review of the existing literature reveals notable gaps in the optimal scheduling of energy systems: most models concentrate on a single objective, and only a few tackle the intricacies of multi-objective scenarios. This study examines grid-connected homes equipped with a BESS and a solar PV system. It leverages four distinct reinforcement learning (RL) algorithms, selected for their differing training methodologies, to develop effective scheduling models. The findings demonstrate that the RL model using Trust Region Policy Optimization (TRPO) effectively manages the BESS and PV system despite real-world uncertainties, and the case study confirms the suitability and effectiveness of this approach. The TRPO-based RL framework surpasses previous models in decision-making by choosing optimal BESS scheduling strategies. The TRPO model exhibited the highest mean self-sufficiency rate, surpassing the A3C (Asynchronous Advantage Actor-Critic), DDPG (Deep Deterministic Policy Gradient), and TAC (Twin Actor-Critic) models by ~3%, 0.72%, and 3.5%, respectively. This yields enhanced autonomy and economic benefits by adapting to dynamic real-world conditions. The framework is primarily intended for seamless integration into an automated energy-plant environment, facilitating regular electricity trading among multiple buildings. Backed by initiatives such as the Renewable Energy Certificate weight, this technology is expected to play a crucial role in balancing power generation and consumption.
The MILP (Mixed Integer Linear Programming) architecture achieved a self-sufficiency rate of 29.12%, surpassing the rates of A3C, TRPO, DDPG, and TAC by 2.48%, 0.64%, 2%, and 3.04%, respectively.
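The self-sufficiency rate the abstract compares across TRPO, A3C, DDPG, TAC, and MILP can be illustrated with a minimal sketch. The paper's RL schedulers are not reproduced here; the code below uses a simple rule-based dispatch baseline (charge the battery from surplus PV, discharge to cover deficits) with hypothetical hourly profiles and battery parameters, purely to show how the metric — the share of load served without grid imports — is computed.

```python
def simulate_bess(pv, load, capacity=10.0, max_rate=3.0, soc0=0.5):
    """Greedy rule-based BESS dispatch over hourly kWh profiles.

    pv, load : iterables of hourly PV generation and household demand (kWh).
    capacity : usable battery capacity (kWh); max_rate : charge/discharge
    limit per hour (kWh); soc0 : initial state of charge as a fraction.
    Returns the self-sufficiency rate: 1 - grid_imports / total_load.
    """
    soc = soc0 * capacity
    grid_import = 0.0
    for p, l in zip(pv, load):
        surplus = p - l
        if surplus >= 0:
            # Store surplus PV, limited by rate and remaining headroom.
            soc += min(surplus, max_rate, capacity - soc)
        else:
            # Cover the deficit from the battery; import the remainder.
            discharge = min(-surplus, max_rate, soc)
            soc -= discharge
            grid_import += (-surplus) - discharge
    return 1.0 - grid_import / sum(load)


# Hypothetical one-day toy profiles (kWh per hour).
pv = [0, 0, 1, 3, 5, 5, 3, 1, 0, 0]
load = [1, 1, 2, 2, 2, 2, 2, 3, 3, 2]
print(simulate_bess(pv, load, capacity=4.0))  # → 0.8
```

With a 4 kWh battery the baseline serves 80% of demand locally; an RL scheduler such as TRPO would replace the greedy rule with a learned policy evaluated on the same metric.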
Pages: 2203-2221
Page count: 19
Related papers
50 records total
  • [31] Reinforcement learning for fluctuation reduction of wind power with energy storage
    Yang, Zhen
    Ma, Xiaoteng
    Xia, Li
    Zhao, Qianchuan
    Guan, Xiaohong
    RESULTS IN CONTROL AND OPTIMIZATION, 2021, 4
  • [32] Optimizing Federated Reinforcement Learning Algorithm for Data Management of Distributed Energy Storage Network
    Li, Yuan
    Li, Yuancheng
    RECENT ADVANCES IN ELECTRICAL & ELECTRONIC ENGINEERING, 2024,
  • [33] Design of structured control policy for shared energy storage in residential community: A stochastic optimization approach
    Walker, Awnalisa
    Kwon, Soongeol
    APPLIED ENERGY, 2021, 298 (298)
  • [34] An innovative heterogeneous transfer learning framework to enhance the scalability of deep reinforcement learning controllers in buildings with integrated energy systems
    Coraci, Davide
    Brandi, Silvio
    Hong, Tianzhen
    Capozzoli, Alfonso
    BUILDING SIMULATION, 2024, 17 (05) : 739 - 770
  • [35] A bi-level reinforcement learning model for optimal scheduling and planning of battery energy storage considering uncertainty in the energy-sharing community
    Kang, Hyuna
    Jung, Seunghoon
    Jeoung, Jaewon
    Hong, Juwon
    Hong, Taehoon
    SUSTAINABLE CITIES AND SOCIETY, 2023, 94
  • [36] Optimization algorithms for energy storage integrated microgrid performance enhancement
    Roslan, M. F.
    Hannan, M. A.
    Ker, P. J.
    Muttaqi, K. M.
    Mahlia, T. M. I.
    JOURNAL OF ENERGY STORAGE, 2021, 43
  • [37] Definition and Application of Innovative Control Logics for Residential Energy Optimization
    Ippolito, M. G.
    Zizzo, G.
    Piccolo, A.
    Siano, P.
    2014 INTERNATIONAL SYMPOSIUM ON POWER ELECTRONICS, ELECTRICAL DRIVES, AUTOMATION AND MOTION (SPEEDAM), 2014, : 1272 - 1277
  • [38] Transactive Energy Trading of Residential Prosumers Using Battery Energy Storage Systems
    Nizami, M. S. H.
    Hossain, M. J.
    Amin, B. M. Ruhul
    Kashif, Muhammad
    Fernandez, Edstan
    Mahmud, Khizir
    2019 IEEE MILAN POWERTECH, 2019,
  • [39] Hybrid Energy Storage System Optimization With Battery Charging and Swapping Coordination
    Chen, Xinjiang
    Yang, Yu
    Song, Jie
    Wang, Jianxiao
    He, Guannan
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (03) : 4094 - 4105
  • [40] Supervised Optimization Framework for Charging and Discharging Controls of Battery Energy Storage
    Lee, Jaehwan
    Kwon, Soongeol
    IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (06) : 5610 - 5621