Optimized Energy Dispatch for Microgrids With Distributed Reinforcement Learning

Cited by: 0
Authors
Wang, Yusen [1 ]
Xiao, Ming [1 ]
You, Yang [1 ]
Poor, H. Vincent [2 ]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[2] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Keywords
Costs; Generators; Energy management; Power generation; Microgrids; Uncertainty; Batteries; Reinforcement learning; distributed optimization; energy dispatch problem; stochastic ADMM;
DOI
10.1109/TSG.2023.3331467
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline codes
0808; 0809;
Abstract
The increasing integration of renewable energy resources (RES) introduces uncertainties into modern power systems and makes the dynamic energy dispatch (DED) problem challenging. These uncertainties necessitate dynamic grid control, which must be addressed for optimized DED. Moreover, since energy usage and power generation are distributed, multiple parties can be involved in the DED problem; thus, DED should be optimized in a distributed way for both efficiency and privacy. With the development of the Internet of Things (IoT) and machine learning, diverse data can be gathered and analyzed to achieve intelligent energy management, and the dynamics of power grids must be considered for optimality. To this end, we investigate how reinforcement learning can be used to solve the DED problem in a dynamic microgrid (MG) environment. The objective is to determine the optimal power output of each fossil-fuel generator at each time slot so as to minimize the cumulative generation cost over a given time period. To achieve this goal, we first model the MG with the practical impact of batteries, photovoltaic (PV) panels, and load banks (external grids). We then formulate the optimization problem of minimizing the total generation cost from fossil fuels. To solve this problem, we propose a distributed reinforcement learning algorithm that reduces communication costs and improves data privacy. In the proposed scheme, each generator is treated as an agent that shares a global state but observes only its own local loss, and the agents work jointly to minimize the global cost. Theoretical analysis is provided to prove the convergence of the proposed algorithms, which are also tested on real-world datasets. Results show that the policy learned by the proposed algorithms can balance production and consumption in the MG for both fully and partially observable MG environments while simultaneously reducing the total generation cost from fossil fuels.
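The dispatch structure described in the abstract — agents that share a global signal but each hold only a local cost — can be illustrated with a minimal sketch. This is not the paper's stochastic-ADMM-based reinforcement learning algorithm; it is a plain distributed gradient scheme on a penalized dispatch objective, and all cost coefficients, the demand value, and the penalty weight below are invented for illustration.

```python
import numpy as np

# Hypothetical quadratic fuel-cost coefficients for three generators,
# cost_i(p) = a_i * p^2 + b_i * p (not taken from the paper).
A = np.array([0.10, 0.05, 0.08])
B = np.array([2.0, 3.0, 2.5])
DEMAND = 30.0   # net load the generators must jointly cover (illustrative)
RHO = 10.0      # weight of the shared power-balance penalty
LR = 0.05       # gradient step size

p = np.zeros(3)  # each agent's generation set-point
for _ in range(2000):
    imbalance = p.sum() - DEMAND  # shared global signal, like the global state
    # Each agent updates using only its own local cost gradient
    # plus the common imbalance term; no agent sees the others' costs.
    grad = 2 * A * p + B + RHO * imbalance
    p -= LR * grad
    p = np.clip(p, 0.0, None)     # generation cannot be negative

total_cost = float(np.sum(A * p**2 + B * p))
print(p.round(2), round(float(p.sum()), 2), round(total_cost, 2))
```

Because the balance constraint enters only as a soft penalty, the total generation settles slightly below the demand; a larger `RHO` (or a dual/ADMM update, as in the paper) tightens the balance at the cost of a smaller stable step size.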
Pages: 2946 - 2956
Page count: 11