Graph reinforcement learning for real-time optimal dispatch of active distribution network

Cited by: 0
Authors
Chen J.-B. [1 ]
Yu T. [1 ]
Pan Z.-N. [1 ]
Affiliations
[1] School of Electric Power Engineering, South China University of Technology
[2] Guangzhou Provincial Key Laboratory of Intelligent Measurement and Advanced Metering of Power Grid
Source
Kongzhi Lilun Yu Yingyong/Control Theory and Applications | 2024 / Vol. 41 / No. 06
Keywords
active distribution network; graph neural network; graph reinforcement learning; graph representation; real-time optimal dispatch
DOI
10.7641/CTA.2023.30091
Abstract
The renewable energy systems, energy storage systems, and other energy resources of an active distribution network can effectively improve the flexibility and reliability of operation. At the same time, renewable generation and load introduce uncertainty into the distribution network, so real-time optimal dispatch suffers from high dimensionality and an accurate model of the active distribution network is difficult to build. To address this problem, a graph reinforcement learning method combining graph neural networks with reinforcement learning is proposed, which avoids the need for accurate modeling of the complex system. First, real-time optimal dispatch is formulated as a Markov decision process, i.e., a dynamic sequential decision problem. Second, a graph representation based on the physical connections of the network is proposed to capture the implicit correlations among state variables. A graph reinforcement learning algorithm is then proposed to learn the optimal policy mapping the system state graph to decision outputs. Finally, the method is extended to distributed graph reinforcement learning. Simulations show that graph reinforcement learning achieves better results in optimality and efficiency. © 2024 South China University of Technology. All rights reserved.
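The core idea in the abstract — representing the network state as a graph whose nodes are buses and whose edges follow the physical line connections, then mapping that state graph to dispatch decisions with a graph neural network policy — can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the authors' implementation; the layer sizes, the 4-bus topology, and the per-bus features are assumptions for the example only.

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

class GraphPolicy:
    """One message-passing layer plus a per-node linear head that outputs a
    continuous dispatch action (e.g. P/Q set-points) for every bus.
    Illustrative only; the paper's actual architecture is not specified here."""
    def __init__(self, n_features, n_hidden, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, n_actions))

    def act(self, adj, x):
        a_hat = normalized_adjacency(adj)
        h = np.tanh(a_hat @ x @ self.w1)  # aggregate neighboring bus states
        return a_hat @ h @ self.w2        # per-bus dispatch set-points

# Toy 4-bus radial feeder with lines 0-1, 1-2, 1-3 (assumed topology).
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (1, 3)]:
    adj[i, j] = adj[j, i] = 1.0

# Per-bus state features (e.g. active power, reactive power, voltage).
x = np.random.default_rng(1).normal(size=(4, 3))
policy = GraphPolicy(n_features=3, n_hidden=8, n_actions=2)
actions = policy.act(adj, x)
print(actions.shape)  # one 2-dimensional action per bus
```

In a reinforcement-learning loop, the weights `w1` and `w2` would be trained from the dispatch reward (e.g. operating cost and constraint violation penalties) rather than fixed at random; only the graph-state-to-action mapping is sketched here.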
Pages: 999-1008
Page count: 9