A scalable graph reinforcement learning algorithm based stochastic dynamic dispatch of power system under high penetration of renewable energy

Cited: 20
Authors
Chen, Junbin
Yu, Tao
Pan, Zhenning [1 ]
Zhang, Mengyue
Deng, Bairong
Affiliations
[1] South China Univ Technol, Coll Elect Power, Guangzhou 510640, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Renewable energy source; Graph reinforcement learning; Dynamic economic dispatch; Graph-based representation; Scalability; ECONOMIC-DISPATCH; STRATEGY; NETWORK; FLOW;
DOI
10.1016/j.ijepes.2023.109212
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Due to the increasing penetration of renewable energy sources, power systems face significant uncertainty. Fully accounting for this uncertainty in dynamic economic dispatch (DED) has become a crucial problem for the safe and economic operation of power systems. Reinforcement learning (RL) based approaches can provide a dispatch policy that responds to uncertainty. However, current RL methods use traditional Euclidean data representations, which greatly reduce the scalability and computational efficiency of economic dispatch algorithms. To address this obstacle, this paper develops a novel graph reinforcement learning (GRL) method for DED of power systems. Firstly, DED is formulated as a Markov decision process, i.e., a dynamic sequential decision problem. Secondly, a novel graph-based representation of the system state is proposed. Representing dispatch operation data, which has non-Euclidean characteristics, as graph data can effectively capture the implicit correlation between the uncertainty and the system topology. Thirdly, a GRL algorithm is proposed to learn the optimal policy, which maps the graph-represented system state to the DED decision. Compared with traditional deep reinforcement learning (DRL), the proposed GRL has stronger generalization ability and scalability and can achieve higher-quality solutions in online operation. Case studies illustrate that the optimality of the proposed method is 98.04%, which is 15.13% higher than that of existing learning methods. The algorithm is scalable and improves sample efficiency.
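The abstract's second step, encoding the system state as graph data aligned with the network topology, can be illustrated with a single graph-convolution propagation step. The sketch below uses a standard GCN rule (symmetrically normalized adjacency with self-loops) over a hypothetical 4-bus system; the bus features, layer width, and topology are illustrative assumptions, not the paper's actual test system or architecture.

```python
import numpy as np

# Hypothetical 4-bus power network; edges follow the grid topology.
edges = [(0, 1), (1, 2), (1, 3)]
n_bus, n_feat = 4, 3  # per-bus features, e.g. load, renewable output, price (assumed)

# Adjacency with self-loops, symmetrically normalized:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.zeros((n_bus, n_bus))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(n_bus)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(n_bus, n_feat))   # graph-represented system state
W = rng.normal(size=(n_feat, 8))       # learnable layer weights

# One graph-convolution layer: each bus aggregates its neighbours' features,
# so uncertainty correlated with the topology enters the per-bus embedding.
H = np.maximum(A_hat @ X @ W, 0.0)     # ReLU activation
print(H.shape)  # (4, 8): per-bus embeddings fed to the dispatch policy head
```

In a full GRL pipeline, embeddings like `H` would be pooled or read out per generator bus and passed to an actor network that outputs the DED decision; that part is omitted here.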
Pages: 10
References
39 items
[1]   A predictive and adaptive control strategy to optimize the management of integrated energy systems in buildings [J].
Brandi, Silvio ;
Gallo, Antonio ;
Capozzoli, Alfonso .
ENERGY REPORTS, 2022, 8 :1550-1567
[2]   Fault Location in Power Distribution Systems via Deep Graph Convolutional Networks [J].
Chen, Kunjin ;
Hu, Jun ;
Zhang, Yu ;
Yu, Zhanqing ;
He, Jinliang .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (01) :119-131
[4]   Deep Reinforcement Learning for Scenario-Based Robust Economic Dispatch Strategy in Internet of Energy [J].
Fang, Dawei ;
Guan, Xin ;
Hu, Benran ;
Peng, Yu ;
Chen, Min ;
Hwang, Kai .
IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (12) :9654-9663
[5]   Linear/quadratic programming-based optimal power flow using linear power flow and absolute loss approximations [J].
Fortenbacher, P. ;
Demiray, T. .
INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2019, 107 :680-689
[6]   An Online Optimal Dispatch Schedule for CCHP Microgrids Based on Model Predictive Control [J].
Gu, Wei ;
Wang, Zhihe ;
Wu, Zhi ;
Luo, Zhao ;
Tang, Yiyuan ;
Wang, Jun .
IEEE TRANSACTIONS ON SMART GRID, 2017, 8 (05) :2332-2342
[7]   A GAN-Based Fully Model-Free Learning Method for Short-Term Scheduling of Large Power System [J].
Guan, Jinyu ;
Tang, Hao ;
Wang, Jiye ;
Yao, Jianguo ;
Wang, Ke ;
Mao, Wenbo .
IEEE TRANSACTIONS ON POWER SYSTEMS, 2022, 37 (04) :2655-2665
[8]  
Haarnoja T, 2019, Arxiv, DOI arXiv:1812.05905
[9]   An autonomous control technology based on deep reinforcement learning for optimal active power dispatch [J].
Han, Xiaoyun ;
Mu, Chaoxu ;
Yan, Jun ;
Niu, Zeyuan .
INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2023, 145
[10]   Deep Reinforcement Learning for Autonomous Driving: A Survey [J].
Kiran, B. Ravi ;
Sobh, Ibrahim ;
Talpaert, Victor ;
Mannion, Patrick ;
Al Sallab, Ahmad A. ;
Yogamani, Senthil ;
Perez, Patrick .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (06) :4909-4926