Efficient and Stable Learning for Distribution Network Operation: A Model-Based Reinforcement Learning Approach

Cited: 0
Authors
Yan, Dong [1 ]
Shi, Zhan [1 ]
Wang, Xinying [1 ]
Gao, Yiying [1 ]
Pu, Tianjiao [1 ]
Wang, Jiye [2 ]
Affiliations
[1] China Elect Power Res Inst, Beijing 100192, Peoples R China
[2] State Grid Digital Technol Holding Co LTD, Beijing 100073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Costs; Biological system modeling; Power system stability; Training; Load modeling; Distribution networks; Adaptation models; Real-time systems; Feature extraction; economic operation; reinforcement learning; reward shaping; transition model;
DOI
10.17775/CSEEJPES.2023.09100
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Codes
0807; 0820;
Abstract
This paper discusses the application of deep reinforcement learning (DRL) to the economic operation of power distribution networks, a complex system involving numerous flexible resources. Although these resources improve control flexibility, traditional prediction-plus-optimization methods struggle to adapt to rapidly shifting demands. Modern artificial intelligence (AI) methods, particularly DRL, promise faster decision-making but face challenges including inefficient training and difficulty in real-world deployment. This study introduces a reward evaluation system to assess the effectiveness of various strategies and proposes an enhanced algorithm based on the model-based DRL approach. By incorporating a state transition model, the proposed algorithm augments training data and enhances dynamic deduction, improving training efficiency. Its effectiveness is demonstrated across various operational scenarios, showing notable improvements in decision rationality and transfer generalization.
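The abstract's core idea, using a learned state transition model to generate augmented transitions alongside real experience, follows the general model-based RL pattern. The sketch below illustrates that pattern with a minimal tabular Dyna-Q loop on a toy chain environment; the environment, sizes, and hyperparameters are all illustrative assumptions, not the paper's actual deep RL algorithm or distribution-network model.

```python
import random

# Illustrative model-based RL sketch (Dyna-Q) on a toy 1-D chain.
# NOTE: this is a generic textbook pattern, not the paper's method;
# the paper uses a learned transition model within a deep RL agent.

N_STATES, ACTIONS = 5, (0, 1)  # actions: 0 = move left, 1 = move right

def step(s, a):
    """Toy deterministic environment: reward 1 for reaching the last state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def dyna_q(episodes=30, planning_steps=10, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}  # learned transition model: (s, a) -> (s', r)
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            # Direct RL update from real experience.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            model[(s, a)] = (s2, r)  # record the observed transition
            # Planning: replay model-generated ("augmented") transitions.
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q()
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
```

The planning loop is where the model pays off: each real transition is reused many times, so value estimates propagate with far less real interaction, which is the training-efficiency gain the abstract attributes to its transition model.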
Pages: 1080-1092
Page count: 13
References
27 references in total
[1]   Online Optimal Power Scheduling of a Microgrid via Imitation Learning [J].
Gao, Shuhua ;
Xiang, Cheng ;
Yu, Ming ;
Tan, Kuan Tak ;
Lee, Tong Heng .
IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (02) :861-876
[2]  
Haarnoja T, 2018, PR MACH LEARN RES, V80
[3]  
Hafner D., 2020, ICLR
[4]   Optimal Operation of Power Systems With Energy Storage Under Uncertainty: A Scenario-Based Method With Strategic Sampling [J].
Hu, Ren ;
Li, Qifeng .
IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (02) :1249-1260
[5]  
Huang RK, 2022, IEEE T POWER SYST, V37, P4168, DOI [10.1109/TPWRS.2022.3155117, 10.1109/IECON49645.2022.9968533]
[6]  
Janner M, 2019, ADV NEUR IN, V32
[7]  
Ji Ying, 2022, Control and Decision (控制与决策), V37, P1675
[8]   Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning [J].
Ji, Ying ;
Wang, Jianhui ;
Xu, Jiacan ;
Fang, Xiaoke ;
Zhang, Huaguang .
ENERGIES, 2019, 12 (12)
[9]   Learning to Operate Distribution Networks With Safe Deep Reinforcement Learning [J].
Li, Hepeng ;
He, Haibo .
IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (03) :1860-1872
[10]   Distributed Tracking-ADMM Approach for Chance-Constrained Energy Management with Stochastic Wind Power [J].
Li, Wenjuan ;
Liu, Yungang ;
Liang, Huijun ;
Man, Yongchao ;
Li, Fengzhong .
CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2025, 11 (03) :1154-1164