Deep Reinforcement Learning-Based Dynamic Droop Control Strategy for Real-Time Optimal Operation and Frequency Regulation

Cited by: 3
Authors
Lee, Woon-Gyu [1 ]
Kim, Hak-Man [2 ,3 ]
Affiliations
[1] Incheon Natl Univ, Dept Elect Engn, Incheon 406772, South Korea
[2] Incheon Natl Univ, Dept Elect Engn, Incheon 406772, South Korea
[3] Incheon Natl Univ, Res Inst Northeast Asian Super Grid, Incheon 406772, South Korea
Keywords
Costs; Real-time systems; Frequency control; Training; Heuristic algorithms; Voltage control; Reactive power; Microgrids; dynamic droop control; real-time optimal operation; frequency regulation; deep reinforcement learning; twin delayed deep deterministic policy gradient; power economic dispatch; generation; system; optimization; load
DOI
10.1109/TSTE.2024.3454298
CLC classification
X [Environmental science; safety science]
Discipline codes
08; 0830
Abstract
Optimal operation of an islanded AC microgrid is achieved through proper power sharing among generators. Conventional distributed cost optimization strategies rely on a communication system to drive the generators' incremental costs to consensus. However, these methods depend on the distributed communication network and do not account for the frequency deviations caused by real-time load variability. This paper therefore proposes a deep reinforcement learning (DRL)-based dynamic droop control strategy. A twin delayed deep deterministic policy gradient (TD3) agent interacts with the environment to learn the droop gains that reduce both generation cost and frequency deviation. As the demand load changes, the trained agent uses only local information to transmit dynamic droop gains to the primary controller, which simplifies the control structure by omitting the secondary layer otherwise required for optimal operation and power quality. The strategy combines a centralized DRL training process with distributed execution, enabling real-time distributed optimal operation. Comparison with a conventional distributed strategy confirms the superior control performance of the proposed approach. Finally, its feasibility is verified by experiments on an AC microgrid testbed.
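For intuition, the droop mechanism the agent tunes can be sketched as follows: in steady state each generator i obeys f = f_nom - m_i * P_i, so the droop gains m_i jointly determine both the system frequency deviation and the cost of the resulting power sharing. The sketch below is illustrative only; the gain values and the quadratic fuel-cost coefficients are assumptions, not parameters from the paper.

```python
def droop_dispatch(m, f_nom, p_load):
    """Steady-state power sharing under proportional droop control.

    Each generator i obeys f = f_nom - m[i] * p[i]; at equilibrium all
    units settle at a common frequency f, so the outputs (f_nom - f)/m[i]
    must sum to the total load. Solving gives the frequency deviation
    delta_f = p_load / sum(1/m[i]).
    """
    delta_f = p_load / sum(1.0 / mi for mi in m)  # common frequency drop
    p = [delta_f / mi for mi in m]                # per-generator output
    return f_nom - delta_f, p

def generation_cost(p, coeffs):
    """Quadratic fuel-cost model: a*P^2 + b*P + c for each generator."""
    return sum(a * pi ** 2 + b * pi + c for pi, (a, b, c) in zip(p, coeffs))

# Two generators with different droop gains sharing a 300 kW load.
f, p = droop_dispatch(m=[0.01, 0.02], f_nom=60.0, p_load=300.0)
cost = generation_cost(p, [(0.001, 2.0, 10.0), (0.002, 1.5, 12.0)])
# f → 58.0 Hz, p → [200.0, 100.0] kW
```

A dynamic droop strategy in the paper's sense adjusts the gains m as the load changes, trading off the frequency deviation delta_f against the dispatch cost; the TD3 agent learns that trade-off from interaction rather than from an explicit optimization model.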
Pages: 284-294 (11 pages)
References (29 in total)
[1] A. Ajagekar and F. You, "Deep Reinforcement Learning Based Unit Commitment Scheduling under Load and Wind Power Uncertainty," IEEE Transactions on Sustainable Energy, vol. 14, no. 2, pp. 803-812, 2023.
[2] G. Binetti, A. Davoudi, F. L. Lewis, D. Naso, and B. Turchiano, "Distributed Consensus-Based Economic Dispatch With Transmission Losses," IEEE Transactions on Power Systems, vol. 29, no. 4, pp. 1711-1720, 2014.
[3] C. L. Chiang, "Improved Genetic Algorithm for Power Economic Dispatch of Units with Valve-Point Effects and Multiple Fuels," IEEE Transactions on Power Systems, vol. 20, no. 4, pp. 1690-1699, 2005.
[4] K. Doya, "Reinforcement Learning in Continuous Time and Space," Neural Computation, vol. 12, no. 1, pp. 219-245, 2000.
[5] Y. Du and D. Wu, "Deep Reinforcement Learning From Demonstrations to Assist Service Restoration in Islanded Microgrids," IEEE Transactions on Sustainable Energy, vol. 13, no. 2, pp. 1062-1072, 2022.
[6] Y. S. Foo Eddy, H. B. Gooi, and S. X. Chen, "Multi-Agent System for Distributed Management of Microgrids," IEEE Transactions on Power Systems, vol. 30, no. 1, pp. 24-34, 2015.
[7] O. E. Egbomwan, S. Liu, and H. Chaoui, "Twin Delayed Deep Deterministic Policy Gradient (TD3) Based Virtual Inertia Control for Inverter-Interfacing DGs in Microgrids," IEEE Systems Journal, vol. 17, no. 2, pp. 2122-2132, 2023.
[8] W. T. Elsayed, Y. G. Hegazy, M. S. El-bages, and F. M. Bendary, "Improved Random Drift Particle Swarm Optimization With Self-Adaptive Mechanism for Solving the Power Economic Dispatch Problem," IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1017-1026, 2017.
[9] S. Fujimoto, H. van Hoof, and D. Meger, "Addressing Function Approximation Error in Actor-Critic Methods," Proceedings of Machine Learning Research, vol. 80, 2018.
[10] J. Hu, M. Z. Q. Chen, J. Cao, and J. M. Guerrero, "Coordinated Active Power Dispatch for a Microgrid via Distributed Lambda Iteration," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 7, no. 2, pp. 250-261, 2017.