Distributed Multiagent Reinforcement Learning With Action Networks for Dynamic Economic Dispatch

Cited by: 8
Authors
Hu, Chengfang [1 ]
Wen, Guanghui [2 ]
Wang, Shuai [3 ,4 ]
Fu, Junjie [2 ]
Yu, Wenwu [2 ]
Affiliations
[1] Southeast Univ, Sch Cyber Sci & Engn, Nanjing 211189, Peoples R China
[2] Southeast Univ, Sch Math, Dept Syst Sci, Nanjing 211189, Peoples R China
[3] Beihang Univ, Res Inst Frontier Sci, Beijing 100191, Peoples R China
[4] Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Power demand; Heuristic algorithms; Prediction algorithms; Couplings; Approximation algorithms; Power system stability; Convex functions; Distributed optimization; dynamic economic dispatch; multiagent reinforcement learning (MARL); smart grids
Keywords Plus
VISIBLE IMAGE FUSION; PERFORMANCE; INFORMATION; ALGORITHM; PROTEIN
DOI
10.1109/TNNLS.2023.3234049
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This article proposes a new class of distributed multiagent reinforcement learning (MARL) algorithms, suitable for problems with coupling constraints, to address the dynamic economic dispatch problem (DEDP) in smart grids. Specifically, the assumption, common in most existing results on the DEDP, that the cost functions are known and/or convex is removed. A distributed projection optimization algorithm is designed for the generation units to find feasible power outputs satisfying the coupling constraints. By approximating the state-action value function of each generation unit with a quadratic function, an approximate optimal solution of the original DEDP is obtained by solving a convex optimization problem. Each action network then uses a neural network (NN) to learn the relationship between the total power demand and the optimal power output of each generation unit, giving the algorithm the generalization ability to predict the optimal power-output distribution for an unseen total power demand. Furthermore, an improved experience replay mechanism is introduced into the action networks to improve the stability of the training process. Finally, the effectiveness and robustness of the proposed MARL algorithm are verified by simulations.
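As a rough illustration only (not the authors' implementation): once each unit's state-action value is approximated by a quadratic in its own power output, the dispatch step reduces to minimizing a sum of per-unit quadratics subject to the demand-balance coupling constraint and box limits. The sketch below solves that convex subproblem centrally by bisection on the Lagrange multiplier; the coefficients a, b and the capacity limits are hypothetical placeholders, and the paper instead uses a distributed projection scheme with learned value coefficients.

```python
import numpy as np

def dispatch(a, b, p_min, p_max, demand, tol=1e-9):
    """Minimize sum_i (a_i p_i^2 + b_i p_i) subject to sum_i p_i = demand
    and p_min_i <= p_i <= p_max_i, assuming a_i > 0.
    Stationarity gives p_i(lam) = clip((lam - b_i) / (2 a_i), p_min_i, p_max_i),
    and sum_i p_i(lam) is nondecreasing in lam, so bisection on lam works."""
    def p_of(lam):
        return np.clip((lam - b) / (2.0 * a), p_min, p_max)

    lo = (2.0 * a * p_min + b).min()  # below this, every unit sits at its lower limit
    hi = (2.0 * a * p_max + b).max()  # above this, every unit sits at its upper limit
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if p_of(lam).sum() < demand:
            lo = lam  # total output too small: raise the multiplier
        else:
            hi = lam
    return p_of(0.5 * (lo + hi))

# hypothetical three-unit system: quadratic coefficients and capacity limits
a = np.array([0.10, 0.08, 0.12])
b = np.array([2.0, 3.0, 2.5])
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([100.0, 120.0, 80.0])

p = dispatch(a, b, p_min, p_max, demand=200.0)
print(p, p.sum())  # per-unit outputs; their sum matches the demand within tolerance
```

The bisection here only shows the shape of the convex subproblem implied by the quadratic approximation; in the article's setting, the multiplier search is replaced by the distributed projection algorithm, and the mapping from total demand to per-unit outputs is learned by the action networks.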
Pages: 9553-9564
Number of pages: 12