A Cloud-Edge Collaboration Solution for Distribution Network Reconfiguration Using Multi-Agent Deep Reinforcement Learning

Cited by: 13
Authors
Gao, Hongjun [1 ]
Wang, Renjun [1 ]
He, Shuaijia [1 ]
Wang, Lingfeng [2 ]
Liu, Junyong
Chen, Zhe [3 ]
Affiliations
[1] Sichuan Univ, Coll Elect Engn, Chengdu 610065, Peoples R China
[2] Univ Wisconsin, Dept Elect Engn & Comp Sci, Milwaukee, WI 53211 USA
[3] Aalborg Univ, Dept Energy Technol, DK-9220 Aalborg, Denmark
Funding
National Natural Science Foundation of China
Keywords
Batch reinforcement learning; cloud-edge collaboration; distribution network reconfiguration; multi-agent deep reinforcement learning; safe reinforcement learning
DOI
10.1109/TPWRS.2023.3296463
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Network reconfiguration can maintain the optimal operation of a distribution network under an increasing penetration of distributed generation (DG). However, reconfiguration problems in large-scale distribution networks may not be solved quickly by traditional methods. In this context, a cloud-edge collaboration framework based on multi-agent deep reinforcement learning (MADRL) is proposed, in which the MADRL model is trained centrally in the cloud center and executed in a decentralized manner on edge servers, reducing both the training cost and the execution latency of MADRL. In addition, a discrete multi-agent soft actor-critic (MASAC) algorithm is introduced as the basic algorithm to address the non-stationary environment problem in MADRL. Online and offline safe learning are then combined for the practical distribution network reconfiguration task so that the neural networks of MADRL are updated under operational constraints. Specifically, a novel offline algorithm called multi-agent constraints penalized Q-learning (MACPQ) is proposed to reduce the cost of MADRL's trial-and-error process while allowing agents to pre-train their policies from a historical dataset subject to constraints. Meanwhile, a new online MADRL method called primal-dual MASAC is proposed to further improve agent performance by interacting directly with the physical distribution network under safe action exploration. Finally, the superiority of the proposed methods is verified on the IEEE 33-bus system and a practical 445-bus system.
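To make the safe-exploration idea behind primal-dual MASAC concrete, below is a minimal Python/PyTorch sketch of one primal-dual update for a single discrete soft actor-critic agent. It is a sketch under stated assumptions, not the authors' implementation: the names (policy, reward_critic, cost_critic, cost_limit) and the toy dimensions are illustrative. A Lagrange multiplier weights a constraint-cost critic inside the entropy-regularized policy loss, and the dual step raises the multiplier when expected cost exceeds the budget and lowers it otherwise.

import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions = 16, 4   # toy sizes for illustration (assumed)
alpha = 0.2                  # entropy temperature
cost_limit = 1.0             # per-step constraint budget (assumed)

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
reward_critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
cost_critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

log_lambda = torch.zeros(1, requires_grad=True)  # dual variable, log-space keeps it positive
pi_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
dual_opt = torch.optim.Adam([log_lambda], lr=3e-4)

def primal_dual_step(obs_batch: torch.Tensor) -> None:
    """One primal (policy) step and one dual (multiplier) step."""
    probs = F.softmax(policy(obs_batch), dim=-1)
    log_probs = torch.log(probs + 1e-8)
    q_r = reward_critic(obs_batch).detach()     # reward Q-values (critics fixed here)
    q_c = cost_critic(obs_batch).detach()       # constraint-cost Q-values
    lam = log_lambda.exp()

    # Primal step: maximize entropy-regularized reward minus lambda-weighted cost.
    policy_loss = (probs * (alpha * log_probs - q_r + lam.detach() * q_c)).sum(-1).mean()
    pi_opt.zero_grad(); policy_loss.backward(); pi_opt.step()

    # Dual step: increase lambda if expected cost exceeds the budget, decrease otherwise.
    expected_cost = (probs.detach() * q_c).sum(-1).mean()
    dual_loss = -(lam * (expected_cost - cost_limit)).mean()
    dual_opt.zero_grad(); dual_loss.backward(); dual_opt.step()

primal_dual_step(torch.randn(32, obs_dim))  # smoke test on random observations

In the multi-agent setting described in the paper, one such actor would run per edge server, with centralized critics trained in the cloud; the sketch shows only the single-agent constrained update for clarity.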
Pages: 3867-3879
Page count: 13