A multi-agent reinforcement learning method for distribution system restoration considering dynamic network reconfiguration

Cited by: 3
Authors
Si, Ruiqi [1 ]
Chen, Siyuan [1 ]
Zhang, Jun [1 ]
Xu, Jian [1 ]
Zhang, Luxi [2 ]
Affiliations
[1] Wuhan Univ, Sch Elect Engn & Automat, Wuhan 430072, Peoples R China
[2] Brandeis Univ, Waltham, MA 02454 USA
Funding
National Key Research and Development Program of China;
Keywords
Deep reinforcement learning; Multi-agent reinforcement learning; Distribution system restoration; Distribution network; Microgrid; UNBALANCED DISTRIBUTION-SYSTEMS; SERVICE RESTORATION; MANAGEMENT; MODEL;
DOI
10.1016/j.apenergy.2024.123625
CLC Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Codes
0807 ; 0820 ;
Abstract
Extreme weather, cascading failures, and other events have increased the probability of wide-area blackouts, highlighting the importance of restoring affected loads rapidly and efficiently. This paper proposes a multi-agent reinforcement learning method for distribution system restoration (DSR). First, considering that the topology of the distribution system may change during network reconfiguration, a dynamic agent network (DAN) architecture is designed to address the challenge of changing input dimensions in the neural network. Two encoders are created to capture observations of the environment and of other agents, respectively, and an attention mechanism aggregates an arbitrary-sized set of neighboring-agent features. Then, considering the operational constraints of DSR, an action mask mechanism is implemented to filter out invalid actions, ensuring the security of the learned strategy. Finally, the method is validated on an IEEE 123-node test system; the experimental results show that the proposed algorithm effectively assists agents in accomplishing collaborative DSR tasks.
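Below is a minimal sketch, in PyTorch, of the two mechanisms the abstract describes: an attention-based encoder that fuses an agent's own observation with an arbitrary-sized set of neighboring-agent features (the DAN idea), and an action mask that zeroes out the probability of invalid actions. This is not the authors' released code; the module names, dimensions, and mask convention here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAgentEncoder(nn.Module):
    """Encodes the agent's own observation and an arbitrary-sized set of
    neighbor features, then fuses them with scaled dot-product attention."""

    def __init__(self, obs_dim: int, neighbor_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.self_encoder = nn.Linear(obs_dim, hidden_dim)           # encoder 1: own observation
        self.neighbor_encoder = nn.Linear(neighbor_dim, hidden_dim)  # encoder 2: other agents
        self.scale = hidden_dim ** 0.5

    def forward(self, obs: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim); neighbors: (batch, n_neighbors, neighbor_dim),
        # where n_neighbors may differ between calls as the topology changes.
        query = self.self_encoder(obs).unsqueeze(1)                  # (batch, 1, hidden)
        keys = self.neighbor_encoder(neighbors)                      # (batch, n, hidden)
        attn = torch.softmax(query @ keys.transpose(1, 2) / self.scale, dim=-1)
        context = (attn @ keys).squeeze(1)                           # (batch, hidden)
        return torch.cat([query.squeeze(1), context], dim=-1)        # fused feature

def masked_policy(logits: torch.Tensor, valid_mask: torch.Tensor) -> torch.Tensor:
    """Action mask: set logits of invalid actions to -inf so their
    probability after the softmax is exactly zero."""
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    return F.softmax(masked, dim=-1)

# Usage: the same module aggregates 3 neighbors now, 5 after reconfiguration.
enc = DynamicAgentEncoder(obs_dim=10, neighbor_dim=8)
feat = enc(torch.randn(1, 10), torch.randn(1, 3, 8))   # any n_neighbors works
probs = masked_policy(torch.randn(1, 4), torch.tensor([[True, False, True, True]]))

Because the neighbor set enters only through the attention weights, the encoder's parameter count is independent of how many agents are currently connected, which is what lets a fixed network cope with a changing topology.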
Pages: 11