Multi-Agent Reinforcement Learning for Mobile Energy Resources Scheduling Amidst Typhoons

Cited by: 1
Authors
Zou, Yang [1 ]
Wang, Ziwei [1 ]
Huang, Jingsi [1 ]
Song, Jie [1 ]
Xu, Luo [2 ]
Affiliations
[1] Peking Univ, Coll Engn, Beijing 100000, Peoples R China
[2] Princeton Univ, Dept Civil & Environm Engn, Princeton, NJ 08544 USA
Funding
National Natural Science Foundation of China
Keywords
Tropical cyclones; Power systems; Power generation; Routing; Discharges (electric); Wind speed; Training; Power system resilience; multi-agent reinforcement learning; typhoon; power system recovery
DOI
10.1109/TIA.2024.3463608
CLC Classification Number
T [Industrial Technology]
Discipline Classification Code
08
Abstract
Typhoon events pose substantial threats to critical electrical power infrastructure and can result in human casualties and significant economic losses. Both traditional and renewable power generation systems can be negatively affected, leading to widespread power outages that compromise public safety. In response, we introduce a novel spatial-topological multi-agent reinforcement learning (ST-MARL) method to optimize post-typhoon power system recovery by leveraging mobile energy resources (MERs). Compared with existing MARL methods, the proposed ST-MARL method utilizes convolutional neural network (CNN) and graph convolutional network (GCN) models to extract spatial-topological information from environmental data such as typhoon meteorological conditions and power system states, which facilitates real-time decision-making for the routing and scheduling of MERs. Furthermore, the ST-MARL method realizes a two-stage decision-making process by allocating the MERs' rewards with an Alternating Current Optimal Power Flow (AC-OPF) model, which ensures the safety and feasibility of the decisions. Additionally, we employ the centralized training and distributed execution (CTDE) paradigm to address coordination challenges among MERs. These approaches collectively aim to enhance coordination among MERs, improve economic efficiency, and ensure the power supply of critical loads during post-typhoon power system recovery. Finally, in a case study of the Hong Kong power network, the results indicate that our ST-MARL method outperforms two mainstream existing MARL methods, achieving reward improvements of 5.13% and 6.77%, respectively.
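The abstract does not give implementation details, so the following is a minimal, hedged sketch of the kind of spatial-topological encoder it describes: a CNN branch over gridded typhoon meteorological fields, a one-layer GCN branch over the power-network graph, and a per-MER policy head of the sort used on the decentralized-execution side of a CTDE setup. PyTorch, all module names, tensor shapes, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (assumed architecture, not the paper's code):
# CNN over typhoon weather grids + simple GCN over the grid topology,
# fused into features for one mobile energy resource (MER) policy.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTopologicalEncoder(nn.Module):
    """Fuses CNN features (weather grid) with GCN features (network topology)."""

    def __init__(self, weather_channels: int, node_feat_dim: int, hidden: int = 64):
        super().__init__()
        # CNN branch: processes a C x H x W grid of meteorological fields
        # (e.g., wind speed, pressure) around the typhoon.
        self.cnn = nn.Sequential(
            nn.Conv2d(weather_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 32)
        )
        # GCN branch: one graph convolution H' = ReLU(A_hat H W), with A_hat the
        # symmetrically normalized adjacency including self-loops.
        self.gcn_weight = nn.Linear(node_feat_dim, hidden, bias=False)
        self.fuse = nn.Linear(32 + hidden, hidden)

    def forward(self, weather_grid, node_feats, adj):
        # weather_grid: (B, C, H, W); node_feats: (B, N, F); adj: (B, N, N)
        spatial = self.cnn(weather_grid)                                 # (B, 32)
        a_hat = adj + torch.eye(adj.size(-1), device=adj.device)        # self-loops
        deg_inv_sqrt = a_hat.sum(-1).clamp(min=1e-6).pow(-0.5)
        a_hat = deg_inv_sqrt.unsqueeze(-1) * a_hat * deg_inv_sqrt.unsqueeze(-2)
        topo = F.relu(a_hat @ self.gcn_weight(node_feats)).mean(dim=1)  # (B, hidden)
        return F.relu(self.fuse(torch.cat([spatial, topo], dim=-1)))    # (B, hidden)


class MERPolicy(nn.Module):
    """Decentralized actor for one MER over discrete routing/dispatch actions."""

    def __init__(self, encoder: SpatialTopologicalEncoder, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, weather_grid, node_feats, adj):
        logits = self.head(self.encoder(weather_grid, node_feats, adj))
        return torch.distributions.Categorical(logits=logits)


if __name__ == "__main__":
    enc = SpatialTopologicalEncoder(weather_channels=3, node_feat_dim=8)
    policy = MERPolicy(enc, n_actions=5)
    dist = policy(torch.randn(2, 3, 32, 32), torch.randn(2, 24, 8), torch.ones(2, 24, 24))
    print(dist.sample())  # one routing/dispatch action per batch element
```

In a CTDE arrangement, each MER would execute such a policy from its local observations at run time, while a centralized critic (not shown) would be trained on joint observations and on per-agent rewards allocated by the AC-OPF stage described in the abstract.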
Pages: 1683-1694
Number of pages: 12