Multi-agent reinforcement learning for multi-area power exchange

Cited by: 1
Authors
Xi, Jiachen [1 ]
Garcia, Alfredo [1 ]
Chen, Yu Christine [2 ]
Khatami, Roohallah [3 ]
Affiliations
[1] Texas A&M Univ, Dept Ind & Syst Engn, College Stn, TX 77840 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[3] Southern Illinois Univ, Sch Elect Comp & Biomed Engn, Carbondale, IL USA
Keywords
Power system; Reinforcement learning; Uncertainty; Decentralized algorithm; Actor-critic algorithm; MODEL; LOAD;
DOI
10.1016/j.epsr.2024.110711
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809
Abstract
Increasing renewable integration leads to faster and more frequent fluctuations in the power system net load (load minus non-dispatchable renewable generation), along with greater uncertainty in its forecast. These effects exacerbate the computational burden of centralized power system optimization (or market clearing) that accounts for net-load variability and uncertainty. A further layer of complexity lies in estimating accurate models of spatio-temporal net-load uncertainty. Taken together, these challenges make decentralized approaches that learn to optimize (or to clear a market) using only local information compelling to explore. This paper develops a decentralized multi-agent reinforcement learning (MARL) approach that seeks to learn optimal policies for operating interconnected power systems under uncertainty. Compared with a centralized stochastic programming approach, the proposed method incurs lower computational and communication burden and offers improved privacy preservation. Numerical simulations on a three-area test system show that the average net operation costs are within 5% of those obtained with a benchmark centralized model predictive control solution.
Pages: 9
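As an illustration of the decentralized actor-critic idea summarized in the abstract, the sketch below shows how each area agent could update a local policy and a local critic from its own observations only. This is not the authors' implementation: the three-area setting, the quadratic local cost, the linear-Gaussian policies, and all numerical constants are assumptions chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

N_AREAS = 3        # assumed three-area system, mirroring the paper's test case
STATE_DIM = 2      # assumed local state: [local net-load forecast, local generation]
ALPHA_PI = 1e-3    # actor step size (assumed)
ALPHA_V = 1e-2     # critic step size (assumed)
GAMMA = 0.95       # discount factor (assumed)

class AreaAgent:
    """One area agent: linear-Gaussian policy over its dispatch adjustment and a
    linear critic over its local state; no global information is exchanged."""
    def __init__(self):
        self.theta = np.zeros(STATE_DIM)   # policy-mean weights
        self.sigma = 0.1                   # fixed exploration noise (assumed)
        self.w = np.zeros(STATE_DIM)       # critic weights

    def act(self, s):
        mean = self.theta @ s
        return rng.normal(mean, self.sigma), mean

    def update(self, s, a, mean, cost, s_next):
        # Reward is the negative local operating cost; the TD error uses the local critic only.
        td = -cost + GAMMA * (self.w @ s_next) - (self.w @ s)
        self.w += ALPHA_V * td * s                          # semi-gradient critic update
        grad_log_pi = (a - mean) / self.sigma**2 * s        # Gaussian score function
        self.theta += ALPHA_PI * td * grad_log_pi           # policy-gradient actor update

def local_cost(gen, net_load):
    # Toy quadratic cost: generation cost plus power-imbalance penalty (assumed).
    return 0.1 * gen**2 + (gen - net_load) ** 2

agents = [AreaAgent() for _ in range(N_AREAS)]
gen = np.zeros(N_AREAS)                                     # current dispatch per area

for step in range(5000):
    net_load = 1.0 + 0.2 * rng.standard_normal(N_AREAS)     # uncertain local net loads
    states = [np.array([net_load[i], gen[i]]) for i in range(N_AREAS)]
    actions = [agent.act(s) for agent, s in zip(agents, states)]
    gen_next = np.clip(gen + np.array([a for a, _ in actions]), 0.0, 2.0)
    for i, agent in enumerate(agents):
        cost = local_cost(gen_next[i], net_load[i])
        s_next = np.array([net_load[i], gen_next[i]])
        agent.update(states[i], actions[i][0], actions[i][1], cost, s_next)
    gen = gen_next

print("learned policy-mean weights per area:", [agent.theta.round(3) for agent in agents])

In the paper's setting, each area would additionally coordinate tie-line power exchanges with its neighbors; the sketch omits that coupling so that the purely local actor-critic update stands out.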