Multi-agent reinforcement learning for multi-area power exchange

Cited by: 1
Authors
Xi, Jiachen [1 ]
Garcia, Alfredo [1 ]
Chen, Yu Christine [2 ]
Khatami, Roohallah [3 ]
Affiliations
[1] Texas A&M Univ, Dept Ind & Syst Engn, College Stn, TX 77840 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[3] Southern Illinois Univ, Sch Elect Comp & Biomed Engn, Carbondale, IL USA
Keywords
Power system; Reinforcement learning; Uncertainty; Decentralized algorithm; Actor-critic algorithm; Model; Load
DOI
10.1016/j.epsr.2024.110711
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Increasing renewable integration leads to faster and more frequent fluctuations in the power system net load (load minus non-dispatchable renewable generation), along with greater uncertainty in its forecast. These fluctuations can exacerbate the computational burden of centralized power system optimization (or market clearing) that accounts for variability and uncertainty in net load. Another layer of complexity pertains to estimating accurate models of spatio-temporal net-load uncertainty. Taken together, decentralized approaches that learn to optimize (or to clear a market) using only local information are compelling to explore. This paper develops a decentralized multi-agent reinforcement learning (MARL) approach that seeks to learn optimal policies for operating interconnected power systems under uncertainty. The proposed method incurs less computational and communication burden than a centralized stochastic programming approach and offers improved privacy preservation. Numerical simulations on a three-area test system yield promising results, with average net operation costs within 5% of those obtained from a benchmark centralized model predictive control solution.
Pages: 9
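
As background for the decentralized actor-critic approach summarized in the abstract, the sketch below shows one way a per-area agent could be structured: each area keeps a local Gaussian policy (actor) and a local value estimate (critic), and updates both from purely local observations and costs. Everything here is an illustrative assumption — the class name AreaAgent, the two-dimensional observation of local net load and tie-line flow, and the quadratic stand-in cost — and none of it reproduces the paper's actual formulation.

# Illustrative sketch only (not the authors' implementation): a per-area
# actor-critic agent with a linear Gaussian policy and a linear critic,
# trained from local observations and local costs.
import numpy as np

class AreaAgent:
    """One balancing area: linear Gaussian actor plus linear TD(0) critic."""

    def __init__(self, obs_dim, act_dim, lr_actor=1e-3, lr_critic=1e-2,
                 sigma=0.1, gamma=0.99, seed=0):
        self.rng = np.random.default_rng(seed)
        self.theta = 0.01 * self.rng.standard_normal((act_dim, obs_dim))  # policy-mean weights
        self.w = np.zeros(obs_dim)                                        # value-function weights
        self.sigma, self.gamma = sigma, gamma
        self.lr_a, self.lr_c = lr_actor, lr_critic

    def act(self, obs):
        # Sample a local action (e.g., a dispatch/interchange adjustment)
        # from a Gaussian policy centered at theta @ obs.
        mean = self.theta @ obs
        return mean + self.sigma * self.rng.standard_normal(mean.shape)

    def update(self, obs, action, reward, next_obs):
        # One-step actor-critic update using only this area's data.
        td_error = reward + self.gamma * (self.w @ next_obs) - self.w @ obs
        self.w += self.lr_c * td_error * obs                      # critic: TD(0)
        grad_logpi = np.outer((action - self.theta @ obs) / self.sigma**2, obs)
        self.theta += self.lr_a * td_error * grad_logpi           # actor: policy gradient

# Toy three-area rollout. Each agent observes a hypothetical
# [local net load, tie-line flow] pair; the reward is the negative of a
# quadratic stand-in for the local operating cost.
agents = [AreaAgent(obs_dim=2, act_dim=1, seed=i) for i in range(3)]
obs = [np.array([1.0, 0.0]) for _ in agents]
for step in range(1000):
    for i, agent in enumerate(agents):
        a = agent.act(obs[i])
        nxt = obs[i] + 0.1 * np.array([np.sin(0.01 * step), a[0]])
        reward = -float(a[0] ** 2 + nxt[0] ** 2)  # stand-in cost, illustrative
        agent.update(obs[i], a, reward, nxt)
        obs[i] = nxt

In the paper's multi-area setting, neighboring areas would additionally coordinate tie-line exchange schedules; a full implementation would add that neighbor-to-neighbor message exchange, which this sketch omits.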
相关论文
共 50 条
[31]   Multi-RAT Access based on Multi-Agent Reinforcement Learning [J].
Yan, Mu ;
Feng, Gang ;
Qin, Shuang .
GLOBECOM 2017 - 2017 IEEE GLOBAL COMMUNICATIONS CONFERENCE, 2017,
[32]   Multi-Agent Reinforcement Learning with Multi-Step Generative Models [J].
Krupnik, Orr ;
Mordatch, Igor ;
Tamar, Aviv .
CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
[33]   Cooperative Reinforcement Learning Algorithm to Distributed Power System Based on Multi-Agent [J].
Gao, La-mei ;
Zeng, Jun ;
Wu, Jie ;
Li, Min .
2009 3RD INTERNATIONAL CONFERENCE ON POWER ELECTRONICS SYSTEMS AND APPLICATIONS: ELECTRIC VEHICLE AND GREEN ENERGY, 2009, :53-53
[34]   Toward Smarter Power Transformers in Microgrids: A Multi-agent Reinforcement Learning for Diagnostic [J].
Laayati, Oussama ;
El-Bazi, Nabil ;
El Hadraoui, Hicham ;
Ennawaoui, Chouaib ;
Chebak, Ahmed ;
Bouzi, Mostafa .
DIGITAL TECHNOLOGIES AND APPLICATIONS, ICDTA 2023, VOL 2, 2023, 669 :640-649
[35]   Cooperative Learning of Multi-Agent Systems Via Reinforcement Learning [J].
Wang, Xin ;
Zhao, Chen ;
Huang, Tingwen ;
Chakrabarti, Prasun ;
Kurths, Juergen .
IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2023, 9 :13-23
[36]   Learning to Communicate for Mobile Sensing with Multi-agent Reinforcement Learning [J].
Zhang, Bolei ;
Liu, Junliang ;
Xiao, Fu .
WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT II, 2021, 12938 :612-623
[37]   Multi-agent cooperative learning research based on reinforcement learning [J].
Liu, Fei ;
Zeng, Guangzhou .
2006 10TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, PROCEEDINGS, VOLS 1 AND 2, 2006, :1408-1413
[38]   Learning of Communication Codes in Multi-Agent Reinforcement Learning Problem [J].
Kasai, Tatsuya ;
Tenmoto, Hiroshi ;
Kamiya, Akimoto .
2008 IEEE CONFERENCE ON SOFT COMPUTING IN INDUSTRIAL APPLICATIONS SMCIA/08, 2009, :1-+
[39]   Cooperative Multi-Agent Reinforcement Learning With Approximate Model Learning [J].
Park, Young Joon ;
Lee, Young Jae ;
Kim, Seoung Bum .
IEEE ACCESS, 2020, 8 :125389-125400
[40]   Multi-agent reinforcement learning based on local communication [J].
Zhang, Wenxu ;
Ma, Lei ;
Li, Xiaonan .
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2019, 22 (Suppl 6) :15357-15366