Multi-agent reinforcement learning for multi-area power exchange

Cited by: 1
Authors
Xi, Jiachen [1]
Garcia, Alfredo [1]
Chen, Yu Christine [2]
Khatami, Roohallah [3]
Affiliations
[1] Texas A&M Univ, Dept Ind & Syst Engn, College Stn, TX 77840 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[3] Southern Illinois Univ, Sch Elect Comp & Biomed Engn, Carbondale, IL USA
Keywords
Power system; Reinforcement learning; Uncertainty; Decentralized algorithm; Actor-critic algorithm; MODEL; LOAD
DOI
10.1016/j.epsr.2024.110711
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline classification codes
0808; 0809
Abstract
Increasing renewable integration leads to faster and more frequent fluctuations in the power-system net load (load minus non-dispatchable renewable generation), along with greater uncertainty in its forecast. These trends exacerbate the computational burden of centralized power-system optimization (or market clearing) that accounts for variability and uncertainty in net load. A further layer of complexity lies in estimating accurate models of spatio-temporal net-load uncertainty. Taken together, these challenges make decentralized approaches that learn to optimize (or to clear a market) using only local information compelling to explore. This paper develops a decentralized multi-agent reinforcement learning (MARL) approach that seeks to learn optimal policies for operating interconnected power systems under uncertainty. The proposed method incurs lower computational and communication burdens than a centralized stochastic-programming approach and offers improved privacy preservation. Numerical simulations involving a three-area test system yield desirable results, with the average net operation costs within 5% of those obtained from a benchmark centralized model predictive control solution.
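To make the decentralized-learning idea concrete, below is a minimal toy sketch, not the paper's algorithm: three independent policy-gradient learners, one per area, each adjusting a local dispatch gain from its own observations while being coupled only through a shared system imbalance. The quadratic costs, linear policies, and all names (`AreaAgent`, `train`, `evaluate`) are hypothetical illustration, assuming nothing about the paper's actual formulation.

```python
import random

class AreaAgent:
    """One control area that learns a local dispatch gain theta from its own
    observations only (an independent actor-critic-style learner)."""

    def __init__(self, lr=0.05, sigma=0.3):
        self.theta = 0.0      # actor parameter: dispatch g = theta * net_load
        self.baseline = 0.0   # running-average critic used as a baseline
        self.lr = lr
        self.sigma = sigma    # exploration noise level

    def act(self, net_load, rng):
        noise = rng.gauss(0.0, self.sigma)
        return self.theta * net_load + noise, noise

    def update(self, net_load, noise, cost):
        # REINFORCE-style step with a baseline; lower cost is better,
        # so move theta against the advantage-weighted score direction
        advantage = cost - self.baseline
        self.baseline += 0.1 * (cost - self.baseline)
        self.theta -= self.lr * advantage * noise * net_load

def episode_costs(loads, gens):
    # shared imbalance couples the areas; each also pays a local quadratic cost
    imbalance = sum(loads) - sum(gens)
    return [0.1 * g * g + imbalance ** 2 for g in gens]

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    agents = [AreaAgent() for _ in range(3)]
    for _ in range(episodes):
        loads = [rng.uniform(0.5, 1.5) for _ in range(3)]
        acts = [a.act(d, rng) for a, d in zip(agents, loads)]
        costs = episode_costs(loads, [g for g, _ in acts])
        for a, d, (g, n), c in zip(agents, loads, acts, costs):
            a.update(d, n, c)   # each agent updates from local data only
    return agents

def evaluate(agents, episodes=500, seed=1):
    # deterministic (noise-free) average total cost of the current policies
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        loads = [rng.uniform(0.5, 1.5) for _ in range(3)]
        gens = [a.theta * d for a, d in zip(agents, loads)]
        total += sum(episode_costs(loads, gens))
    return total / episodes
```

The point of the sketch is the information pattern, not the dynamics: no agent ever sees another area's load, policy, or cost, mirroring the privacy and communication advantages the abstract claims over a centralized stochastic program.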
Pages: 9
Related papers
50 records in total
[21]   Automatic partitioning for multi-agent reinforcement learning [J].
Sun, R ;
Peterson, T .
ICONIP'98: THE FIFTH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING JOINTLY WITH JNNS'98: THE 1998 ANNUAL CONFERENCE OF THE JAPANESE NEURAL NETWORK SOCIETY - PROCEEDINGS, VOLS 1-3, 1998, :268-271
[22]   Reinforcement Learning for Multi-Agent Competitive Scenarios [J].
Coutinho, Manuel ;
Reis, Luis Paulo .
2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC), 2022, :130-135
[23]   Multi-Agent Reinforcement Learning for Highway Platooning [J].
Kolat, Mate ;
Becsi, Tamas .
ELECTRONICS, 2023, 12 (24)
[24]   SCM network with multi-agent reinforcement learning [J].
Zhao, Gang ;
Sun, Ruoying .
FIFTH WUHAN INTERNATIONAL CONFERENCE ON E-BUSINESS, VOLS 1-3, 2006, :1294-1300
[25]   Generalized learning automata for multi-agent reinforcement learning [J].
De Hauwere, Yann-Michael ;
Vrancx, Peter ;
Nowe, Ann .
AI COMMUNICATIONS, 2010, 23 (04) :311-324
[26]   Emergent cooperation from mutual acknowledgment exchange in multi-agent reinforcement learning [J].
Phan, Thomy ;
Sommer, Felix ;
Ritz, Fabian ;
Altmann, Philipp ;
Nuesslein, Jonas ;
Koelle, Michael ;
Belzner, Lenz ;
Linnhoff-Popien, Claudia .
AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2024, 38 (02)
[27]   A Distributed Multi-Agent Dynamic Area Coverage Algorithm Based on Reinforcement Learning [J].
Xiao, Jian ;
Wang, Gang ;
Zhang, Ying ;
Cheng, Lei .
IEEE ACCESS, 2020, 8 :33511-33521
[28]   Multi-Agent Reinforcement Learning for Smart Community Energy Management [J].
Wilk, Patrick ;
Wang, Ning ;
Li, Jie .
ENERGIES, 2024, 17 (20)
[29]   IntelligentCrowd: Mobile Crowdsensing via Multi-Agent Reinforcement Learning [J].
Chen, Yize ;
Wang, Hao .
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2021, 5 (05) :840-845
[30]   A review of the applications of multi-agent reinforcement learning in smart factories [J].
Bahrpeyma, Fouad ;
Reichelt, Dirk .
FRONTIERS IN ROBOTICS AND AI, 2022, 9