Multiagent Reinforcement Learning With Learning Automata for Microgrid Energy Management and Decision Optimization

Cited: 0
Authors
Fang, Xiaohan [1 ]
Wang, Jinkuan [1 ]
Yin, Chunhui [1 ]
Han, Yinghua [2 ]
Zhao, Qiang [3 ]
Affiliations
[1] Northeastern Univ, Sch Informat Sci & Engn, Shenyang 110004, Peoples R China
[2] Northeastern Univ Qinhuangdao, Sch Comp & Commun Engn, Qinhuangdao 066004, Hebei, Peoples R China
[3] Northeastern Univ Qinhuangdao, Sch Control Engn, Qinhuangdao 066004, Hebei, Peoples R China
Source
PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020) | 2020
Keywords
Microgrid; Auction Market; Multiagent Reinforcement Learning; Learning Automata; Equilibrium Selection; SYSTEMS;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
With electricity users' increasing willingness to participate actively in power scheduling and to pursue their own interests, the management and optimization of residential microgrids face higher requirements: balancing the tradeoff between overall operational objectives and individual rights, and resolving the influence of various uncertainties. Therefore, this paper proposes a multiagent reinforcement learning (MARL) approach for an auction-based microgrid market. Distributed model-free reinforcement learning enables each supplier and user to form reasonable market strategies, while equilibrium-based game theory is incorporated into the learning process to ensure utility balance and supply-demand balance across the whole microgrid. In addition, to guarantee the efficiency of MARL, a learning automaton (LA) is introduced to improve the strategy-selection procedure, which plays an essential role in algorithm optimization. A case study of microgrid market operation verifies the performance of the proposed approach.
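The strategy-selection step that the abstract attributes to the learning automaton can be illustrated with a linear reward-inaction (L_R-I) update, a classical LA scheme. The paper does not state which LA update rule it actually uses, so the function names, the learning rate `lam`, and the normalized-reward assumption below are illustrative, not a reconstruction of the authors' method.

```python
import random

def la_update(probs, action, reward, lam=0.1):
    """Linear reward-inaction (L_R-I) update for a learning automaton.

    probs  -- current action-probability vector (sums to 1)
    action -- index of the action that was played
    reward -- environment feedback normalized to [0, 1]
    lam    -- learning rate (hypothetical default)

    The chosen action's probability moves toward 1 in proportion to the
    reward; all other probabilities shrink so the vector stays normalized.
    """
    return [
        p + lam * reward * (1.0 - p) if i == action
        else p - lam * reward * p
        for i, p in enumerate(probs)
    ]

def select_action(probs, rng=random.random):
    """Sample an action index from the probability vector."""
    r, cum = rng(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

Under this scheme, an agent that repeatedly receives high reward for one market strategy concentrates probability on it, while zero-reward rounds leave the vector unchanged (the "inaction" part), which is what makes L_R-I attractive for noisy auction feedback.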
Pages: 779 - 784
Page count: 6
Related Papers (50 records)
  • [1] Reinforcement learning for microgrid energy management
    Kuznetsova, Elizaveta
    Li, Yan-Fu
    Ruiz, Carlos
    Zio, Enrico
    Ault, Graham
    Bell, Keith
    ENERGY, 2013, 59 : 133 - 146
  • [2] Learning Automata-Based Multiagent Reinforcement Learning for Optimization of Cooperative Tasks
    Zhang, Zhen
    Wang, Dongqing
    Gao, Junwei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (10) : 4639 - 4652
  • [3] Multiagent Bayesian Deep Reinforcement Learning for Microgrid Energy Management Under Communication Failures
    Zhou, Hao
    Aral, Atakan
    Brandic, Ivona
    Erol-Kantarci, Melike
    IEEE INTERNET OF THINGS JOURNAL, 2021, 9 (14) : 11685 - 11698
  • [4] Reinforcement Learning Based Optimal Energy Management of A Microgrid
    Iqbal, Saqib
    Mehran, Kamyar
    2022 IEEE ENERGY CONVERSION CONGRESS AND EXPOSITION (ECCE), 2022,
  • [5] Energy Management in Solar Microgrid via Reinforcement Learning
    Kofinas, Panagiotis
    Vouros, George
    Dounis, Anastasios I.
    9TH HELLENIC CONFERENCE ON ARTIFICIAL INTELLIGENCE (SETN 2016), 2016,
  • [6] Multi-agent Deep Reinforcement Learning for Distributed Energy Management and Strategy Optimization of Microgrid Market
    Fang, Xiaohan
    Zhao, Qiang
    Wang, Jinkuan
    Han, Yinghua
    Li, Yuchun
    SUSTAINABLE CITIES AND SOCIETY, 2021, 74
  • [7] Deep reinforcement learning for energy management in a microgrid with flexible demand
    Nakabi, Taha Abdelhalim
    Toivanen, Pekka
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2021, 25
  • [8] Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid
    Foruzan, Elham
    Soh, Leen-Kiat
    Asgarpoor, Sohrab
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2018, 33 (05) : 5749 - 5758
  • [9] Deep Reinforcement Learning Based Double-layer Optimization Method for Energy Management of Microgrid
    Yu, Qinglei
    Xu, Wei
    Lv, Jianhu
    Wang, Ying
    Zhang, Kaifeng
    2023 5TH ASIA ENERGY AND ELECTRICAL ENGINEERING SYMPOSIUM, AEEES, 2023 : 1016 - 1022
  • [10] Reinforcement Learning Based Energy Dispatch Strategy and Control Optimization of Microgrid
    Liu J.-H.
    Ke Z.-M.
    Zhou W.-H.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2020, 43 (01) : 28 - 34