Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid

Cited by: 214
Authors
Foruzan, Elham [1 ,2 ]
Soh, Leen-Kiat [3 ]
Asgarpoor, Sohrab [4 ]
Affiliations
[1] Univ Nebraska, Dept Elect & Comp Engn, Lincoln, NE 68588 USA
[2] Univ Nebraska, Comp Sci & Engn, Lincoln, NE 68588 USA
[3] Univ Nebraska, Dept Comp Sci & Engn, Lincoln, NE 68588 USA
[4] Univ Nebraska, Dept Elect & Comp Engn, Lincoln, NE 68588 USA
Keywords
Microgrid; reinforcement learning; distributed control; renewable generation; multiagent system; generation
DOI
10.1109/TPWRS.2018.2823641
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents capable of making local decisions to maximize their own profit in a multiagent environment. For every supplier, the lack of information about customers and other suppliers makes it challenging to choose the decisions that maximize its return. Similarly, customers face difficulty in scheduling their energy consumption without any information about suppliers and electricity prices. Additionally, MGs are subject to several uncertainties arising from the variability of renewable generation output power and the continuous fluctuation of customers' consumption. To overcome these challenges, a reinforcement learning algorithm was developed that allows generation resources, distributed storage, and customers to develop optimal strategies for energy management and load scheduling without prior information about each other or the MG system. Case studies show how the overall performance of all entities converges, as an emergent behavior, to the Nash equilibrium, benefiting all agents.
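The abstract does not specify the learning algorithm beyond "a reinforcement learning algorithm". As a generic illustration of how one supplier agent could learn a profitable pricing policy without information about other agents, here is a minimal tabular Q-learning sketch; the demand dynamics, price tiers, and profit function below are invented for illustration and are not the paper's MG model:

```python
import random

random.seed(0)

# Toy sketch: a supplier agent learns which price tier to bid via Q-learning.
# States: demand level (0 = low, 1 = high); actions: index into PRICES.
# The environment below is a hypothetical stand-in, not the paper's model.

N_STATES, N_ACTIONS = 2, 3
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1     # learning rate, discount, exploration
PRICES = [1.0, 2.0, 3.5]

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical market: higher prices reduce the quantity sold."""
    demand = 5.0 if state == 1 else 2.0
    sold = max(0.0, demand - PRICES[action])       # price-sensitive demand
    reward = PRICES[action] * sold                 # supplier profit
    next_state = random.randint(0, N_STATES - 1)   # exogenous demand change
    return next_state, reward

state = 0
for _ in range(5000):
    if random.random() < EPS:                      # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, r = step(state, action)
    # Standard Q-learning temporal-difference update
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

# Greedy policy learned per demand level: low price when demand is low,
# mid price when demand is high (for this toy profit function).
best = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(best)
```

In the paper's multiagent setting, every supplier, storage, and customer agent would run its own such learner concurrently, with each agent's reward depending on the others' actions; the case studies report that these interacting learners converge to a Nash equilibrium.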
Pages: 5749-5758
Page count: 10