A multi-agent deep reinforcement learning based energy management for behind-the-meter resources

Cited by: 7
Authors
Wilk, Patrick [1 ]
Wang, Ning [2 ]
Li, Jie [1 ]
Affiliations
[1] Rowan Univ, Elect & Comp Engn Dept, Glassboro, NJ 08028 USA
[2] Rowan Univ, Comp Sci Dept, Glassboro, NJ USA
Keywords
Energy management; Multi-agent deep reinforcement learning; Behind-the-meter resources; Microgrids
DOI
10.1016/j.tej.2022.107129
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline codes
0807; 0820
Abstract
Future communities are becoming increasingly electrically interconnected through growing penetration of behind-the-meter (BTM) resources, specifically electric vehicles (EVs), smart buildings (SBs), and distributed renewables. The electricity infrastructure thus faces growing challenges to reliable, secure, and economic operation and control, driven by increased and hard-to-predict demand (due to EV charging and SB demand management), fluctuating renewable generation, and the plug-and-play dynamics of these resources. Reinforcement learning has been used extensively to enable network entities to obtain optimal policies. Recent advances in deep learning have enabled deep reinforcement learning (DRL) to derive optimal policies for sophisticated and capable agents, which can outperform conventional rule-based operation policies in applications such as games, natural language processing, and biology. DRL has also shown promising results in many resource management tasks. Numerous studies have applied single-agent DRL to energy management. In this paper, a fully distributed energy management framework based on multi-agent deep reinforcement learning (MADRL) is proposed to optimize BTM resource operations and improve essential service delivery to community residents.
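The multi-agent idea in the abstract can be illustrated with a toy sketch: several independent Q-learning agents (a tabular stand-in for the deep networks the paper uses), each controlling one BTM resource such as an EV charger or a smart building, learn when to consume against a shared electricity-price signal. The agent class, reward, and two-state price process below are illustrative assumptions, not the paper's actual model.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]       # 0 = stay idle, 1 = consume/charge one unit
PRICES = [1.0, 3.0]    # two price states: low and high electricity price

class BTMAgent:
    """One behind-the-meter resource learning its own policy (hypothetical)."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = defaultdict(float)            # Q[(state, action)], default 0
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        # standard one-step Q-learning update
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def env_step(price_idx, actions):
    # Toy reward: fixed benefit of 2.0 per unit consumed, minus the price.
    rewards = [a * (2.0 - PRICES[price_idx]) for a in actions]
    return rewards, random.randint(0, 1)       # next price state is random

random.seed(0)
agents = [BTMAgent() for _ in range(3)]        # e.g. EV, SB, battery
state = 0
for _ in range(5000):
    actions = [ag.act(state) for ag in agents]
    rewards, state_next = env_step(state, actions)
    for ag, a, r in zip(agents, actions, rewards):
        ag.learn(state, a, r, state_next)      # each agent learns independently
    state = state_next
```

After training, each agent independently learns to consume when the price is low and idle when it is high, without any central coordinator, which is the fully distributed property the abstract emphasizes.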
Pages: 8
Cited References (45 total)
[1] Abdullah, Heba M.; Gastli, Adel; Ben-Brahim, Lazhar. Reinforcement Learning Based EV Charging Management Systems-A Review. IEEE ACCESS, 2021, 9: 41506-41531.
[2] Ahrarinouri, Mehdi; Rastegar, Mohammad; Seifi, Ali Reza. Multiagent Reinforcement Learning for Energy Management in Residential Buildings. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17(1): 659-666.
[3] [Anonymous]. Air Quality, Energy & Sustainability.
[4] Arulkumaran, K. A Brief Survey of Deep Reinforcement Learning. 2017, p. 37.
[5] Arwa, Erick O.; Folly, Komla A. Reinforcement Learning Techniques for Optimal Power Control in Grid-Connected Microgrids: A Comprehensive Review. IEEE ACCESS, 2020, 8: 208992-209007.
[6] Bahdanau, Dzmitry. An Actor-Critic Algorithm for Sequence Prediction. 2017.
[7] Bellemare, Marc G.; Naddaf, Yavar; Veness, Joel; Bowling, Michael. The Arcade Learning Environment: An Evaluation Platform for General Agents. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2013, 47: 253-279.
[8] Bengio, Yoshua; Courville, Aaron; Vincent, Pascal. Representation Learning: A Review and New Perspectives. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35(8): 1798-1828.
[9] Chen, Song-Jen; Chiu, Wei-Yu; Liu, Wei-Jen. User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning. IEEE ACCESS, 2021, 9: 161627-161637.
[10] Chen, Wuhui; Qiu, Xiaoyu; Cai, Ting; Dai, Hong-Ning; Zheng, Zibin; Zhang, Yan. Deep Reinforcement Learning for Internet of Things: A Comprehensive Survey. IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2021, 23(3): 1659-1692.