Control of battery charging based on reinforcement learning and long short-term memory networks

Cited by: 22
Authors
Chang, Fangyuan [1 ]
Chen, Tao [1 ]
Su, Wencong [1 ]
Alsafasfeh, Qais [2 ]
Affiliations
[1] Univ Michigan, Dept Elect & Comp Engn, Dearborn, MI 48128 USA
[2] Tafila Tech Univ, Dept Elect Power & Mechatron, Tafila 11183, Jordan
Keywords
Energy storage battery/system; Smart charging control; Long short-term memory (LSTM); Reinforcement learning (RL); Electric vehicle (EV)
DOI
10.1016/j.compeleceng.2020.106670
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
In an electricity market with time-varying pricing, uncontrolled charging of energy storage systems (ESSs) can increase charging costs. This paper proposes a novel battery charging control methodology based on reinforcement learning (RL) to minimize those costs. A key characteristic of the method is that it is model-free, requiring no high-accuracy battery/ESS model; it therefore overcomes the challenges posed by the limited range of available battery models and by non-negligible parametric uncertainties in practice. Additionally, since accurate prediction of fluctuating electricity prices improves control performance, a long short-term memory (LSTM) network is leveraged to increase prediction precision. The final control objective is to find an optimal charging portfolio that minimizes charging costs. Moreover, the presented control algorithm provides a basic framework for more complex electricity markets in which various types of ESSs, generators, and loads coexist. (C) 2020 Elsevier Ltd. All rights reserved.
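The abstract outlines two components: a model-free RL controller that learns a cost-minimizing charging policy, and an LSTM network that forecasts electricity prices. This record does not reproduce the paper's algorithmic details, so the following is a minimal, hypothetical sketch of the first component using tabular Q-learning over a discretized state of charge and price tier. The discretization, reward definition, and random price process are all assumptions made here for illustration, not the authors' design.

```python
# Minimal tabular Q-learning sketch of model-free charging control.
# Illustrative only: discretization, reward, and price process are assumed,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_SOC = 11            # assumed: state of charge in 11 levels (0%..100%)
N_PRICE = 5           # assumed: electricity price in 5 tiers
ACTIONS = [-1, 0, 1]  # discharge one level, idle, charge one level

Q = np.zeros((N_SOC, N_PRICE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(soc, price_tier, a_idx):
    """Toy environment: pay for energy bought, earn for energy sold."""
    a = ACTIONS[a_idx]
    new_soc = int(np.clip(soc + a, 0, N_SOC - 1))
    price = 0.05 + 0.05 * price_tier        # assumed $/kWh per tier
    reward = -price * (new_soc - soc)       # negative charging cost
    new_tier = int(rng.integers(N_PRICE))   # stand-in for an LSTM forecast
    return new_soc, new_tier, reward

for episode in range(5000):
    soc, tier = int(rng.integers(N_SOC)), int(rng.integers(N_PRICE))
    for t in range(24):                     # one episode = 24 hourly slots
        # epsilon-greedy action selection
        if rng.random() < eps:
            a_idx = int(rng.integers(len(ACTIONS)))
        else:
            a_idx = int(np.argmax(Q[soc, tier]))
        nsoc, ntier, r = step(soc, tier, a_idx)
        # Standard Q-learning update toward the bootstrapped target.
        Q[soc, tier, a_idx] += alpha * (
            r + gamma * Q[nsoc, ntier].max() - Q[soc, tier, a_idx])
        soc, tier = nsoc, ntier

# Greedy policy: for each (SOC, price tier), the learned action.
policy = Q.argmax(axis=2)
print(policy)
```

Likewise, a hedged sketch of the second component, assuming a single-layer LSTM regressor in PyTorch trained on a synthetic series; the actual network architecture, hyperparameters, and training data are not given in this record.

```python
# Hypothetical LSTM price forecaster (architecture assumed, not the paper's).
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, 1) past prices
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next price point

model = PriceLSTM()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training loop on a synthetic sine-wave "price" series (illustration only).
seq = torch.sin(torch.linspace(0, 12.56, 49)).reshape(1, 49, 1)
x, y = seq[:, :-1, :], seq[:, -1, :]
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```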
Pages: 10