Energy Management Strategy for a Hybrid Electric Vehicle Based on Deep Reinforcement Learning

Cited by: 182
Authors
Hu, Yue [1 ,2 ,3 ]
Li, Weimin [1 ,3 ,4 ]
Xu, Kun [1 ]
Zahid, Taimoor [1 ,2 ]
Qin, Feiyan [1 ,2 ]
Li, Chenming [5 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, Shenzhen Coll Adv Technol, Shenzhen 518055, Peoples R China
[3] Chinese Acad Sci, Jining Inst Adv Technol, Jining 272000, Peoples R China
[4] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Hong Kong 999077, Hong Kong, Peoples R China
[5] Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong 999077, Hong Kong, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2018, Vol. 8, Issue 2
Funding
National Natural Science Foundation of China;
Keywords
hybrid electric vehicle; energy management strategy; deep reinforcement learning; online learning; optimization;
DOI
10.3390/app8020187
Chinese Library Classification (CLC)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
An energy management strategy (EMS) is important for hybrid electric vehicles (HEVs) since it plays a decisive role in vehicle performance. However, variations in future driving conditions strongly influence the effectiveness of the EMS. Most existing EMS methods simply follow predefined rules that do not adapt to different driving conditions online. It is therefore desirable for the EMS to learn from the environment or driving cycle. In this paper, a deep reinforcement learning (DRL)-based EMS is designed that learns to select actions directly from the states, without any prediction or predefined rules. Furthermore, a DRL-based online learning architecture is presented, which is essential for applying the DRL algorithm to HEV energy management under different driving conditions. Simulation experiments were conducted through MATLAB and Advanced Vehicle Simulator (ADVISOR) co-simulation. The results validate the effectiveness of the DRL-based EMS compared with a rule-based EMS in terms of fuel economy, and the online learning architecture is also shown to be effective. The proposed method ensures optimality as well as real-time applicability in HEVs.
Pages: 15
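To make the abstract's approach concrete, the sketch below shows one way a DRL-based EMS of this kind can be structured: a small deep Q-network maps a raw state to Q-values over a discretized set of power-split actions, with an epsilon-greedy policy and experience replay supporting the online-learning setting. This is a minimal sketch in Python/PyTorch rather than the MATLAB/ADVISOR implementation described in the paper; the state definition (battery SOC, vehicle speed, power demand), the action discretization, the network size, and the reward convention are illustrative assumptions, not necessarily the authors' exact design.

```python
# Minimal DQN-style sketch of a DRL energy management strategy (EMS).
# Assumed (not from the paper): state = (battery SOC, vehicle speed, power demand),
# action = discretized engine power level, reward = -(fuel use + SOC deviation penalty).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3      # assumed state: battery SOC, vehicle speed, power demand
N_ACTIONS = 11     # assumed action set: engine power from 0% to 100% in 10% steps
GAMMA = 0.99       # discount factor


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per action."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.layers(x)


q_net = QNet()                    # online network, updated every training step
target_net = QNet()               # target network, synchronized periodically
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)     # experience replay buffer of (s, a, r, s', done)


def select_action(state, epsilon):
    """Epsilon-greedy action selection directly from the raw state vector."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q_values = q_net(torch.tensor(state, dtype=torch.float32))
    return int(q_values.argmax())


def train_step(batch_size=64):
    """One Q-learning update on a minibatch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    s = torch.tensor(states, dtype=torch.float32)
    a = torch.tensor(actions, dtype=torch.int64)
    r = torch.tensor(rewards, dtype=torch.float32)
    s2 = torch.tensor(next_states, dtype=torch.float32)
    d = torch.tensor(dones, dtype=torch.float32)

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bellman target: reward plus discounted best Q-value of the next state.
        q_target = r + GAMMA * target_net(s2).max(dim=1).values * (1.0 - d)
    loss = nn.functional.mse_loss(q, q_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In use, transitions gathered from a vehicle simulation such as the MATLAB/ADVISOR co-simulation (or from driving data in the online setting) would be appended to the replay buffer, and the target network would be periodically synchronized to the online network; the same update rule then serves both offline training and the online learning scenario described in the abstract.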