Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment

Citations: 41
Authors
Chen, Jiaxin [1 ]
Shu, Hong [1 ]
Tang, Xiaolin [1 ]
Liu, Teng [2 ]
Wang, Weida [3 ]
Affiliations
[1] Chongqing Univ, Coll Mech & Vehicle Engn, Chongqing 400044, Peoples R China
[2] Univ Waterloo, Dept Mech & Mechatron Engn, Waterloo, ON N2L 3G1, Canada
[3] Beijing Inst Technol, Sch Mech Engn, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Hybrid electric vehicle; Road recognition network; Deep reinforcement learning; Multi-objective control network; Energy management strategy; ENERGY MANAGEMENT; ELECTRIC VEHICLES; MODEL; STRATEGY;
DOI
10.1016/j.energy.2021.122123
CLC Classification Number
O414.1 [Thermodynamics];
Discipline Classification Code
Abstract
Aiming to promote the intelligent development of control technology for new energy vehicles and to demonstrate the outstanding advantages of deep reinforcement learning (DRL), this paper first trained a VGG16-based road recognition convolutional neural network (CNN). A large number of high-definition images of five typical road types were collected from the racing game DiRT Rally 2.0: dry asphalt, wet asphalt, snow, dry cobblestone, and wet cobblestone. Then, a time-varying driving environment model was established, involving driving images, road slope, longitudinal speed, and the number of passengers. Finally, a stereoscopic control network suited to a nine-dimensional state space and a three-dimensional action space was built, and for parallel hybrid electric vehicles (HEVs) with the P3 structure, a deep Q-network (DQN)-based energy management strategy (EMS) achieving multi-objective control was proposed, comprising a fine-tuning strategy for the motor speed to maintain the optimal slip rate during braking, an engine power control strategy, and a continuously variable transmission (CVT) gear ratio control strategy. Simulation results show that, under the influence of factors such as tree shade and image compression, the road recognition network achieves its highest accuracy on snow and wet asphalt roads. The three control strategies learned simultaneously by the stereoscopic control network not only maintain a near-optimal slip rate during braking but also achieve a fuel consumption of 4788.93 g, compared with 4295.61 g for the dynamic programming (DP)-based EMS. Moreover, even though the DP-based EMS contains only three states and two actions, the times consumed by the DP-based EMS and the DQN-based EMS to run the 3602 s speed cycle are about 4911 s and 10 s, respectively. Therefore, both the optimization and the real-time performance of the DRL-based EMS can be guaranteed. (c) 2021 Elsevier Ltd. All rights reserved.
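As a hedged illustration of the road recognition step described in the abstract, the sketch below (not the authors' code) builds a five-class road classifier by replacing the final layer of an ImageNet-pretrained VGG16. Freezing the convolutional features, the 224x224 input size, and the training setup are assumptions for illustration only.

import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained VGG16 and reuse its convolutional features.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False  # assumption: only the classifier head is fine-tuned

# Replace the 1000-way ImageNet head with a 5-way head for the five road
# types: dry asphalt, wet asphalt, snow, dry cobblestone, wet cobblestone.
vgg.classifier[6] = nn.Linear(4096, 5)

logits = vgg(torch.randn(1, 3, 224, 224))  # one 224x224 RGB road image
road_class = logits.argmax(dim=1)          # predicted road-type index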
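Likewise, a minimal sketch of how a single Q-network can serve a nine-dimensional state and a three-dimensional discrete action space, in the spirit of the "stereoscopic control network": a shared trunk with one Q-value head per control variable (motor-speed fine-tuning, engine power, CVT gear ratio). All layer widths and the number of discrete levels per action are assumptions, not values from the paper.

import torch
import torch.nn as nn

class MultiHeadDQN(nn.Module):
    def __init__(self, state_dim=9, levels=(11, 11, 11)):
        super().__init__()
        # Shared trunk over the 9-D state (road class, slope, speed, etc.).
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # One Q-value head per control variable, each over `levels[i]`
        # discretized action bins (DQN requires discrete actions).
        self.heads = nn.ModuleList([nn.Linear(128, n) for n in levels])

    def forward(self, state):
        h = self.trunk(state)
        return [head(h) for head in self.heads]  # one Q-vector per action dim

net = MultiHeadDQN()
q_values = net(torch.randn(1, 9))
actions = [q.argmax(dim=1) for q in q_values]  # greedy choice per dimension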
Pages: 15
References
43 in total
  • [11] Lillicrap T. P. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015
  • [12] Liu ChuanBin, Ma YongHong, Yin Hang, Yu LeAn. Human resource allocation for multiple scientific research projects via improved pigeon-inspired optimization algorithm. SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2021, 64(1): 139-147
  • [13] Liu Tao, Tan Zehan, Xu Chengliang, Chen Huanxin, Li Zhengfei. Study on deep reinforcement learning techniques for building energy consumption forecasting. ENERGY AND BUILDINGS, 2020, 208
  • [14] Liu Teng, Tang Xiaolin, Wang Hong, Yu Huilong, Hu Xiaosong. Adaptive Hierarchical Energy Management Design for a Plug-In Hybrid Electric Vehicle. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68(12): 11513-11522
  • [15] Liu Teng, Hu Xiaosong, Hu Weihao, Zou Yuan. A Heuristic Planning Reinforcement Learning-Based Energy Management for Power-Split Plug-in Hybrid Electric Vehicles. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15(12): 6436-6445
  • [16] Liu Teng, Hu Xiaosong, Li Shengbo Eben, Cao Dongpu. Reinforcement Learning Optimized Look-Ahead Energy Management of a Parallel Hybrid Electric Vehicle. IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2017, 22(4): 1497-1507
  • [17] Mnih V. Asynchronous methods for deep reinforcement learning. PROCEEDINGS OF MACHINE LEARNING RESEARCH, 2016, 48
  • [18] Mnih Volodymyr, Kavukcuoglu Koray, Silver David, Rusu Andrei A., Veness Joel, Bellemare Marc G., Graves Alex, Riedmiller Martin, Fidjeland Andreas K., Ostrovski Georg, Petersen Stig, Beattie Charles, Sadik Amir, Antonoglou Ioannis, King Helen, Kumaran Dharshan, Wierstra Daan, Legg Shane, Hassabis Demis. Human-level control through deep reinforcement learning. NATURE, 2015, 518(7540): 529-533
  • [19] Onori S. PROCEEDINGS OF THE ASME DYNAMIC SYSTEMS AND CONTROL CONFERENCE 2010, 2010, Vol. 1: 499
  • [20] Peng Jiankun, He Hongwen, Xiong Rui. Rule based energy management strategy for a series-parallel plug-in hybrid electric bus optimized by dynamic programming. APPLIED ENERGY, 2017, 185: 1633-1643