Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment

Cited by: 41
Authors
Chen, Jiaxin [1 ]
Shu, Hong [1 ]
Tang, Xiaolin [1 ]
Liu, Teng [2 ]
Wang, Weida [3 ]
Affiliations
[1] Chongqing Univ, Coll Mech & Vehicle Engn, Chongqing 400044, Peoples R China
[2] Univ Waterloo, Dept Mech & Mech Engn, Waterloo, ON N2L 3G1, Canada
[3] Beijing Inst Technol, Sch Mech Engn, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Hybrid electric vehicle; Road recognition network; Deep reinforcement learning; Multi-objective control network; Energy management strategy; ENERGY MANAGEMENT; ELECTRIC VEHICLES; MODEL; STRATEGY;
DOI
10.1016/j.energy.2021.122123
Chinese Library Classification
O414.1 [Thermodynamics];
Abstract
Aiming at promoting the intelligent development of control technology for new energy vehicles and demonstrating the outstanding advantages of deep reinforcement learning (DRL), this paper first trained a VGG16-based road recognition convolutional neural network (CNN). A large number of high-definition images of five typical road types were collected from the racing game Dust Rally 2.0: dry asphalt, wet asphalt, snow, dry cobblestone, and wet cobblestone. Then, a time-varying driving environment model was established, involving driving images, road slope, longitudinal speed, and the number of passengers. Finally, a stereoscopic control network suited to a nine-dimensional state space and a three-dimensional action space was built, and for parallel hybrid electric vehicles (HEVs) with the P3 structure, a deep Q-network (DQN)-based energy management strategy (EMS) achieving multi-objective control was proposed, comprising a fine-tuning strategy for motor speed to maintain the optimal slip rate during braking, an engine power control strategy, and a continuously variable transmission (CVT) gear ratio control strategy. Simulation results show that, under the influence of factors such as tree shade and image compression, the road recognition network achieves its highest accuracy on snow roads and wet asphalt roads. The three types of control strategies learned simultaneously by the stereoscopic control network not only maintain a near-optimal slip rate during braking but also achieve a fuel consumption of 4788.93 g, whereas the dynamic programming (DP)-based EMS achieves 4295.61 g. Moreover, even though the DP-based EMS contains only three states and two actions, the time consumed by the DP-based EMS and the DQN-based EMS to run the 3602 s speed cycle is about 4911 s and 10 s, respectively. Therefore, both the optimization and real-time performance of the DRL-based EMS can be guaranteed. (c) 2021 Elsevier Ltd. All rights reserved.
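The abstract describes a DQN controller that maps a nine-dimensional driving state to three control channels (motor-speed fine-tuning, engine power, CVT gear ratio). A minimal sketch of that structure is given below, assuming a small two-layer MLP Q-network, a coarse discretization of each action channel into three levels (so 27 joint actions), and a plain one-step TD update; the layer sizes, discretization, and update scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative DQN-style sketch (NOT the paper's implementation): a two-layer
# MLP maps the 9-dimensional driving state to Q-values over a discretized
# 3-dimensional action (motor-speed adjustment, engine power, CVT gear ratio).
rng = np.random.default_rng(0)

STATE_DIM = 9                      # e.g. road class, slope, speed, passengers, ...
ACTIONS_PER_DIM = 3                # hypothetical coarse levels per control channel
N_ACTIONS = ACTIONS_PER_DIM ** 3   # 27 joint actions over the 3-dim action space

class TinyQNet:
    """Two-layer tanh MLP Q-network with a plain SGD one-step TD update."""
    def __init__(self, hidden=32, lr=1e-2):
        self.W1 = rng.normal(0, 0.1, (STATE_DIM, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, N_ACTIONS))
        self.b2 = np.zeros(N_ACTIONS)
        self.lr = lr

    def forward(self, s):
        h = np.tanh(s @ self.W1 + self.b1)
        return h, h @ self.W2 + self.b2

    def q(self, s):
        """Q-values for all joint actions in state s."""
        return self.forward(s)[1]

    def td_update(self, s, a, target):
        """One gradient step on 0.5 * (Q(s, a) - target)^2."""
        h, q = self.forward(s)
        err = q[a] - target
        onehot = np.eye(N_ACTIONS)[a]
        gh = self.W2[:, a] * err          # gradient flowing into hidden layer
        gz = gh * (1.0 - h ** 2)          # through tanh'
        self.W2 -= self.lr * np.outer(h, onehot) * err
        self.b2 -= self.lr * onehot * err
        self.W1 -= self.lr * np.outer(s, gz)
        self.b1 -= self.lr * gz

def epsilon_greedy(net, s, eps=0.1):
    """Standard epsilon-greedy action selection over the joint action index."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(net.q(s)))
```

A chosen joint-action index would then be decoded back into the three control channels (e.g. via base-3 digits) before being applied to the powertrain model; the reward shaping that trades off slip rate against fuel consumption is likewise specific to the paper and omitted here.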
Pages: 15