An Energy-Efficient Driving Method for Connected and Automated Vehicles Based on Reinforcement Learning

Cited by: 2
Authors
Min, Haitao [1 ]
Xiong, Xiaoyong [1 ]
Yang, Fang [2 ]
Sun, Weiyi [1 ]
Yu, Yuanbin [1 ]
Wang, Pengyu [1 ]
Affiliations
[1] Jilin Univ, State Key Lab Automot Simulat & Control, Changchun 130012, Peoples R China
[2] China FAW Corp Ltd, Gen Res & Dev Inst, Changchun 130013, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
connected and automated vehicles; energy-efficient driving; reinforcement learning; long short-term memory; proximal policy optimization; MODEL-PREDICTIVE CONTROL; ELECTRIC VEHICLES;
DOI
10.3390/machines11020168
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
The development of connected and automated vehicle (CAV) technology not only helps to reduce traffic accidents and improve traffic efficiency, but also has significant potential for energy saving and emission reduction. Optimizing a vehicle's trajectory using dynamic traffic-flow information around it helps improve its energy efficiency. Therefore, an energy-efficient driving method for CAVs based on reinforcement learning is proposed in this paper. First, a set of vehicle trajectory prediction models based on long short-term memory (LSTM) neural networks is developed; these models integrate driving-intention prediction and lane-change-time prediction to improve the prediction accuracy of surrounding vehicles' trajectories. Second, an energy-efficient driving model is built based on Proximal Policy Optimization (PPO) reinforcement learning. The model takes the current states and predicted trajectories of surrounding vehicles as input, and outputs energy-saving control variables while accounting for constraints such as safety, comfort, and travel efficiency. Finally, the method is tested by simulation on the NGSIM dataset, and the results show that it can reduce energy consumption by 9-22%.
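The energy-saving controller described in the abstract is trained with Proximal Policy Optimization. A minimal NumPy sketch of PPO's clipped surrogate objective, the core of that algorithm (the function name, the epsilon value of 0.2, and the toy inputs are illustrative assumptions, not details taken from this paper):

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of PPO.

    Returns the negative of the clipped surrogate, i.e. a loss to minimize.
    logp_new / logp_old are per-action log-probabilities under the current
    and behavior policies; advantages are the estimated advantage values.
    """
    ratio = np.exp(logp_new - logp_old)               # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the elementwise minimum removes the incentive to move the
    # policy ratio outside the [1 - eps, 1 + eps] trust region.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy check: doubling an action's probability with positive advantage
# is clipped at ratio 1.2, so the loss is -1.2.
loss = ppo_clip_loss(np.array([np.log(2.0)]), np.array([0.0]), np.array([1.0]))
print(loss)  # -1.2
```

In the paper's setting, the policy input would be the ego state plus the LSTM-predicted trajectories of surrounding vehicles, and the advantage would reflect the energy-consumption reward under the safety, comfort, and travel-efficiency constraints.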
Pages: 20