Deep Reinforcement Learning-Based Speed Predictor for Distributionally Robust Eco-Driving

Cited by: 0
Authors
Chaudhary, Rajan [1 ]
Sharma, Nalin Kumar [2 ]
Kala, Rahul [3 ]
Singh, Sri Niwas [1 ]
Affiliations
[1] ABV Indian Inst Informat Technol & Management, Dept Elect & Elect Engn, Gwalior 474015, India
[2] Indian Inst Technol Jammu, Dept Elect Engn, Jammu 180019, India
[3] ABV Indian Inst Informat Technol & Management, Ctr Autonomous Syst, Gwalior 474015, India
Source
IEEE ACCESS | 2025 / Vol. 13
Keywords
Drag; Safety; Predictive models; Lead; Trajectory; Fuels; Force; Aerodynamics; Vectors; Uncertainty; Deep reinforcement learning; distributionally robust; eco-driving; heavy-duty vehicles; leading vehicle observer; model predictive control; COOPERATIVE ENERGY MANAGEMENT; HEAVY-DUTY VEHICLE; LOOK-AHEAD CONTROL; FUEL; PLATOON; ROADS; MODEL;
DOI
10.1109/ACCESS.2025.3530087
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
This paper proposes an eco-driving technique for an ego vehicle operating behind a non-communicating leading Heavy-Duty Vehicle (HDV), aimed at minimizing energy consumption while maintaining a safe inter-vehicle distance. A novel data-driven approach based on Deep Reinforcement Learning (DRL) is developed to predict the future speed trajectory of the leading HDV using simulated speed profiles and road slope information. The Deep Q-Network (DQN)-based speed predictor achieves a prediction accuracy of 95.4% and 93.2% in Driving Cycles 1 and 2, respectively. The predicted speed is then used to optimize the ego vehicle's speed plan through a distributionally robust Model Predictive Controller (MPC), which accounts for uncertainties in the prediction to ensure operational safety. The proposed method demonstrates energy savings of 12.5% in Driving Cycle 1 and 8.6% in Driving Cycle 2, compared with traditional leading-vehicle speed prediction methods. Validated through case studies across simulated and real-world driving cycles, the solution is scalable, computationally efficient, and suitable for real-time applications in Intelligent Transportation Systems (ITS), making it a viable approach for enhancing sustainability in non-communicating vehicle environments.
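The abstract describes a two-stage pipeline: a learned predictor forecasts the leading HDV's speed, and a receding-horizon planner chooses ego speeds that follow that forecast while keeping a conservative safety gap. The Python sketch below illustrates only that structure under stated assumptions; the predict_lead_speed stand-in, the plan_ego_speed planner, and every numeric constant (DT, HORIZON, MIN_GAP, ROBUST_MARGIN, MAX_ACCEL) are hypothetical and do not reproduce the paper's DQN or its distributionally robust MPC formulation.

# Illustrative sketch of the two-stage idea from the abstract (assumed,
# not the paper's implementation): a learned predictor forecasts the
# leading HDV's speed, and a receding-horizon planner picks ego speeds
# that track the forecast while keeping a conservative safety gap.
import numpy as np

DT = 1.0              # planning step [s] (assumed)
HORIZON = 10          # forecast/planning horizon [steps] (assumed)
MIN_GAP = 10.0        # minimum inter-vehicle distance [m] (assumed)
ROBUST_MARGIN = 2.0   # extra gap hedging prediction error [m] (assumed)
MAX_ACCEL = 1.5       # ego accel/decel limit [m/s^2] (assumed)


def predict_lead_speed(lead_speed_history, road_slope):
    """Stand-in for the trained DQN speed predictor.

    Extrapolates the last observed speed with a small slope-dependent
    trend; the actual predictor in the paper is learned from simulated
    speed profiles and road-slope information.
    """
    v_last = lead_speed_history[-1]
    trend = -0.2 * road_slope  # slight deceleration uphill (illustrative)
    return np.clip(v_last + trend * np.arange(1, HORIZON + 1), 0.0, None)


def plan_ego_speed(ego_speed, gap, lead_speed_forecast):
    """Greedy receding-horizon plan (stand-in for the robust MPC).

    Tracks the predicted lead speed within an acceleration limit and
    backs off whenever the projected gap would fall below the minimum
    distance plus a robustness margin.
    """
    plan, v, g = [], ego_speed, gap
    for v_lead in lead_speed_forecast:
        v_des = float(np.clip(v_lead, v - MAX_ACCEL * DT, v + MAX_ACCEL * DT))
        projected_gap = g + (v_lead - v_des) * DT
        if projected_gap < MIN_GAP + ROBUST_MARGIN:
            v_des = max(0.0, v_des - (MIN_GAP + ROBUST_MARGIN - projected_gap) / DT)
        g += (v_lead - v_des) * DT
        v = v_des
        plan.append(v)
    return np.array(plan)


if __name__ == "__main__":
    history = np.array([21.0, 20.5, 20.0])  # recent lead HDV speeds [m/s]
    forecast = predict_lead_speed(history, road_slope=0.03)
    ego_plan = plan_ego_speed(ego_speed=22.0, gap=15.0, lead_speed_forecast=forecast)
    print("lead forecast [m/s]:", np.round(forecast, 2))
    print("ego plan      [m/s]:", np.round(ego_plan, 2))

Running the example prints an illustrative lead-speed forecast and the corresponding ego speed plan; the extra ROBUST_MARGIN term is only a crude stand-in for how a distributionally robust controller hedges against prediction uncertainty.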
Pages: 13904-13918
Page count: 15
Related Papers
50 records in total
[21]   Learning eco-driving strategies from human driving trajectories [J].
Shi, Xiaoyu ;
Zhang, Jian ;
Jiang, Xia ;
Chen, Juan ;
Hao, Wei ;
Wang, Bo .
PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 633
[22]   LESY-ECO: Learning system for eco-driving based on the imitation [J].
Corcoba Magana, V. ;
Munoz Organero, M. .
2014 INTERNATIONAL CONFERENCE ON CONNECTED VEHICLES AND EXPO (ICCVE), 2014, :351-356
[23]   Hybrid deep reinforcement learning based eco-driving for low-level connected and automated vehicles along signalized corridors [J].
Guo, Qiangqiang ;
Angah, Ohay ;
Liu, Zhijun ;
Ban, Xuegang .
TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2021, 124
[24]   Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning [J].
Jamil, Umar ;
Malmir, Mostafa ;
Chen, Alan ;
Filipovska, Monika ;
Xie, Mimi ;
Ding, Caiwen ;
Jin, Yu-Fang .
SCIENCE PROGRESS, 2024, 107 (03)
[25]   DeepGrid: Robust Deep Reinforcement Learning-based Contingency Management [J].
Ghasemkhani, Amir ;
Darvishi, Atena ;
Niazazari, Iman ;
Darvishi, Azita ;
Livani, Hanif ;
Yang, Lei .
2020 IEEE POWER & ENERGY SOCIETY INNOVATIVE SMART GRID TECHNOLOGIES CONFERENCE (ISGT), 2020,
[26]   Optimization of Speed Trajectory for Eco-driving Considering Road Characteristics [J].
Kim, Kyunghyun ;
Lee, Heeyun ;
Song, Changhee ;
Kang, Changbeom ;
Cha, Suk Won .
2018 IEEE VEHICLE POWER AND PROPULSION CONFERENCE (VPPC), 2018,
[27]   Overcoming driving challenges in complex urban traffic: A multi-objective eco-driving strategy via safety model based reinforcement learning [J].
Li, Jie ;
Wu, Xiaodong ;
Fan, Jiawei ;
Liu, Yonggang ;
Xu, Min .
ENERGY, 2023, 284
[28]   Eco-Driving Cruise Control for 4WIMD-EVs Based on Receding Horizon Reinforcement Learning [J].
Zhang, Zhe ;
Ding, Haitao ;
Guo, Konghui ;
Zhang, Niaona .
ELECTRONICS, 2023, 12 (06)
[29]   Application and Evaluation of the Reinforcement Learning Approach to Eco-Driving at Intersections under Infrastructure-to-Vehicle Communications [J].
Shi, Junqing ;
Qiao, Fengxiang ;
Li, Qing ;
Yu, Lei ;
Hu, Yongju .
TRANSPORTATION RESEARCH RECORD, 2018, 2672 (25) :89-98
[30]   Learning based eco-driving strategy of connected electric vehicle at signalized intersection [J].
Zhuang, W.-C. ;
Ding, H.-N. ;
Dong, H.-X. ;
Yin, G.-D. ;
Wang, X. ;
Zhou, C.-B. ;
Xu, L.-W. .
Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2023, 53 (01) :82-93