A Novel Experience Replay-Based Offline Deep Reinforcement Learning for Energy Management of Hybrid Electric Vehicles

Cited by: 1
Authors
Niu, Zegong [1 ,2 ]
Wu, Jingda [3 ]
He, Hongwen [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Natl Engn Lab Elect Vehicles, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Sch Mech Engn, Beijing 100081, Peoples R China
[3] Hong Kong Polytech Univ, Dept Ind & Syst Engn, Hung Hom, Kowloon, Hong Kong 999077, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Energy management; Engines; Hybrid electric vehicles; Training; Torque; Data models; Cloning; Q-learning; Perturbation methods; Optimization; Energy management strategy; experience replay; hybrid electric vehicles; offline deep reinforcement learning;
DOI
10.1109/TIE.2024.3511032
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Although deep reinforcement learning (DRL) techniques have been extensively studied for developing energy management strategies (EMSs) for hybrid electric vehicles (HEVs), the trial-and-error characteristic of conventional DRL requires interactive data collection to obtain a qualified strategy, which is infeasible in the real world due to the unacceptable cost of exploratory actions. Offline DRL-based EMSs promise better practicality because they can improve using previously collected data. However, the performance of existing offline DRL solutions readily degrades when the quality of the training data varies. To overcome this problem, this article proposes a novel offline experience replay method that improves adaptiveness and robustness to imperfect data and thereby enhances the energy-saving performance of the associated EMSs. Specifically, the probability that a given piece of data is used for training is automatically tuned according to its contribution to the DRL value function. A range of prevailing offline DRL algorithms is used to validate the proposed experience replay method. Validation results show that the method broadly improves the fuel economy of the EMSs produced by all involved offline DRL algorithms. A hardware-in-the-loop test is also conducted to confirm the method's reliability. The proposed method is promising for improving the practicability of offline DRL in EMSs for HEVs and broader fields.
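The abstract's core mechanism, tuning how likely each stored transition is to be sampled based on its estimated contribution to the value function, can be illustrated by a generic prioritized-replay sketch. This is not the paper's implementation; the class, the `alpha` exponent, and the use of an absolute TD-error-like quantity as the "contribution" signal are all illustrative assumptions.

```python
import random

class PrioritizedOfflineReplay:
    """Minimal sketch of an offline replay buffer whose sampling
    probabilities are shaped by each transition's estimated contribution
    (e.g. an |TD error| proxy). All names here are illustrative."""

    def __init__(self, transitions, alpha=0.6, eps=1e-6):
        self.transitions = list(transitions)
        self.alpha = alpha  # how strongly contribution shapes sampling
        self.eps = eps      # floor so every transition stays reachable
        self.priorities = [1.0] * len(self.transitions)

    def sample(self, batch_size):
        # P(i) proportional to (priority_i + eps) ** alpha
        weights = [(p + self.eps) ** self.alpha for p in self.priorities]
        idx = random.choices(range(len(self.transitions)),
                             weights=weights, k=batch_size)
        return idx, [self.transitions[i] for i in idx]

    def update(self, indices, contributions):
        # After a gradient step, refresh priorities with the new
        # value-function contribution estimates for the sampled batch.
        for i, c in zip(indices, contributions):
            self.priorities[i] = abs(c)
```

In an offline setting the buffer is filled once from the logged dataset; only the priorities change during training, which is how such a scheme can down-weight low-quality transitions without collecting new data.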
Pages: 7160-7169
Page count: 10