Adaptive policy learning for data-driven powertrain control with eco-driving

Cited by: 4
Authors
Kerbel, Lindsey [1]
Ayalew, Beshah [1]
Ivanco, Andrej [2]
Affiliations
[1] Clemson University, 4 Research Dr, Greenville, SC 29607 USA
[2] Allison Transmission, One Allison Way, Indianapolis, IN 46222 USA
Keywords
Powertrain control; Reinforcement learning; Eco-driving; Driver-assist; Energy management
DOI
10.1016/j.engappai.2023.106489
CLC Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Modern powertrain control design practices rely on model-based approaches accompanied by costly calibrations to meet ever more stringent energy-use and emissions targets specified for standardized drive cycles. These practices struggle to capture the complexity and uncertainty of real-world driving. However, the deluge of operational data now available through connected vehicle technology presents opportunities for data-driven control design methods that adapt powertrain control systems to their field use conditions. While reinforcement learning (RL) is the most attractive of these methods, it is rarely directly applicable in physical applications because of its challenges with learning efficiency (sample complexity) and safety guarantees. In this paper, we propose and evaluate an adaptive policy learning (APL) framework that leverages existing source policies shipped with vehicles to accelerate initial learning while recovering the asymptotic performance of an RL-based powertrain control agent trained from scratch. We present a critique of related residual policy learning approaches and detail our algorithmic implementations for two versions of the proposed framework. We find that the APL powertrain control agents offer on the order of 10% fuel economy improvement over the default powertrain controller of a commercial vehicle without compromising driver accommodation metrics. We demonstrate that the APL frameworks offer a viable approach toward applying RL in real-world scenarios by addressing its learning efficiency issues.
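The core idea summarized above, starting from a source policy shipped with the vehicle and learning a correction on top of it, follows the residual policy learning pattern the paper critiques and builds on. Below is a minimal Python sketch of that pattern; the `ResidualPolicyAgent` class, the linear residual form, and the update rule are illustrative assumptions, not the paper's actual APL implementations.

```python
import numpy as np

# Hypothetical stand-in for a shipped powertrain controller: maps a state
# vector to a scalar control action (e.g., a torque request).
def source_policy(state: np.ndarray) -> float:
    return 0.5 * float(state[0])

class ResidualPolicyAgent:
    """Sketch of residual-style adaptive policy learning: the agent outputs
    the source policy's action plus a learned linear residual, so early
    behavior stays close to the shipped controller."""

    def __init__(self, state_dim: int, lr: float = 1e-3):
        self.w = np.zeros(state_dim)  # residual parameters, zero at start
        self.lr = lr

    def act(self, state: np.ndarray) -> float:
        # Final action = source action + learned correction.
        return source_policy(state) + float(self.w @ state)

    def update(self, state: np.ndarray, advantage: float) -> None:
        # Crude policy-gradient-flavored step on the residual term only.
        self.w += self.lr * advantage * state

# Usage: before any updates the agent mimics the source controller exactly,
# which is what makes initial learning safe and fast relative to learning
# from scratch.
agent = ResidualPolicyAgent(state_dim=4)
s = np.array([1.0, 0.2, -0.3, 0.05])
print(agent.act(s))            # equals source_policy(s) at initialization
agent.update(s, advantage=0.1)
print(agent.act(s))            # now shifted by the learned residual
```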
Pages: 11