Explainable reinforcement learning for powertrain control engineering

Cited: 1
Authors
Laflamme, C. [1 ]
Doppler, J. [2 ]
Palvolgyi, B. [3 ]
Dominka, S. [2 ]
Viharos, Zs. J. [3 ,4 ]
Haeussler, S. [5 ]
Affiliations
[1] Fraunhofer Austria Res GmbH, Vienna, Austria
[2] Robert Bosch AG, Bosch Engn, Vienna, Austria
[3] HUN-REN Inst Comp Sci & Control (SZTAKI), Ctr Excellence Hungarian Acad Sci (MTA), Budapest, Hungary
[4] John von Neumann Univ, Fac Econ & Business, Kecskemet, Hungary
[5] Univ Innsbruck, Dept Informat Syst Prod & Logist Management, Innsbruck, Austria
Funding
European Union Horizon 2020;
Keywords
Reinforcement learning; Explainable artificial intelligence; Powertrain control; HYBRID ELECTRIC VEHICLE; ENERGY MANAGEMENT; RULES;
DOI
10.1016/j.engappai.2025.110135
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In this paper, we demonstrate a practical post-hoc approach to explainable reinforcement learning (RL) for vehicle powertrain control. The goal is to exploit the advantages of RL while obtaining a solution that is feasible to implement in safety-critical control engineering problems. This means finding a solution that balances optimal product design against the required engineering effort, while maintaining the transparency necessary for safety-critical applications. Our method first trains a neural-network-based RL policy and then converts it into a look-up table, using a decision tree (DT) as an intermediary. The DT is limited to a certain depth, resulting in a look-up table of manageable size that control engineers can directly test, implement, and evaluate. To evaluate this approach, a set of RL expert policies was used to train DTs of increasing depth, showing the regions where the DT solution can outperform benchmarks while still remaining small enough to translate into a manageable look-up table. Our approach relies only on standard Python libraries, lowering the barrier to implementation. It is relevant not just to powertrain control, but offers a practical route for all regulated domains that could benefit from the application of RL.
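The pipeline described in the abstract (train an expert policy, distill it into a depth-limited decision tree, flatten the tree into a look-up table) can be sketched with standard Python libraries. This is a minimal illustration, not the paper's implementation: the `expert_policy` function, the two state features, and the grid resolution are hypothetical stand-ins for a trained neural-network RL policy and a real powertrain state space.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def expert_policy(state):
    # Stand-in for a trained neural-network RL policy: maps a state
    # vector to a discrete action. Here: a toy rule on two features.
    speed, accel = state
    return int(speed + 2.0 * accel > 1.0)

# Sample states and label them with the expert's actions.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 2.0, size=(5000, 2))
actions = np.array([expert_policy(s) for s in states])

# Depth limit keeps the distilled policy small: a tree of depth d
# has at most 2**d leaves, bounding the look-up table size.
tree = DecisionTreeClassifier(max_depth=4).fit(states, actions)

# Flatten the tree into a grid-based look-up table over the state space.
speed_bins = np.linspace(0.0, 2.0, 21)
accel_bins = np.linspace(0.0, 2.0, 21)
grid = np.array([[tree.predict([[v, a]])[0] for a in accel_bins]
                 for v in speed_bins])
print(grid.shape)  # table indexed by the discretized state
```

At runtime, the controller would only index `grid` with the discretized state, so the deployed artifact is a plain table that engineers can inspect and test exhaustively, with no neural network in the loop.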
Pages: 12
Cited References
79 records in total
[11]  
Chen IM, 2019, IEEE INT C INTELL TR, P2620, DOI 10.1109/ITSC.2019.8917076
[12]  
Coppens Y., 2019, P IJCAI 2019 WORKSH, P1
[13]   Evolving interpretable decision trees for reinforcement learning [J].
Costa, Vinicius G. ;
Perez-Aracil, Jorge ;
Salcedo-Sanz, Sancho ;
Pedreira, Carlos E. .
ARTIFICIAL INTELLIGENCE, 2024, 327
[14]   Enhanced Oblique Decision Tree Enabled Policy Extraction for Deep Reinforcement Learning in Power System Emergency Control [J].
Dai, Yuxin ;
Chen, Qimei ;
Zhang, Jun ;
Wang, Xiaohui ;
Chen, Yilin ;
Gao, Tianlu ;
Xu, Peidong ;
Chen, Siyuan ;
Liao, Siyang ;
Jiang, Huaiguang ;
Gao, David Wen-zhong .
ELECTRIC POWER SYSTEMS RESEARCH, 2022, 209
[15]   Learning to Drive (Efficiently) [J].
Dominka, Sven ;
Doppler, Joerg ;
Smith, Henrik ;
Litschauer, Teresa ;
Laflamme, Catherine .
2024 IEEE INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY, EIT 2024, 2024, :111-116
[16]   Independent driving pattern factors and their influence on fuel-use and exhaust emission factors [J].
Ericsson, E .
TRANSPORTATION RESEARCH PART D-TRANSPORT AND ENVIRONMENT, 2001, 6 (05) :325-345
[17]   Deep learning in the development of energy management strategies of hybrid electric vehicles: A hybrid modeling approach [J].
Estrada, Pedro Maroto ;
de Lima, Daniela ;
Bauer, Peter H. ;
Mammetti, Marco ;
Bruno, Joan Carles .
APPLIED ENERGY, 2023, 329
[18]   Fuel consumption and CO2 emissions from passenger cars in Europe: Laboratory versus real-world emissions [J].
Fontaras, Georgios ;
Zacharof, Nikiforos-Georgios ;
Ciuffo, Biagio .
PROGRESS IN ENERGY AND COMBUSTION SCIENCE, 2017, 60 :97-131
[19]  
Frosst N, 2017, Arxiv, DOI arXiv:1711.09784
[20]   Systematic hyperparameter selection in Machine Learning-based engine control to minimize calibration effort [J].
Garg, Prasoon ;
Silvas, Emilia ;
Willems, Frank .
CONTROL ENGINEERING PRACTICE, 2023, 140