SVCE: Shapley Value Guided Counterfactual Explanation for Machine Learning-Based Autonomous Driving

Cited by: 3
Authors
Li, Meng [1 ]
Sun, Hengyang [2 ]
Chen, Hong [1 ]
Huang, Yanjun [2 ]
Affiliations
[1] Tongji Univ, Dept Control Sci & Engn, Shanghai 200092, Peoples R China
[2] Tongji Univ, Clean Energy Automot Engn Ctr, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Autonomous vehicles; Predictive models; Data models; Manuals; Optimization; Computational modeling; Analytical models; Explainable artificial intelligence; Shapley value; machine learning; counterfactual explanation; autonomous driving
DOI
10.1109/TITS.2024.3393634
CLC Number
TU [Building Science]
Discipline Code
0813
Abstract
The explainability of complex machine-learning models is becoming increasingly significant in safety-critical domains such as autonomous driving. In this context, counterfactual explanation (CE), an effective method in explainable artificial intelligence, plays an important role. It aims to identify minimal alterations to an input that change the model's output, thereby revealing the key factors influencing the model's decisions. However, generating counterfactual samples often involves manually selecting input features, potentially leading to suboptimal and biased explanations. This study introduces a feature-contribution-guided CE generation framework to address this issue. Our method uses feature contributions based on Shapley values to direct the model's focus to the most influential features. This enables end-users to quickly pinpoint the search direction when generating CEs (e.g., prioritizing the most critical features) and to produce representative CEs. To comprehensively evaluate our method, we conducted experimental validation on two representative machine-learning models: autonomous driving decision-making using a Deep Q-Network and lane-changing prediction using deep learning. In addition, we conducted a user-centered study to evaluate the practical applicability of the proposed Shapley Value guided Counterfactual Explanation (SVCE) framework in autonomous driving scenarios, which serves as a crucial validation of the presented method. The results show that SVCE can help users understand and diagnose the model.
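The abstract's core idea — estimate per-feature Shapley contributions, then perturb the most influential features first when searching for a counterfactual — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy linear "driving decision" model, the Monte-Carlo Shapley approximation, and the greedy perturbation toward a baseline are all simplifying assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a driving decision model: output 1 ("act")
# when a weighted sum of input features exceeds a threshold.
WEIGHTS = np.array([3.0, 0.5, 0.1, 2.0])

def predict(x):
    return int(x @ WEIGHTS > 2.0)

def shapley_values(x, baseline, n_perm=200):
    """Monte-Carlo Shapley attribution of predict(x) against a baseline input."""
    n = len(x)
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        z = baseline.astype(float).copy()
        prev = predict(z)
        for i in order:
            z[i] = x[i]             # add feature i to the coalition
            cur = predict(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return phi / n_perm

def guided_counterfactual(x, baseline, step=0.1, max_iter=200):
    """Perturb features in order of |Shapley value| until the label flips."""
    phi = shapley_values(x, baseline)
    order = np.argsort(-np.abs(phi))   # most influential features first
    target = 1 - predict(x)
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        for i in order:
            # nudge the prioritized feature toward the baseline (small change)
            cf[i] += step * np.sign(baseline[i] - cf[i])
            if predict(cf) == target:
                return cf
    return None
```

Because the Shapley ranking concentrates the search on features that actually move the model's output, the returned counterfactual differs from the original input mainly in those features, which is the kind of representative, low-effort explanation the abstract describes.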
Pages: 14905-14916 (12 pages)