Toward Explainable End-to-End Driving Models via Simplified Objectification Constraints

Cited: 0
Authors
Zhang, Chenkai [1]
Deguchi, Daisuke [1]
Chen, Jialei [1]
Murase, Hiroshi [1]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya 4648601, Japan
Funding
Japan Society for the Promotion of Science (JSPS); Japan Science and Technology Agency (JST)
Keywords
Predictive models; Object detection; Task analysis; Prototypes; Proposals; Feature extraction; Computational modeling; Explainability; autonomous vehicles; deep learning; convolutional neural networks; NEURAL-NETWORK; EXPLANATIONS;
DOI
10.1109/TITS.2024.3385754
CLC Number
TU [Architectural Science]
Subject Classification Code
0813
Abstract
End-to-end driving models (E2EDMs) convert environmental information into driving actions through a complex transformation, which gives them high prediction accuracy. Because this transformation is a black box, E2EDMs have low explainability. To address this problem, explanation methods are used to generate explanations for observation. Building on current explanation methods, previous studies tried to further improve the explainability of E2EDMs by integrating an object detection module; however, these methods have two problems. First, because they require an object detection module, they lack flexibility. Second, they neglect an essential property for improving explainability, namely simplicity. In this paper, since humans prefer object-level and simple explanations in driving tasks, we argue that explainability is determined by two properties: the objectification degree (the extent to which driving-related object features are utilized) and the simplification degree (the simplicity of the explanation). We therefore propose Simplified Objectification Branches (SOB) to improve the explainability of E2EDMs. First, the SOB can be integrated into any existing E2EDM and thus has high flexibility. Second, the SOB explicitly improves the simplification degree without sacrificing the objectification degree of the explanations. Using several indicators, i.e., heatmap satisfaction, driving action reproduction score, and deception level, we show that the SOB helps E2EDMs generate better explanations. Notably, the SOB can also further enhance the prediction accuracy of E2EDMs.
Pages: 14521-14534
Page count: 14
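
Illustrative sketch: the record above does not describe the SOB architecture in detail, so the following is a minimal PyTorch sketch of the idea the abstract names, a branch that pools backbone features into a handful of object-level slots (objectification) and keeps only the top-k most salient slots (simplification) before predicting a driving action. All module and parameter names here (SOBranch, num_objects, top_k) are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class SOBranch(nn.Module):
    """Hypothetical branch: pools backbone features into a few object-level
    slots and keeps only the top-k most salient ones, so that explanations
    refer to a small number of objects."""

    def __init__(self, feat_dim: int, num_objects: int = 8, top_k: int = 3):
        super().__init__()
        # One spatial attention map per object slot.
        self.attn = nn.Conv2d(feat_dim, num_objects, kernel_size=1)
        self.top_k = top_k

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) from any existing E2EDM backbone.
        maps = torch.softmax(self.attn(feats).flatten(2), dim=-1)     # (B, N, H*W)
        slots = torch.einsum("bnl,bcl->bnc", maps, feats.flatten(2))  # (B, N, C)
        salience = slots.norm(dim=-1)                                 # (B, N)
        idx = salience.topk(self.top_k, dim=-1).indices               # keep k slots
        kept = torch.gather(
            slots, 1, idx.unsqueeze(-1).expand(-1, -1, slots.size(-1))
        )
        # Pooled object feature for prediction + per-slot heatmaps for explanation.
        return kept.mean(dim=1), maps


# Usage: attach the branch to a toy backbone and predict a 2-D action.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
branch = SOBranch(feat_dim=64)
head = nn.Linear(64, 2)  # e.g., steering and throttle

x = torch.randn(1, 3, 128, 256)
pooled, heatmaps = branch(backbone(x))
action = head(pooled)
print(action.shape, heatmaps.shape)  # torch.Size([1, 2]) torch.Size([1, 8, 8192])

The design choice mirrors the abstract's two properties: the per-slot heatmaps give object-level attributions, while the top-k selection enforces simplicity by limiting how many objects an explanation can reference.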