Toward Explainable End-to-End Driving Models via Simplified Objectification Constraints

Cited: 0
Authors
Zhang, Chenkai [1 ]
Deguchi, Daisuke [1 ]
Chen, Jialei [1 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya 4648601, Japan
Funding
Japan Society for the Promotion of Science; Japan Science and Technology Agency;
Keywords
Predictive models; Object detection; Task analysis; Prototypes; Proposals; Feature extraction; Computational modeling; Explainability; autonomous vehicles; deep learning; convolutional neural networks; NEURAL-NETWORK; EXPLANATIONS;
DOI
10.1109/TITS.2024.3385754
CLC Number
TU [Building Science];
Discipline Code
0813 ;
Abstract
End-to-end driving models (E2EDMs) convert environmental information into driving actions through a complex transformation, which gives E2EDMs high prediction accuracy. However, because this transformation is a black box, E2EDMs have low explainability. To address this problem, explanation methods are used to generate explanations for observation. Building on current explanation methods, previous studies tried to further improve the explainability of E2EDMs by integrating an object detection module; however, these approaches have two problems. First, because they require an object detection module, they lack flexibility. Second, they neglect an essential property for improving explainability: simplicity. In this paper, since humans prefer object-level and simple explanations in driving tasks, we argue that explainability is determined by two properties: the objectification degree (the extent to which driving-related object features are utilized) and the simplification degree (the simplicity of the explanation). We therefore propose Simplified Objectification Branches (SOB) to improve the explainability of E2EDMs. First, this structure can be integrated into any existing E2EDM and thus has high flexibility. Second, the SOB explicitly improves the simplification degree without sacrificing the objectification degree of the explanations. Using several indicators, i.e., heatmap satisfaction, driving action reproduction score, deception level, etc., we show that the SOB helps E2EDMs generate better explanations. Notably, the SOB also further enhances E2EDMs' prediction accuracy.
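The two properties named in the abstract can be illustrated with a toy sketch: the function below is a hypothetical stand-in for an objectification branch, not the paper's actual SOB architecture. It scores per-object feature vectors (objectification) and keeps only the top-k objects before pooling them into an action logit (a simplicity constraint); all names, shapes, and the linear scoring are illustrative assumptions.

```python
import numpy as np

def simplified_objectification_branch(object_features, weights, k=2):
    """Illustrative sketch only (not the paper's SOB implementation).

    object_features: (num_objects, feat_dim) array of per-object features,
        assumed to come from some upstream feature extractor.
    weights: (feat_dim,) linear scoring vector, assumed learned.
    k: number of objects retained -- the simplicity constraint.

    Returns a pooled action logit and a binary mask showing which
    objects were used, i.e. the object-level "explanation".
    """
    scores = object_features @ weights           # per-object relevance
    top_k = np.argsort(scores)[-k:]              # keep k most salient objects
    mask = np.zeros_like(scores)
    mask[top_k] = 1.0
    action_logit = float((scores * mask).sum())  # prediction from few objects
    return action_logit, mask
```

A call with three mock objects shows the intended behavior: only the k highest-scoring objects contribute to the prediction, so the explanation mask stays simple by construction.

```python
feats = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
w = np.array([1.0, 1.0])
logit, explanation = simplified_objectification_branch(feats, w, k=2)
# explanation marks the two objects actually used for the action
```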
Pages: 14521-14534
Page count: 14