Exploring Decision Shifts in Autonomous Driving With Attribution-Guided Visualization

Cited by: 0
Authors
Shi, Rui [1 ,2 ]
Li, Tianxing [2 ]
Yamaguchi, Yasushi [3 ]
Zhang, Liguo [1 ,2 ]
Affiliations
[1] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China
[2] Beijing Univ Technol, Coll Comp Sci, Beijing 100124, Peoples R China
[3] Univ Tokyo, Dept Gen Syst Studies, Tokyo 1538902, Japan
Funding
Beijing Natural Science Foundation; Japan Society for the Promotion of Science; National Natural Science Foundation of China;
Keywords
Visualization; Autonomous vehicles; Decision making; Optimization; Computational modeling; Generators; Data visualization; Object recognition; Image coding; Generative adversarial networks; Autonomous driving; visualization explanation; decision attribution; generative adversarial networks;
DOI
10.1109/TITS.2024.3513400
Chinese Library Classification
TU [Building Science];
Discipline Code
0813 ;
Abstract
Given the critical need for more reliable autonomous driving systems, explainability has become a key focus within the research community. In autonomous driving models, even minor perception differences can significantly influence the decision-making process, and this impact often diverges markedly from human cognition. However, understanding the specific reasons why a model decides to stop or continue forward remains a significant challenge. This paper presents an attribution-guided visualization method aimed at exploring the triggers behind decision shifts, providing clear insights into the underlying "why" and "why not" of such decisions. We propose a cumulative layer fusion attribution method that identifies the parameters most critical to decision-making. These attributions then inform the visualization optimization: attribution-guided weights are applied to the crucial generation parameters, ensuring that decision changes are driven only by modifications to critical information. Furthermore, we develop an indirect regularization method that improves visualization quality without requiring additional hyperparameters. Experiments on large datasets demonstrate that our method produces insightful visualization explanations and outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
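The core idea of attribution-guided optimization can be illustrated with a toy sketch. The snippet below is not the paper's cumulative layer fusion method; it is a minimal NumPy analogue under simplifying assumptions: a linear-sigmoid "decision model" stands in for the driving network, gradient-times-input stands in for the attribution step, and a binary mask over the top-k attributed inputs stands in for the attribution-guided weights, so that the decision shift is driven only by changes to the most critical features. All function names here are hypothetical.

```python
import numpy as np

def decision_score(x, w):
    # Toy "decision" model: sigmoid of a linear score over input features.
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def attribution(x, w):
    # Gradient-times-input attribution; exact for this linear-sigmoid model.
    s = decision_score(x, w)
    grad = s * (1.0 - s) * w          # d(score)/dx for sigmoid(w . x)
    return np.abs(grad * x)

def attribution_guided_shift(x, w, k=2, steps=200, lr=0.5):
    # Only the k most-attributed features may change, mimicking
    # attribution-guided weighting of critical generation parameters.
    attr = attribution(x, w)
    mask = np.zeros_like(x)
    mask[np.argsort(attr)[-k:]] = 1.0
    x_new = x.copy()
    for _ in range(steps):
        s = decision_score(x_new, w)
        grad = s * (1.0 - s) * w
        x_new -= lr * grad * mask     # push the score toward the opposite decision
    return x_new, mask

w = np.array([2.0, -1.0, 0.5, 0.1])
x = np.array([1.0, 0.2, 0.3, 0.9])
x_shift, mask = attribution_guided_shift(x, w)
print(decision_score(x, w) > 0.5, decision_score(x_shift, w) < 0.5)
```

The masked update is the key design point: features with low attribution are frozen, so any flip in the decision is explained entirely by edits to the few inputs the attribution step flagged as critical, mirroring the paper's goal of a "why / why not" counterfactual visualization.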
Pages: 4165-4177
Page count: 13