Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer

Cited by: 1
Authors
Wu, Junyi [1 ]
Duan, Bin [1 ]
Kang, Weitai [1 ]
Tang, Hao [2 ]
Yan, Yan [1 ]
Affiliations
[1] IIT, Dept Comp Sci, Chicago, IL 60616 USA
[2] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2024
DOI
10.1109/CVPR52733.2024.01039
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
While Transformers have rapidly gained popularity in various computer vision applications, post-hoc explanations of their internal mechanisms remain largely unexplored. Vision Transformers extract visual information by representing image regions as transformed tokens and integrating them via attention weights. However, existing post-hoc explanation methods merely consider these attention weights, neglecting crucial information from the transformed tokens, which fails to accurately illustrate the rationales behind the models' predictions. To incorporate the influence of token transformation into interpretation, we propose TokenTM, a novel post-hoc explanation method that utilizes our introduced measurement of token transformation effects. Specifically, we quantify token transformation effects by measuring changes in token lengths and correlations in their directions pre- and post-transformation. Moreover, we develop initialization and aggregation rules to integrate both attention weights and token transformation effects across all layers, capturing holistic token contributions throughout the model. Experimental results on segmentation and perturbation tests demonstrate the superiority of our proposed TokenTM compared to state-of-the-art Vision Transformer explanation methods.
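The abstract states that TokenTM quantifies token transformation effects via changes in token lengths and correlations in token directions before and after a transformation. The paper's exact formulation is not reproduced in this record; the following is only an illustrative sketch of those two measurements, using a hypothetical `transformation_effect` helper on plain Python vectors (length change as a norm ratio, directional correlation as cosine similarity).

```python
import math

def transformation_effect(pre, post):
    """Illustrative sketch: measure how a transformation changed a token.

    pre, post: the same token's embedding vector before and after
    a transformation, given as equal-length sequences of floats.
    Returns (length_change, direction_correlation).
    """
    # Change in token length: ratio of post- to pre-transformation norms.
    len_pre = math.sqrt(sum(x * x for x in pre))
    len_post = math.sqrt(sum(x * x for x in post))
    length_change = len_post / len_pre if len_pre else 0.0

    # Directional correlation: cosine similarity of the two vectors.
    dot = sum(a * b for a, b in zip(pre, post))
    direction_correlation = (
        dot / (len_pre * len_post) if len_pre and len_post else 0.0
    )
    return length_change, direction_correlation
```

For example, a token scaled from `[1.0, 0.0]` to `[2.0, 0.0]` doubles in length while keeping its direction (correlation 1.0). The actual method further aggregates such per-layer effects with attention weights, which this sketch does not attempt.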
Pages: 10926-10935
Page count: 10