Spatio-Temporal Perturbations for Video Attribution

Cited by: 5
Authors
Li, Zhenqiang [1 ]
Wang, Weimin [2 ]
Li, Zuoyue [3 ]
Huang, Yifei [1 ]
Sato, Yoichi [1 ]
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Tokyo 1538505, Japan
[2] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian 116024, Peoples R China
[3] Swiss Fed Inst Technol, Dept Comp Sci, CH-8092 Zurich, Switzerland
Funding
Japan Science and Technology Agency (JST); Japan Society for the Promotion of Science (JSPS)
Keywords
Measurement; Reliability; Task analysis; Spatiotemporal phenomena; Visualization; Heating systems; Perturbation methods; Video attribution; visual explanations; interpretable neural networks; spatio-temporal perturbations; video understanding; NEURAL-NETWORK; ATTENTION;
DOI
10.1109/TCSVT.2021.3081761
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Code
0808; 0809
Abstract
The attribution method offers a way to interpret opaque neural networks visually by identifying and visualizing the input regions/pixels that dominate the output of a network. Attributing the decisions of video understanding networks is challenging because of the unique spatiotemporal dependencies in video inputs and the special 3D convolutional or recurrent structures of these networks. However, most existing attribution methods focus on explaining networks that take a single image as input, and the few works devised specifically for video attribution fall short of handling the diverse structures of video understanding networks. In this paper, we investigate a generic perturbation-based attribution method that is compatible with diverse video understanding networks. In addition, we propose a novel regularization term that enhances the method by constraining the smoothness of its attribution results in both the spatial and temporal dimensions. To assess the effectiveness of different video attribution methods without relying on manual judgment, we introduce reliable objective metrics, which are validated by a newly proposed reliability measurement. We verify the effectiveness of our method through subjective and objective evaluation and through comparison with multiple significant attribution methods.
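The abstract describes the approach only at a high level: a perturbation mask over the input clip is optimized so that the retained regions explain the network's prediction, with a regularizer keeping the attribution smooth across space and time. The PyTorch sketch below illustrates that general idea under stated assumptions; it is not the authors' exact formulation, and the names model, video, the blurred baseline, the loss weights, and the assumed (batch, T, C, H, W) input layout are all illustrative.

import torch
import torch.nn.functional as F

def spatiotemporal_tv(mask):
    """Total-variation smoothness penalty over time (T), height (H), width (W).
    mask: tensor of shape (T, 1, H, W) with values in [0, 1]."""
    dt = (mask[1:] - mask[:-1]).abs().mean()                      # temporal smoothness
    dh = (mask[:, :, 1:, :] - mask[:, :, :-1, :]).abs().mean()    # vertical smoothness
    dw = (mask[:, :, :, 1:] - mask[:, :, :, :-1]).abs().mean()    # horizontal smoothness
    return dt + dh + dw

def attribute(model, video, target, steps=300, lam_area=1.0, lam_tv=0.5, lr=0.05):
    """Perturbation-based attribution sketch (preservation game).
    video: (T, C, H, W) clip; target: class index of interest.
    Assumes the model accepts a (1, T, C, H, W) tensor and returns class logits."""
    T, C, H, W = video.shape
    # Blurred clip used as the "uninformative" reference for masked-out regions.
    baseline = F.avg_pool2d(video, kernel_size=11, stride=1, padding=5)
    mask = torch.full((T, 1, H, W), 0.5, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = mask.clamp(0, 1)
        perturbed = m * video + (1 - m) * baseline                # keep only masked evidence
        score = model(perturbed.unsqueeze(0))[0, target]
        # Keep the target score high with as small and as smooth a mask as possible.
        loss = -score + lam_area * m.mean() + lam_tv * spatiotemporal_tv(m)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mask.detach().clamp(0, 1)                              # per-frame attribution maps

The spatiotemporal_tv term is one simple way to realize the spatial and temporal smoothness constraint mentioned in the abstract; the paper's actual regularizer and evaluation metrics may differ.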
Pages: 2043-2056
Page count: 14