Spatio-Temporal Perturbations for Video Attribution

Cited by: 5
Authors
Li, Zhenqiang [1 ]
Wang, Weimin [2 ]
Li, Zuoyue [3 ]
Huang, Yifei [1 ]
Sato, Yoichi [1 ]
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Tokyo 1538505, Japan
[2] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian 116024, Peoples R China
[3] Swiss Fed Inst Technol, Dept Comp Sci, CH-8092 Zurich, Switzerland
Funding
Japan Science and Technology Agency (JST); Japan Society for the Promotion of Science (JSPS);
Keywords
Measurement; Reliability; Task analysis; Spatiotemporal phenomena; Visualization; Heating systems; Perturbation methods; Video attribution; visual explanations; interpretable neural networks; spatio-temporal perturbations; video understanding; NEURAL-NETWORK; ATTENTION;
DOI
10.1109/TCSVT.2021.3081761
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Attribution methods offer a way to interpret opaque neural networks visually by identifying and visualizing the input regions/pixels that dominate a network's output. Attribution for visually explaining video understanding networks is challenging because of the unique spatiotemporal dependencies in video inputs and the special 3D convolutional or recurrent structures of these networks. However, most existing attribution methods focus on explaining networks that take a single image as input, and the few works devised specifically for video attribution fall short of handling the diverse structures of video understanding networks. In this paper, we investigate a generic perturbation-based attribution method that is compatible with diverse video understanding networks. In addition, we propose a novel regularization term that enhances the method by constraining the smoothness of its attribution results in both the spatial and temporal dimensions. To assess the effectiveness of different video attribution methods without relying on manual judgment, we introduce reliable objective metrics, which are validated by a newly proposed reliability measurement. We verify the effectiveness of our method through both subjective and objective evaluation and through comparison with multiple significant attribution methods.
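As a rough illustration of the two ingredients the abstract describes (a sketch only, not the authors' implementation; `perturb`, `spatiotemporal_tv`, `lambda_s`, and `lambda_t` are illustrative names), a perturbation-based video attribution method blends the input video with a baseline under a soft mask and scores how the network's output changes, while a smoothness regularizer keeps the mask coherent across space and time:

```python
import numpy as np

def perturb(video, mask, baseline):
    """Blend a video with a baseline (e.g. a blurred copy) under a soft mask.

    video, baseline, mask: arrays of shape (T, H, W); mask values lie in [0, 1].
    Regions where the mask is 1 keep the original content; regions where it
    is 0 are replaced by the baseline.
    """
    return mask * video + (1.0 - mask) * baseline

def spatiotemporal_tv(mask, lambda_s=1.0, lambda_t=1.0):
    """Total-variation-style smoothness penalty on an attribution mask.

    Sums absolute differences between neighboring pixels within each frame
    (spatial term) and between corresponding pixels of consecutive frames
    (temporal term), weighted by lambda_s and lambda_t respectively.
    """
    d_h = np.abs(np.diff(mask, axis=1)).sum()  # vertical neighbors
    d_w = np.abs(np.diff(mask, axis=2)).sum()  # horizontal neighbors
    d_t = np.abs(np.diff(mask, axis=0)).sum()  # consecutive frames
    return lambda_s * (d_h + d_w) + lambda_t * d_t
```

A spatially uniform, temporally constant mask incurs zero penalty, while a mask that flickers between frames is charged by the temporal term; the paper's actual regularizer and optimization objective may differ in form.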
Pages: 2043-2056
Page count: 14
Related papers
65 entries in total
[31] Kurakin A., 2017, International Conference on Learning Representations, p. 1
[32] Li, Yingwei; Li, Yi; Vasconcelos, Nuno. RESOUND: Towards Action Recognition Without Representation Bias. Computer Vision - ECCV 2018, Pt VI, 2018, 11210: 520-535
[33] Liu, Zhi; Zhang, Xiang; Luo, Shuhua; Le Meur, Olivier. Superpixel-Based Spatiotemporal Saliency Detection. IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24(9): 1522-1540
[34] Nguyen Anh Tu; Thien Huynh-The; Khan, Kifayat Ullah; Lee, Young-Koo. ML-HDP: A Hierarchical Bayesian Nonparametric Model for Recognizing Human Actions in Video. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(3): 800-814
[35] Nguyen, Tam V.; Song, Zheng; Yan, Shuicheng. STAP: Spatial-Temporal Attention-Aware Pooling for Action Recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2015, 25(1): 77-86
[36] Petsiuk V., 2018, Proceedings of the British Machine Vision Conference, p. 151
[37] Qi Z., 2019, CVPR Workshops, vol. 2, p. 1, DOI: 10.48550/ARXIV.1905.00954
[38] Rebuffi, Sylvestre-Alvise; Fong, Ruth; Ji, Xu; Vedaldi, Andrea. There and Back Again: Revisiting Backpropagation Saliency Methods. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 2020: 8836-8845
[39] Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos. "Why Should I Trust You?" Explaining the Predictions of Any Classifier. KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016: 1135-1144
[40] Rieger L., 2020, arXiv:2003.08747