Generalizing Adversarial Explanations with Grad-CAM

Cited by: 8
Authors
Chakraborty, Tanmay [1 ]
Trehan, Utkarsh [1 ]
Mallat, Khawla [1 ,2 ]
Dugelay, Jean-Luc [1 ]
Affiliations
[1] EURECOM, Campus SophiaTech,450 Route Chappes, F-06410 Biot, France
[2] SAP Secur Res Labs France, Biot, France
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022
DOI
10.1109/CVPRW56347.2022.00031
Chinese Library Classification: TP301 [Theory and Methods]
Discipline Classification Code: 081202
Abstract
Gradient-weighted Class Activation Mapping (Grad-CAM) is an example-based explanation method that provides a gradient activation heat map as an explanation for Convolutional Neural Network (CNN) models. The drawback of this method is that it cannot be used to generalize CNN behaviour. In this paper, we present a novel method that extends Grad-CAM from example-based explanations to a method for explaining global model behaviour. This is achieved by introducing two new metrics, (i) Mean Observed Dissimilarity (MOD) and (ii) Variation in Dissimilarity (VID), for model generalization. These metrics are computed by comparing a Normalized Inverted Structural Similarity Index (NISSIM) metric of the Grad-CAM generated heatmaps for samples from the original test set and samples from the adversarial test set. For our experiments, we study adversarial attacks using the Fast Gradient Sign Method (FGSM) on deep models such as VGG16, ResNet50, and ResNet101, and on wide models such as InceptionNetv3 and XceptionNet. We then compute the metrics MOD and VID for the automatic face recognition (AFR) use case with the VGGFace2 dataset. Across all models under adversarial attack, we observe a consistent shift in the region highlighted by the Grad-CAM heatmap, reflecting that region's contribution to the decision. The proposed method can be used to understand adversarial attacks and explain the behaviour of black-box CNN models for image analysis.
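The metric pipeline described in the abstract can be sketched in a few lines: compute NISSIM between paired clean and adversarial Grad-CAM heatmaps, then take the mean (MOD) and spread (VID) of those dissimilarities. The abstract does not specify the exact SSIM windowing or the VID statistic, so the sketch below assumes a global single-window SSIM and uses variance for VID; both are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ssim(a, b, c1=1e-4, c2=9e-4):
    # Global (single-window) SSIM between two heatmaps scaled to [0, 1].
    # An assumption: the paper may use a windowed SSIM instead.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )

def nissim(a, b):
    # Normalized Inverted SSIM: maps SSIM from [-1, 1] to a
    # dissimilarity in [0, 1], where 0 means identical heatmaps.
    return (1.0 - ssim(a, b)) / 2.0

def mod_vid(clean_maps, adv_maps):
    # MOD: mean NISSIM over paired clean/adversarial Grad-CAM heatmaps.
    # VID: spread of those dissimilarities (variance here, an assumption).
    d = np.array([nissim(c, a) for c, a in zip(clean_maps, adv_maps)])
    return d.mean(), d.var()
```

With identical clean and adversarial heatmaps, NISSIM is 0 for every pair, so MOD and VID are both 0; a large MOD would indicate that the attack consistently displaces the regions the model attends to.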
Pages: 186-192 (7 pages)