MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations

Cited by: 19
Authors
Yang, Qing [1 ]
Zhu, Xia [2 ]
Fwu, Jong-Kae [2 ]
Ye, Yun [1 ]
You, Ganmei [1 ]
Zhu, Yuan [1 ]
Affiliations
[1] Intel Corp, Shanghai, Peoples R China
[2] Intel Corp, Santa Clara, CA USA
Source
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) | 2021
DOI
10.1109/ICPR48806.2021.9413046
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks (DNNs) have recently been applied to many advanced and diverse tasks, such as medical diagnosis and automatic driving. Owing to the lack of transparency of deep models, DNNs are often criticized because their predictions cannot be explained by humans. In this paper, we propose a novel Morphological Fragmental Perturbation Pyramid (MFPP) method to address the explainable-AI problem. In particular, we focus on the black-box scheme, which can identify the input area responsible for the output of a DNN without requiring any knowledge of the DNN's internal architecture. In the MFPP method, we divide the input image into multi-scale fragments and randomly mask out fragments as perturbations to generate a saliency map, which indicates the significance of each pixel to the prediction of the black-box model. Compared with existing input-sampling perturbation methods, the pyramid structure of fragments proves more effective: it better exploits the morphological information of the input image to match its semantic information, and it does not need any value from inside the DNN. We demonstrate qualitatively and quantitatively that MFPP meets and exceeds the performance of state-of-the-art (SOTA) black-box interpretation methods on multiple DNN models and datasets.
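The perturbation scheme the abstract describes (mask random subsets of multi-scale fragments, query the black box, and credit visible pixels by the resulting score) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: it uses square grid cells at several scales in place of the paper's morphological (superpixel-based) fragments, and a RISE-style score-weighted mask average; the function name and parameters are assumptions for the example.

```python
import numpy as np

def mfpp_saliency(image, model, scales=(4, 8, 16), n_masks=200, p_keep=0.5, seed=0):
    """Return an (H, W) saliency map for a black-box `model`.

    `image` is an (H, W, C) float array; `model` maps a single masked
    image to a scalar class score. Fragments here are square grid cells
    at several scales -- a simplification of the paper's morphological
    fragments, kept to plain NumPy for clarity.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    total_score = 0.0
    for s in scales:  # one pyramid level per fragment scale
        # Cell size that tiles the image at this scale.
        ch, cw = int(np.ceil(H / s)), int(np.ceil(W / s))
        for _ in range(n_masks):
            # Keep each fragment independently with probability p_keep.
            grid = (rng.random((s, s)) < p_keep).astype(float)
            mask = np.kron(grid, np.ones((ch, cw)))[:H, :W]
            score = float(model(image * mask[..., None]))
            # Pixels that stayed visible share credit for the score.
            saliency += score * mask
            total_score += score
    return saliency / max(total_score, 1e-12)
```

Because only forward calls to `model` are made, the sketch matches the black-box constraint: no gradients or internal activations are needed, and any classifier callable can be plugged in.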
Pages: 1376-1383
Page count: 8
References
30 in total
[1]   SLIC Superpixels Compared to State-of-the-Art Superpixel Methods [J].
Achanta, Radhakrishna ;
Shaji, Appu ;
Smith, Kevin ;
Lucchi, Aurelien ;
Fua, Pascal ;
Suesstrunk, Sabine .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (11) :2274-2281
[2]   Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks [J].
Chattopadhay, Aditya ;
Sarkar, Anirban ;
Howlader, Prantik ;
Balasubramanian, Vineeth N. .
2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, :839-847
[3]
Dabkowski P, 2017, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, V30
[4]   Techniques for Interpretable Machine Learning [J].
Du, Mengnan ;
Liu, Ninghao ;
Hu, Xia .
COMMUNICATIONS OF THE ACM, 2020, 63 (01) :68-77
[5]   The Pascal Visual Object Classes (VOC) Challenge [J].
Everingham, Mark ;
Van Gool, Luc ;
Williams, Christopher K. I. ;
Winn, John ;
Zisserman, Andrew .
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2010, 88 (02) :303-338
[6]   Understanding Deep Networks via Extremal Perturbations and Smooth Masks [J].
Fong, Ruth ;
Patrick, Mandela ;
Vedaldi, Andrea .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :2950-2958
[7]   Interpretable Explanations of Black Boxes by Meaningful Perturbation [J].
Fong, Ruth C. ;
Vedaldi, Andrea .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :3449-3457
[8]
Gan A, 2019, Goldfish sample
[9]   A Survey of Methods for Explaining Black Box Models [J].
Guidotti, Riccardo ;
Monreale, Anna ;
Ruggieri, Salvatore ;
Turini, Franco ;
Giannotti, Fosca ;
Pedreschi, Dino .
ACM COMPUTING SURVEYS, 2019, 51 (05)
[10]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778