Quantifying Explainability of Saliency Methods in Deep Neural Networks With a Synthetic Dataset

Cited by: 7
Authors
Tjoa E. [1 ]
Guan C. [2 ]
Affiliations
[1] Nanyang Technological University, Alibaba HealthTech Division, Singapore
[2] School of Computer Science and Engineering, Nanyang Technological University, Singapore
Source
IEEE Transactions on Artificial Intelligence | 2023, Vol. 4, Issue 4
Keywords
Blackbox; computer vision; deep neural network (DNN); explainable artificial intelligence (XAI)
DOI
10.1109/TAI.2022.3228834
Abstract
Post-hoc analysis is a popular category of eXplainable artificial intelligence (XAI) research. In particular, methods that generate heatmaps have been used to explain the deep neural network (DNN), a black-box model. Heatmaps are appealing because they are intuitive and visual, but assessing their quality is not straightforward, and the different ways of assessing heatmap quality each have their own merits and shortcomings. This article introduces a synthetic dataset that can be generated ad hoc, along with ground-truth heatmaps, for more objective quantitative assessment. Each data sample is an image of a cell with easily recognizable features that are delineated by a localization ground-truth mask, facilitating a more transparent assessment of different XAI methods. Comparisons and recommendations are made, and shortcomings are clarified along with suggestions for future research directions on handling the finer details of selected post-hoc analysis methods. Furthermore, mabCAM is introduced as a heatmap generation method compatible with our ground-truth heatmaps. The framework is easily generalizable and uses only standard deep learning components. © 2020 IEEE.
Pages: 858-870 (12 pages)
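
The abstract above describes a synthetic dataset of cell images generated ad hoc together with localization ground-truth masks, so that heatmaps produced by saliency methods can be assessed quantitatively rather than by visual inspection alone. The sketch below is a hypothetical illustration of that idea, not the authors' dataset generator or the mabCAM method: the functions make_cell_sample and heatmap_iou and the IoU-based score are assumptions chosen only to show how a ground-truth mask turns heatmap assessment into a measurable comparison.

# Hypothetical sketch (not the authors' released code or mabCAM): a toy "cell"
# image with a localization ground-truth mask, and an IoU score for a heatmap.
import numpy as np


def make_cell_sample(size=64, radius=10, seed=None):
    """Return (image, mask): a noisy image containing one bright disc (the "cell")
    and a binary ground-truth mask marking the discriminative feature."""
    rng = np.random.default_rng(seed)
    image = rng.normal(0.0, 0.1, (size, size))            # background noise
    cy, cx = rng.integers(radius, size - radius, size=2)   # random cell centre
    yy, xx = np.ogrid[:size, :size]
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    image[mask] += 1.0                                      # the recognizable feature
    return image.astype(np.float32), mask.astype(np.uint8)


def heatmap_iou(heatmap, mask, threshold=0.5):
    """Intersection-over-union between the thresholded heatmap and the ground truth."""
    fired = heatmap >= threshold * heatmap.max()
    truth = mask.astype(bool)
    union = np.logical_or(fired, truth).sum()
    return float(np.logical_and(fired, truth).sum() / union) if union else 0.0


if __name__ == "__main__":
    image, mask = make_cell_sample(seed=0)
    # Stand-in "saliency map": the min-max normalised image itself. In the paper,
    # this would instead come from a post-hoc XAI method applied to a trained DNN.
    heatmap = (image - image.min()) / (image.max() - image.min())
    print(f"IoU between heatmap and ground-truth mask: {heatmap_iou(heatmap, mask):.3f}")

In the paper itself the heatmap under evaluation would be produced by a saliency method for a trained DNN; a normalised copy of the input stands in for it here only so the sketch stays self-contained.
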
Related Papers
50 items
  • [1] A Comparison of Saliency Methods for Deep Learning Explainability
    Konate, Salamata
    Lebrat, Leo
    Santa Cruz, Rodrigo
    Smith, Elliot
    Bradley, Andrew
    Fookes, Clinton
    Salvado, Olivier
    2021 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA 2021), 2021, : 454 - 461
  • [2] IMPLICIT SALIENCY IN DEEP NEURAL NETWORKS
    Sun, Yutong
    Prabhushankar, Mohit
    AlRegib, Ghassan
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2915 - 2919
  • [3] Explainability Methods for Graph Convolutional Neural Networks
    Pope, Phillip E.
    Kolouri, Soheil
    Rostami, Mohammad
    Martin, Charles E.
    Hoffmann, Heiko
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10764 - 10773
  • [4] An analysis of explainability methods for convolutional neural networks
    Vonder Haar, Lynn
    Elvira, Timothy
    Ochoa, Omar
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 117
  • [5] Robust Explainability: A tutorial on gradient-based attribution methods for deep neural networks
    Nielsen, Ian E.
    Dera, Dimah
    Rasool, Ghulam
    Ramachandran, Ravi P.
    Bouaynaya, Nidhal Carla
    IEEE SIGNAL PROCESSING MAGAZINE, 2022, 39 (04) : 73 - 84
  • [6] VERIX: Towards Verified Explainability of Deep Neural Networks
    Wu, Min
    Wu, Haoze
    Barrett, Clark
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] Explainability of deep neural networks for MRI analysis of brain tumors
    Zeineldin, Ramy A.
    Karar, Mohamed E.
    Elshaer, Ziad
    Coburger, Jan
    Wirtz, Christian R.
    Burgert, Oliver
    Mathis-Ullrich, Franziska
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2022, 17 (09) : 1673 - 1683
  • [9] Some Shades of Grey! - Interpretability and Explainability of Deep Neural Networks
    Dengel, Andreas
    PROCEEDINGS OF THE ACM WORKSHOP ON CROSSMODAL LEARNING AND APPLICATION (WCRML'19), 2019, : 1 - 1
  • [10] Quantifying safety risks of deep neural networks
    Xu, Peipei
    Ruan, Wenjie
    Huang, Xiaowei
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (04) : 3801 - 3818