Explaining deep neural networks: A survey on the global interpretation methods

Cited by: 54
Authors:
Saleem, Rabia [1]
Yuan, Bo [2]
Kurugollu, Fatih [1,3]
Anjum, Ashiq [2]
Liu, Lu [2]
Affiliations:
[1] Univ Derby, Sch Comp & Engn, Kedleston Rd, Derby DE22 1GB, England
[2] Univ Leicester, Sch Comp & Math Sci, Univ Rd, Leicester LE1 7RH, England
[3] Univ Sharjah, Dept Comp Sci, Sharjah, U Arab Emirates
Keywords:
Artificial intelligence; Deep neural networks; Black box models; Explainable artificial intelligence; Global interpretation; BLACK-BOX; CLASSIFIERS; RULES; MODEL
DOI:
10.1016/j.neucom.2022.09.129
Chinese Library Classification (CLC):
TP18 [Theory of artificial intelligence]
Discipline classification codes:
081104; 0812; 0835; 1405
Abstract:
A substantial amount of research has been carried out in Explainable Artificial Intelligence (XAI) models, especially in those which explain the deep architectures of neural networks. A number of XAI approaches have been proposed to achieve trust in Artificial Intelligence (AI) models as well as to provide explainability of specific decisions made within these models. Among these approaches, global interpretation methods have emerged as the prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey attempts to provide a comprehensive review of global interpretation methods that completely explain the behaviour of the AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess the challenges when these methods are put into practice. We conclude the paper by providing future directions of research on how the existing challenges in global interpretation methods could be addressed and what values and opportunities could be realized by the resolution of these challenges. © 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Pages: 165-180
Number of pages: 16
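
To make the abstract's central distinction concrete, the sketch below (not taken from the surveyed paper) illustrates one common global interpretation technique, a global surrogate: an interpretable decision tree is fitted to the predictions of a black-box classifier so that its rules approximate the model's overall behaviour rather than a single prediction. The dataset, model choices, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch of a global surrogate explanation (illustrative, not from the paper).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# "Black-box" model whose global behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: train an interpretable model on the black box's outputs,
# not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate's rules mimic the black-box predictions.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# The extracted rules act as a global, human-readable explanation of the model.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score indicates how faithfully the surrogate's rules mimic the black box over the whole input space; when fidelity is low, the global explanation should not be trusted.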
Related papers (50 records in total):
  • [1] Explaining the black-box model: A survey of local interpretation methods for deep neural networks
    Liang, Yu
    Li, Siguang
    Yan, Chungang
    Li, Maozhen
    Jiang, Changjun
    NEUROCOMPUTING, 2021, 419 : 168 - 182
  • [2] Perturbation-based methods for explaining deep neural networks: A survey
    Ivanovs, Maksims
    Kadikis, Roberts
    Ozols, Kaspars
    PATTERN RECOGNITION LETTERS, 2021, 150 : 228 - 234
  • [3] Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
    Samek, Wojciech
    Montavon, Gregoire
    Lapuschkin, Sebastian
    Anders, Christopher J.
    Mueller, Klaus-Robert
    PROCEEDINGS OF THE IEEE, 2021, 109 (03) : 247 - 278
  • [4] A Survey of Sparse-learning Methods for Deep Neural Networks
    Ma, Rongrong
    Niu, Lingfeng
    2018 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE (WI 2018), 2018, : 647 - 650
  • [5] Interpretability of deep neural networks: A review of methods, classification and hardware
    Antamis, Thanasis
    Drosou, Anastasis
    Vafeiadis, Thanasis
    Nizamis, Alexandros
    Ioannidis, Dimosthenis
    Tzovaras, Dimitrios
    NEUROCOMPUTING, 2024, 601
  • [6] Explaining Deep Neural Networks in medical imaging context
    Rguibi, Zakaria
    Hajami, AbdelMajid
    Dya, Zitouni
    2021 IEEE/ACS 18TH INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND APPLICATIONS (AICCSA), 2021,
  • [7] Explaining Probabilistic Artificial Intelligence (AI) Models by Discretizing Deep Neural Networks
    Saleem, Rabia
    Yuan, Bo
    Kurugollu, Fatih
    Anjum, Ashiq
    2020 IEEE/ACM 13TH INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING (UCC 2020), 2020, : 446 - 448
  • [8] A Survey on Fuzzy Deep Neural Networks
    Das, Rangan
    Sen, Sagnik
    Maulik, Ujjwal
    ACM COMPUTING SURVEYS, 2020, 53 (03)
  • [9] A survey of uncertainty in deep neural networks
    Gawlikowski, Jakob
    Tassi, Cedrique Rovile Njieutcheu
    Ali, Mohsin
    Lee, Jongseok
    Humt, Matthias
    Feng, Jianxiang
    Kruspe, Anna
    Triebel, Rudolph
    Jung, Peter
    Roscher, Ribana
    Shahzad, Muhammad
    Yang, Wen
    Bamler, Richard
    Zhu, Xiao Xiang
    ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (SUPPL 1) : 1513 - 1589
  • [10] Deep Neural Networks on Chip - A Survey
    Huo Yingge
    Ali, Imran
    Lee, Kang-Yoon
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP 2020), 2020, : 589 - 592