Explaining deep neural networks: A survey on the global interpretation methods

Cited by: 54
Authors
Saleem, Rabia [1 ]
Yuan, Bo [2 ]
Kurugollu, Fatih [1 ,3 ]
Anjum, Ashiq [2 ]
Liu, Lu [2 ]
Affiliations
[1] Univ Derby, Sch Comp & Engn, Kedleston Rd, Derby DE22 1GB, England
[2] Univ Leicester, Sch Comp & Math Sci, Univ Rd, Leicester LE1 7RH, England
[3] Univ Sharjah, Dept Comp Sci, Sharjah, U Arab Emirates
Keywords
Artificial intelligence; Deep neural networks; Black box models; Explainable artificial intelligence; Global interpretation; BLACK-BOX; CLASSIFIERS; RULES; MODEL
DOI
10.1016/j.neucom.2022.09.129
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A substantial amount of research has been carried out on Explainable Artificial Intelligence (XAI) models, especially those which explain the deep architectures of neural networks. A number of XAI approaches have been proposed to achieve trust in Artificial Intelligence (AI) models as well as to provide explainability of specific decisions made within these models. Among these approaches, global interpretation methods have emerged as the prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey attempts to provide a comprehensive review of global interpretation methods that completely explain the behaviour of AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess the challenges that arise when these methods are put into practice. We conclude the paper by outlining future directions of research on how the existing challenges in global interpretation methods could be addressed, and what value and opportunities could be realized by resolving these challenges. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Pages: 165 - 180
Page count: 16
Related Papers
50 records in total (items 41-50 shown)
  • [41] Development of residual learning in deep neural networks for computer vision: A survey
    Xu, Guoping
    Wang, Xiaxia
    Wu, Xinglong
    Leng, Xuesong
    Xu, Yongchao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 142
  • [42] Deep Neural Networks for Global Horizontal Irradiation Forecasting: A Comparative Study
    Arbelaez-Duque, Cristian
    Duque-Ciro, Alejandro
    Villa-Acevedo, Walter
    Jaramillo-Duque, Alvaro
    SMART CITIES, ICSC-CITIES 2022, 2023, 1706 : 77 - 91
  • [43] Current and Future Patterns of Global Wildfire Based on Deep Neural Networks
    Zhang, Guoli
    Wang, Ming
    Yang, Baolin
    Liu, Kai
    EARTHS FUTURE, 2024, 12 (02)
  • [44] Deep Neural Networks for Refining Vertical Modeling of Global Tropospheric Delay
    Yuan, Peng
    Balidakis, Kyriakos
    Wang, Jungang
    Xia, Pengfei
    Wang, Jian
    Zhang, Mingyuan
    Jiang, Weiping
    Schuh, Harald
    Wickert, Jens
    Deng, Zhiguo
    GEOPHYSICAL RESEARCH LETTERS, 2025, 52 (02)
  • [45] Deep Neural Networks for Ultrasound Beamforming
    Luchies, Adam
    Byram, Brett
    2017 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS), 2017,
  • [46] Comparison of Regularization Methods for ImageNet Classification with Deep Convolutional Neural Networks
    Smirnov, Evgeny A.
    Timoshenko, Denis M.
    Andrianov, Serge N.
    2ND AASRI CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND BIOINFORMATICS, 2014, 6 : 89 - 94
  • [47] Navigating beyond backpropagation: on alternative training methods for deep neural networks
    Birjais, Roshan
    Wang, Kevin I-Kai
    Abdulla, Waleed
    KNOWLEDGE AND INFORMATION SYSTEMS, 2025,
  • [48] Quantifying Explainability of Saliency Methods in Deep Neural Networks With a Synthetic Dataset
    Tjoa E.
    Guan C.
    IEEE Transactions on Artificial Intelligence, 2023, 4 (04): : 858 - 870
  • [49] Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
    Capra, Maurizio
    Bussolino, Beatrice
    Marchisio, Alberto
    Masera, Guido
    Martina, Maurizio
    Shafique, Muhammad
    IEEE ACCESS, 2020, 8 : 225134 - 225180
  • [50] A Survey on Attacks and Their Countermeasures in Deep Learning: Applications in Deep Neural Networks, Federated, Transfer, and Deep Reinforcement Learning
    Ali, Haider
    Chen, Dian
    Harrington, Matthew
    Salazar, Nathaniel
    Al Ameedi, Mohannad
    Khan, Ahmad Faraz
    Butt, Ali R.
    Cho, Jin-Hee
    IEEE ACCESS, 2023, 11 : 120095 - 120130