Explaining deep neural networks: A survey on the global interpretation methods

Cited by: 54
Authors
Saleem, Rabia [1]
Yuan, Bo [2]
Kurugollu, Fatih [1,3]
Anjum, Ashiq [2]
Liu, Lu [2]
Affiliations
[1] Univ Derby, Sch Comp & Engn, Kedleston Rd, Derby DE22 1GB, England
[2] Univ Leicester, Sch Comp & Math Sci, Univ Rd, Leicester LE1 7RH, England
[3] Univ Sharjah, Dept Comp Sci, Sharjah, U Arab Emirates
Keywords
Artificial intelligence; Deep neural networks; Black box models; Explainable artificial intelligence; Global interpretation; BLACK-BOX; CLASSIFIERS; RULES; MODEL
DOI
10.1016/j.neucom.2022.09.129
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A substantial amount of research has been carried out on Explainable Artificial Intelligence (XAI) models, especially those that explain the deep architectures of neural networks. A number of XAI approaches have been proposed to establish trust in Artificial Intelligence (AI) models as well as to explain specific decisions made within these models. Among these approaches, global interpretation methods have emerged as prominent methods of explainability because they have the strength to explain every feature and the structure of the model. This survey provides a comprehensive review of global interpretation methods that completely explain the behaviour of AI models. We present a taxonomy of the available global interpretation models and systematically highlight the critical features and algorithms that differentiate them from local as well as hybrid models of explainability. Through examples and case studies from the literature, we evaluate the strengths and weaknesses of the global interpretation models and assess the challenges that arise when these methods are put into practice. We conclude the paper by outlining future research directions on how the existing challenges in global interpretation methods could be addressed and what value and opportunities their resolution could bring. © 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
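To make the notion of a global interpretation concrete, the following is a minimal sketch (not taken from the surveyed paper) of one widely used global interpretation technique: a global surrogate, in which an interpretable model is fitted to a black-box model's predictions so that its learned structure approximates the black box's overall behaviour. The sketch assumes scikit-learn is available; the MLP classifier and the breast-cancer dataset are illustrative stand-ins, not choices made by the survey's authors.

from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative data and black-box model (stand-ins chosen for this sketch).
data = load_breast_cancer()
X, y = data.data, data.target
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
black_box.fit(X, y)

# Global surrogate: fit a shallow, interpretable tree to the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs on which the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")

# The tree's rules act as a global, human-readable explanation of the model.
print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score indicates how faithfully the surrogate's rules stand in for the black box; a low value would mean this global explanation should not be trusted.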
Pages: 165 - 180
Number of pages: 16
Related papers
50 records in total
  • [21] Short-Term Traffic Prediction With Deep Neural Networks: A Survey
    Lee, Kyungeun
    Eo, Moonjung
    Jung, Euna
    Yoon, Yoonjin
    Rhee, Wonjong
    IEEE ACCESS, 2021, 9 : 54739 - 54756
  • [22] UAV sensor data applications with deep neural networks: A comprehensive survey
    Dudukcu, Hatice Vildan
    Taskiran, Murat
    Kahraman, Nihan
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 123
  • [23] Explaining Deep Face Algorithms Through Visualization: A Survey
    John, Thrupthi Ann
    Balasubramanian, Vineeth N.
    Jawahar, C. V.
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2024, 6 (01): 15 - 29
  • [24] Video Summarization Using Deep Neural Networks: A Survey
    Apostolidis, Evlampios
    Adamantidou, Eleni
    Metsai, Alexandros I.
    Mezaris, Vasileios
    Patras, Ioannis
    PROCEEDINGS OF THE IEEE, 2021, 109 (11) : 1838 - 1863
  • [25] Survey on Deep Neural Networks in Speech and Vision Systems
    Alam, M.
    Samad, M. D.
    Vidyaratne, L.
    Glandon, A.
    Iftekharuddin, K. M.
    NEUROCOMPUTING, 2020, 417 : 302 - 321
  • [26] Efficient Processing of Deep Neural Networks: A Tutorial and Survey
    Sze, Vivienne
    Chen, Yu-Hsin
    Yang, Tien-Ju
    Emer, Joel S.
    PROCEEDINGS OF THE IEEE, 2017, 105 (12) : 2295 - 2329
  • [27] Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks
    Nazir, Sajid
    Dickson, Diane M.
    Akram, Muhammad Usman
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 156
  • [28] An Updated Survey of Efficient Hardware Architectures for Accelerating Deep Convolutional Neural Networks
    Capra, Maurizio
    Bussolino, Beatrice
    Marchisio, Alberto
    Shafique, Muhammad
    Masera, Guido
    Martina, Maurizio
    FUTURE INTERNET, 2020, 12 (07)
  • [29] Predicting and explaining nonlinear material response using deep physically guided neural networks with internal variables
    Ayensa-Jimenez, Jacobo
    Orera-Echeverria, Javier
    Doblare, Manuel
    MATHEMATICS AND MECHANICS OF SOLIDS, 2025, 30 (02) : 573 - 598
  • [30] Transparency of deep neural networks for medical image analysis: A review of interpretability methods
    Salahuddin, Zohaib
    Woodruff, Henry C.
    Chatterjee, Avishek
    Lambin, Philippe
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 140