Interpretability research of deep learning: A literature survey

Cited: 10
Authors
Xu, Biao [1 ]
Yang, Guanci [1 ]
Institutions
[1] Guizhou Univ, Key Lab Adv Mfg Technol, Minist Educ, Guiyang 550025, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Interpretability; Active explanations; Passive explanations; Explainable artificial intelligence; NEURAL-NETWORKS; EXPLANATIONS; SENSITIVITY; SYSTEMS; PREDICTION; FRAMEWORK; ACCURACY; MODELS;
DOI
10.1016/j.inffus.2024.102721
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning (DL) has been widely used in various fields. However, its black-box nature limits people's understanding of, and trust in, its decision-making process. Research on DL interpretability, which elucidates a model's decision-making processes and behaviors, has therefore become crucial. This review provides an overview of the current status of interpretability research. First, typical DL models, their principles, and their applications are introduced. Then, the definition and significance of interpretability are clarified. Subsequently, typical interpretability algorithms are introduced in four groups: active, passive, supplementary, and integrated explanations. After that, several evaluation indicators for interpretability are briefly described, and the relationship between interpretability and model performance is explored. Next, specific applications of interpretability methods and models in real-world scenarios are introduced. Finally, the challenges and future directions of interpretability research are discussed.
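The "passive" group the abstract mentions covers post-hoc, model-agnostic explanations applied to an already-trained model; permutation feature importance is a standard example of this family. Below is a minimal, self-contained sketch of the idea — the function names and the toy model are illustrative assumptions, not taken from the survey: a feature's importance is estimated as the drop in accuracy after shuffling that feature's column, which breaks its association with the target.

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model predicts correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = mean accuracy drop after shuffling column j."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # destroy the feature-target association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy black box: uses only feature 0; feature 1 is a constant it ignores.
model = lambda x: int(x[0] > 0.5)
X = [[i / 10, 0.3] for i in range(10)]
y = [int(row[0] > 0.5) for row in X]
imps = permutation_importance(model, X, y)
```

Because the toy model ignores feature 1, shuffling that column leaves accuracy unchanged and its importance is zero, while shuffling feature 0 degrades accuracy and yields a positive score — the explanation recovers which input the black box actually relies on.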
Pages: 46