An analysis of explainability methods for convolutional neural networks

Cited by: 46
Authors
Vonder Haar, Lynn [1]
Elvira, Timothy [1]
Ochoa, Omar [1]
Affiliations
[1] Embry Riddle Aeronaut Univ, Dept Elect Engn & Comp Sci, 1 Aerosp Blvd, Daytona Beach, FL 32114 USA
Keywords
Explainability; Black box model; Convolutional neural network; Image recognition; High-risk fields; Safety-critical fields
DOI
10.1016/j.engappai.2022.105606
CLC classification number
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Deep learning models have gained a reputation for high accuracy in many domains. Convolutional Neural Networks (CNNs) are specialized for image recognition and achieve high accuracy in classifying objects within images. However, CNNs are an example of a black box model, meaning that even experts cannot see how they work internally to reach a classification decision. Without knowing the reasoning behind a decision, there is little assurance that a CNN will continue to make accurate decisions, so it is unsafe to use CNNs in high-risk or safety-critical fields without first developing methods to explain their decisions. This paper is a survey and analysis of the available explainability methods for showing the reasoning behind CNN decisions.
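To make the idea of showing the reasoning behind a CNN decision concrete, the sketch below illustrates one common explainability technique, a gradient-based saliency map, in PyTorch. The paper itself provides no code; the model choice, dummy input, and all names here are illustrative assumptions, not the authors' method.

# Minimal sketch of a gradient-based saliency map for a CNN classifier.
# All choices below (ResNet-18, random weights, dummy input) are assumptions
# for illustration; in practice this would be applied to a trained model
# and a real preprocessed image.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # random weights keep the sketch self-contained
model.eval()

# Dummy 224x224 RGB image standing in for a preprocessed input photo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The saliency map takes the largest absolute gradient across color channels;
# high values mark pixels that most influence the classification decision.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])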
Pages: 22