An analysis of explainability methods for convolutional neural networks

Cited by: 46
Authors
Vonder Haar, Lynn [1 ]
Elvira, Timothy [1 ]
Ochoa, Omar [1 ]
Affiliations
[1] Embry Riddle Aeronaut Univ, Dept Elect Engn & Comp Sci, 1 Aerosp Blvd, Daytona Beach, FL 32114 USA
Keywords
Explainability; Black box model; Convolutional neural network; Image recognition; High-risk fields; Safety-critical fields
DOI
10.1016/j.engappai.2022.105606
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Deep learning models have gained a reputation for high accuracy in many domains. Convolutional Neural Networks (CNNs) are specialized for image recognition and achieve high accuracy in classifying objects within images. However, CNNs are an example of a black box model, meaning that even experts cannot see how they work internally to reach a classification decision. Without knowing the reasoning behind a decision, there is little assurance that a CNN will continue to make accurate decisions, so it is unsafe to use CNNs in high-risk or safety-critical fields without first developing methods to explain their decisions. This paper is a survey and analysis of the available explainability methods for showing the reasoning behind CNN decisions.
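As a concrete illustration of the kind of explainability method the paper surveys (this sketch is not taken from the paper itself), the code below computes a vanilla gradient saliency map: the gradient of the predicted class score with respect to the input pixels, which highlights the pixels that most influence the decision. It assumes PyTorch and torchvision (0.13 or newer for the weights API); the pretrained model and image file are illustrative placeholders.

```python
# Minimal sketch of a vanilla gradient saliency map for a pretrained CNN.
# Assumptions: PyTorch + torchvision >= 0.13; "example.jpg" is a hypothetical input image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")   # hypothetical input image
x = preprocess(image).unsqueeze(0)                 # shape: (1, 3, 224, 224)
x.requires_grad_(True)

scores = model(x)                                  # class logits, shape (1, 1000)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()                    # d(top class score) / d(input pixels)

# Saliency map: largest absolute gradient across the three color channels,
# giving one importance value per pixel.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # shape: (224, 224)
print(predicted, saliency.shape)
```

Visualizing this map as a heat map over the input image shows which regions drove the classification, which is the sense in which such methods "show the reasoning behind CNN decisions."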
Pages: 22