Middle-Level Features for the Explanation of Classification Systems by Sparse Dictionary Methods

Cited by: 14
Authors
Apicella, A. [1 ]
Isgro, F. [1 ]
Prevete, R. [1 ]
Tamburrini, G. [1 ]
Affiliations
[1] Univ Napoli Federico II, Dipartimento Ingn Elettr & Tecnol Informaz, I-80125 Naples, Italy
Keywords
XAI and explainable artificial intelligence; machine learning; sparse coding; convolutional neural networks; damage detection; algorithms; prediction
DOI
10.1142/S0129065720500409
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine learning (ML) systems are affected by a pervasive lack of transparency. The eXplainable Artificial Intelligence (XAI) research area addresses this problem and the related issue of explaining the behavior of ML systems in terms that are understandable to human beings. In many XAI approaches, the outputs of ML systems are explained in terms of low-level features of their inputs. However, these approaches leave a substantive explanatory burden with human users, insofar as the latter are required to map low-level properties onto more salient and readily understandable parts of the input. To alleviate this cognitive burden, an alternative model-agnostic framework is proposed here. This framework is instantiated to address explanation problems in the context of ML image classification systems, without relying on pixel relevance maps and other low-level features of the input. More specifically, sets of middle-level properties of classification inputs that are perceptually salient are obtained by applying sparse dictionary learning techniques. These middle-level properties are then used as building blocks for explanations of image classifications. The resulting explanations are parsimonious, owing to their reliance on a limited set of middle-level image properties. They can also be contrastive, because the set of middle-level image properties can be used to explain why the system preferred the proposed classification over antagonist classifications. In view of its model-agnostic character, the proposed framework is adaptable to a variety of other ML systems and explanation problems.
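To make the core idea concrete, the following is a minimal sketch (not the authors' exact pipeline) of how sparse dictionary learning can yield middle-level explanation units: a dictionary of atoms is learned from images, and the non-zero sparse codes of a given input select the atoms that act as its perceptually salient building blocks. The use of scikit-learn's MiniBatchDictionaryLearning on the digits dataset, and the parameter choices, are illustrative assumptions.

```python
# Sketch: sparse codes as candidate "middle-level" explanation features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy data: 8x8 digit images flattened to 64-dimensional vectors.
X = load_digits().data.astype(float)
X -= X.mean(axis=0)  # center so atoms capture shape, not mean brightness

# Learn a small dictionary of image atoms; alpha controls code sparsity.
dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0, random_state=0)
dico.fit(X)

# Sparse code of one input: each non-zero entry selects a dictionary atom,
# i.e. an image fragment that can serve as a unit of the explanation.
codes = dico.transform(X[:1])
active = np.flatnonzero(codes[0])
print("active middle-level atoms:", active)
print("their weights:", codes[0][active])
```

Under this reading, a contrastive explanation would compare which atoms (and with what weights) support the assigned class versus an antagonist class; the sketch only shows the decomposition step.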
Pages: 17