Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

Cited by: 25
Authors
Palatnik de Sousa, Iam [1 ]
Vellasco, Marley M. B. R. [1 ]
Costa da Silva, Eduardo [1 ]
Affiliations
[1] Pontifical Catholic University of Rio de Janeiro, Department of Electrical Engineering, BR-22453900 Rio de Janeiro, Brazil
Keywords
computer vision; computerized tomography; COVID-19; explainable AI; image classification; medical imaging
DOI
10.3390/s21165657
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline codes
070302 ; 081704 ;
Abstract
Problem: An application of Explainable Artificial Intelligence methods to COVID CT-scan classifiers is presented. Motivation: Classifiers may be exploiting spurious artifacts in dataset images to achieve high performance, and explainability techniques can help identify this issue. Aim: To this end, several approaches were used in tandem to build a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: Results further show that small differences in validation accuracy can cause drastic changes in explanation heatmaps for DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80-90% range) could give users the erroneous impression that there is no bias; however, analysis of the explanation heatmaps reveals it.
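Among the techniques the abstract lists, RISE (Randomized Input Sampling for Explanation) is the most self-contained to illustrate: it perturbs the input with random occlusion masks and averages the masks weighted by the classifier's score on each masked image, so regions whose occlusion changes the score stand out. The sketch below is a minimal NumPy illustration of that idea with a toy scoring function, not the authors' implementation (the original method uses bilinear upsampling with random shifts, and a real use case would pass a CNN's class probability as `predict`):

```python
import numpy as np

def rise_saliency(image, predict, n_masks=1000, grid=7, p=0.5, seed=0):
    """Monte-Carlo RISE sketch: average random occlusion masks,
    each weighted by the classifier score on the masked image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ch, cw = -(-h // grid), -(-w // grid)  # ceil division: upsample factor
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Coarse binary grid, upsampled to image size (nearest neighbour
        # here; the original paper uses shifted bilinear upsampling).
        coarse = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(coarse, np.ones((ch, cw)))[:h, :w]
        saliency += predict(image * mask) * mask
    return saliency / n_masks

# Toy classifier: "confidence" is the mean brightness of the top-left
# quadrant, so the saliency map should highlight that region.
def toy_predict(img):
    return float(img[:16, :16].mean())

img = np.zeros((32, 32))
img[:16, :16] = 1.0
heatmap = rise_saliency(img, toy_predict)
```

With the toy classifier, pixels in the top-left quadrant receive higher average weight than the rest, which is the behaviour the paper relies on to spot classifiers attending to spurious artifacts instead of lung tissue.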
Pages: 14