Explainable Deep Learning Models in Medical Image Analysis

Cited by: 399
Authors
Singh, Amitojdeep [1 ,2 ]
Sengupta, Sourya [1 ,2 ]
Lakshminarayanan, Vasudevan [1 ,2 ]
Affiliations
[1] Univ Waterloo, Sch Optometry & Vis Sci, Theoret & Expt Epistemol Lab, Waterloo, ON N2L 3G1, Canada
[2] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON N2L 3G1, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
explainability; explainable AI; XAI; deep learning; medical imaging; diagnosis; CLASSIFICATION;
DOI
10.3390/jimaging6060052
CLC Number
TB8 [Photographic technology];
Discipline Code
0804;
Abstract
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that influence a model's decision the most. The majority of literature reviews in this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning to different medical imaging tasks is presented here. The various approaches, the challenges for clinical deployment, and the areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.
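The attribution methods the abstract alludes to can be illustrated with a minimal sketch of gradient-based saliency ("vanilla gradients"): the gradient of the class score with respect to each input pixel indicates which pixels influence the decision most. This is a toy illustration only, not the paper's method — the logistic-regression "model", the synthetic 4x4 image, and the function names below are all hypothetical stand-ins; real studies compute the same gradient through a deep network via backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency_map(weights, image):
    """Gradient of the class score w.r.t. each input pixel.

    For score s = sigmoid(w . x), ds/dx_i = s * (1 - s) * w_i.
    Pixels with large |gradient| influence the decision the most.
    """
    s = sigmoid(weights @ image.ravel())
    grad = s * (1.0 - s) * weights       # chain rule through the sigmoid
    return np.abs(grad).reshape(image.shape)

# Hypothetical 4x4 "scan"; the toy model only weights the top-left 2x2 patch,
# so the saliency map should highlight exactly that region.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
weights = np.zeros(16)
weights[[0, 1, 4, 5]] = 1.0

sal = saliency_map(weights, image)       # nonzero only on the top-left patch
```

In a deep network the closed-form gradient above is replaced by one backward pass from the class logit to the input tensor; more elaborate variants covered in the XAI literature (e.g. layer-wise relevance propagation) refine how that signal is redistributed across layers.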
Pages: 19