Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review

Cited by: 25
Authors
de Vries, Bart M. [1 ]
Zwezerijnen, Gerben J. C. [1 ]
Burchell, George L. [2 ]
van Velden, Floris H. P. [3 ]
Menke-van der Houven van Oordt, Catharina Willemien [4]
Boellaard, Ronald [1 ]
Affiliations
[1] Vrije Univ Amsterdam, Canc Ctr Amsterdam, Dept Radiol & Nucl Med, Amsterdam UMC, Amsterdam, Netherlands
[2] Vrije Univ Amsterdam, Med Lib, Amsterdam, Netherlands
[3] Leiden Univ, Dept Radiol, Med Ctr, Leiden, Netherlands
[4] Vrije Univ Amsterdam, Canc Ctr Amsterdam, Dept Oncol, Amsterdam UMC, Amsterdam, Netherlands
Keywords
deep learning; explainable artificial intelligence; magnetic resonance (MR) imaging; computed tomography (CT) imaging; positron emission tomography (PET) imaging; NEURAL-NETWORKS; DIAGNOSIS; CLASSIFICATION; SYSTEM; MRI;
DOI
10.3389/fmed.2023.1180773
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201;
Abstract
Rationale: Deep learning (DL) has demonstrated remarkable performance in diagnostic imaging across various diseases and modalities and therefore has high potential as a clinical tool. However, these algorithms are rarely deployed in clinical practice because their underlying black-box mechanism leaves them lacking transparency and trust. For successful adoption, explainable artificial intelligence (XAI) could be introduced to close the gap between medical professionals and DL algorithms. In this literature review, XAI methods available for magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) imaging are discussed and suggestions for future work are made.
Methods: PubMed and the Clarivate Analytics/Web of Science Core Collection were screened. Articles were considered eligible for inclusion if XAI was used (and well described) to describe the behavior of a DL model used in MR, CT, or PET imaging.
Results: A total of 75 articles were included, of which 54 described post hoc XAI methods, 17 described ad hoc XAI methods, and 4 described both. Major variations in performance are seen between the methods. Overall, post hoc XAI lacks the ability to provide class-discriminative and target-specific explanations. Ad hoc XAI appears to address this because of its intrinsic ability to explain. However, quality control of the XAI methods is rarely applied, which makes systematic comparison between the methods difficult.
Conclusion: There is currently no clear consensus on how XAI should be deployed to close the gap between medical professionals and DL algorithms for clinical implementation. We advocate systematic technical and clinical quality assessment of XAI methods. In addition, to ensure end-to-end unbiased and safe integration of XAI into the clinical workflow, (anatomical) data minimization and quality control methods should be included.
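To make the post hoc/ad hoc distinction in the abstract concrete, the sketch below shows one representative post hoc technique (vanilla gradient saliency) applied to an image classifier in PyTorch. It is a minimal illustration only: the TinyCNN model, the random array standing in for a normalized MR/CT/PET slice, and the function names are assumptions for this example, not models, data, or code from the reviewed studies.

# Minimal sketch of a post hoc XAI method (vanilla gradient saliency),
# assuming PyTorch is available. All names below are illustrative placeholders.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in 2D classifier; a real model would be trained on MR/CT/PET data."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),          # pool to an 8x8 spatial grid
        )
        self.classifier = nn.Linear(8 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def gradient_saliency(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Post hoc explanation: gradient of the target-class score w.r.t. the input.

    Large absolute gradients mark pixels whose perturbation would change the
    class score most, which is the basic idea behind saliency maps.
    """
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    score = model(image)[0, target_class]     # scalar class score
    score.backward()                          # populates image.grad
    return image.grad.abs().squeeze()         # (H, W) saliency map

if __name__ == "__main__":
    model = TinyCNN()
    fake_slice = torch.rand(1, 1, 64, 64)     # placeholder for a normalized image slice
    saliency = gradient_saliency(model, fake_slice, target_class=1)
    print(saliency.shape, float(saliency.max()))

Because the map is simply the raw gradient of one class score computed after training, this kind of explanation hints at the limitation noted in the abstract: gradients for different classes can highlight overlapping regions, so post hoc saliency is not guaranteed to be class-discriminative or target-specific, whereas ad hoc (intrinsically interpretable) models build the explanation into the architecture itself.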
Pages: 14