A review of explainable and interpretable AI with applications in COVID-19 imaging

Cited: 72
Authors
Fuhrman, Jordan D. [1 ,2 ]
Gorre, Naveena [1 ,3 ]
Hu, Qiyuan [1 ,2 ]
Li, Hui [1 ,2 ]
El Naqa, Issam [1 ,3 ]
Giger, Maryellen L. [1 ,2 ]
Affiliations
[1] Univ Chicago, Med Imaging & Data Resource Ctr MIDRC, Chicago, IL 60637 USA
[2] Univ Chicago, Dept Radiol, Mailcode 2026,5841 S Maryland Ave, Chicago, IL 60637 USA
[3] H Lee Moffitt Canc Ctr & Res Inst, Dept Machine Learning, Tampa, FL USA
Keywords
AI; COVID-19; deep learning; explainability; interpretability; COMPUTER-AIDED DIAGNOSIS; ARTIFICIAL-INTELLIGENCE; BLACK-BOX; MACHINE; FEATURES;
DOI
10.1002/mp.15359
CLC classification
R8 [Special medicine]; R445 [Diagnostic imaging];
Subject classification codes
1002 ; 100207 ; 1009 ;
Abstract
The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to employ explainability techniques that help users understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how they can restore trust in AI applications to this disease. These include the identification of common tasks relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and assist in the selection of an approach that is both appropriate and effective for a given scenario.
Pages: 1-14 (14 pages)