Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

Cited: 407
Authors
Yang, Guang [1 ,2 ,3 ]
Ye, Qinghao [4 ,5 ]
Xia, Jun [6 ]
Affiliations
[1] Imperial Coll London, Natl Heart & Lung Inst, London, England
[2] Royal Brompton Hosp, London, England
[3] Imperial Inst Adv Technol, Hangzhou, Peoples R China
[4] Hangzhou Oceans Smart Boya Co Ltd, Hangzhou, Peoples R China
[5] Univ Calif San Diego, La Jolla, CA 92093 USA
[6] Shenzhen Second Peoples Hosp, Radiol Dept, Shenzhen, Peoples R China
Funding
UK Research and Innovation; European Research Council; EU Horizon 2020;
Keywords
Explainable AI; Information fusion; Multi-domain information fusion; Weakly supervised learning; Medical image analysis; ARTIFICIAL-INTELLIGENCE; DIAGNOSTIC ERRORS; CARE; SYSTEM; FUTURE; PERFORMANCE; PREDICTION; PROGNOSIS; COVID-19;
DOI
10.1016/j.inffus.2021.07.016
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning that aims to unbox how AI systems' black-box decisions are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been reached; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, the lack of explainability in these black-box models can hinder our confidence in AI systems. Although deep neural networks can in general return an arresting dividend in performance, XAI is becoming increasingly crucial for deep learning powered applications, especially in medical and healthcare studies. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI and, in particular, its advances in healthcare applications. We then introduced our XAI solutions leveraging multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
Pages: 29-52
Page count: 24