Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review

Cited by: 299
Authors
Antoniadi, Anna Markella [1 ,2 ]
Du, Yuhan [1 ]
Guendouz, Yasmine [3 ,4 ]
Wei, Lan [1 ]
Mazo, Claudia [1 ,5 ]
Becker, Brett A. [1 ]
Mooney, Catherine [1 ,2 ]
Affiliations
[1] Univ Coll Dublin, UCD Sch Comp Sci, Dublin 4, Ireland
[2] Royal Coll Surgeons Ireland, FutureNeuro SFI Res Ctr, Dublin 2, Ireland
[3] Trinity Coll Dublin, Trinity Ctr Biomed Engn, Dublin 2, Ireland
[4] Trinity Coll Dublin, Dept Mech Mfg & Biomed Engn, Sch Engn, Dublin 2, Ireland
[5] CeADAR Irelands Ctr Appl AI, Dublin 4, Ireland
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 11
Funding
Science Foundation Ireland
Keywords
artificial intelligence; explainable AI; XAI; clinical decision support systems; CDSS; medicine; machine learning; deep learning; explainability; transparency; interpretability; EXPLAINABLE ARTIFICIAL-INTELLIGENCE; BLACK-BOX; EXPLANATIONS; INTERPRETABILITY; VISUALIZATION; MEDICINE; ONTOLOGY; DISEASE; QUALITY; EXAMPLE;
D O I
10.3390/app11115088
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Machine Learning, and Artificial Intelligence (AI) more broadly, have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI systems has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output, so that the output can be interpreted within a given context. One area in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making, and in the absence of explainability may lead to under- or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases life-saving, decisions. The need for XAI in CDSSs, and in the medical field in general, is amplified by the need for ethical and fair decision-making and by the fact that AI trained on historical data can reinforce historical actions and biases that should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSSs. XAI-enabled systems that process tabular data are the most common, while XAI-enabled CDSSs for text analysis are the least common in the literature. Developers showed greater interest in providing local explanations, while post-hoc and ante-hoc explanations were almost evenly balanced, as were model-specific and model-agnostic techniques. Studies reported benefits of XAI such as enhancing clinicians' confidence in decisions and generating hypotheses about causality, which ultimately increase the trustworthiness and acceptability of a system and its potential for incorporation into the clinical workflow.
However, we found an overall distinct lack of application of XAI in the context of CDSSs and, in particular, a lack of user studies exploring the needs of clinicians. We propose guidelines for the implementation of XAI in CDSSs and explore opportunities, challenges, and future research needs.
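To make the distinctions the review draws concrete: a *local* explanation describes a single prediction, and a *model-agnostic, post-hoc* technique derives it purely from the model's input-output behaviour. The following is a minimal sketch, not any specific XAI library; the linear "risk score" model and its weights are hypothetical stand-ins used only to illustrate perturbation-based feature sensitivity around one input.

```python
def model(features):
    """Hypothetical clinical risk score: a fixed linear combination of inputs."""
    weights = [0.8, -0.3, 0.1]
    return sum(w * x for w, x in zip(weights, features))

def local_explanation(predict, x, eps=1e-4):
    """Model-agnostic, post-hoc local explanation: perturb each feature of a
    single input and measure how the prediction shifts (finite differences).
    Only the predict() callable is used, never the model internals."""
    base = predict(x)
    sensitivities = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        sensitivities.append((predict(perturbed) - base) / eps)
    return sensitivities

# Explain one (hypothetical) patient's prediction: each value tells the
# clinician how strongly that feature drives the score at this input.
patient = [1.0, 2.0, 3.0]
print(local_explanation(model, patient))
```

For a linear model the recovered sensitivities simply equal the weights; the point of the sketch is that the same procedure applies unchanged to any black-box predictor, which is what "model-agnostic" means in the review's taxonomy.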
Pages: 23