The role of explainability and transparency in fostering trust in AI healthcare systems: a systematic literature review, open issues and potential solutions

Cited by: 0
Authors
Christopher Ifeanyi Eke [1 ]
Liyana Shuib [2 ]
Affiliations
[1] Department of Information Systems, FCSIT, University of Malaya, Kuala Lumpur
[2] Department of Computer Science, Faculty of Computing, Federal University of Lafia, Lafia, Nasarawa State
Keywords
Artificial intelligence; Explainability; Healthcare systems; Machine learning; Transparency; Trust
DOI
10.1007/s00521-024-10868-x
Abstract
The healthcare sector has advanced significantly as a result of the ability of artificial intelligence (AI) to solve cognitive problems that once required human intelligence. As AI finds more applications in healthcare, its trustworthiness must be guaranteed. Although AI has the potential to improve healthcare, it has yet to be widely adopted, largely because of persistent challenges around transparency. The opacity of AI models, which causes them to function as black boxes, raises concerns about understanding their internal workings, possible biases, robustness, and generalizability. Explainable AI offers a response to these concerns: it seeks to improve the explainability and analytical capabilities of AI systems, particularly in critical sectors such as healthcare. Although earlier research has examined several explainable-AI topics, including terminology, industry-specific overviews, and healthcare applications, a thorough analysis focusing on the role of explainable AI in building trust in AI healthcare systems is still required. To close this gap, this paper reports a systematic literature review, conducted in accordance with PRISMA guidelines, of relevant papers published between 2015 and 2023. To determine the critical role that explainable AI plays in fostering trust, the study examines widely used methodologies, machine learning and deep learning techniques, datasets, performance measures, and validation procedures in AI healthcare research. Open research issues and potential research directions are also discussed. The review thus provides a thorough summary of the current state of research on explainability and transparency in AI healthcare systems and highlights the key factors that affect user trust. The results are intended to help researchers, policymakers, and healthcare professionals develop more transparent, responsible, and reliable AI systems in the healthcare sector. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
Pages: 1999–2034
Number of pages: 35