The role of explainability and transparency in fostering trust in AI healthcare systems: a systematic literature review, open issues and potential solutions

Cited by: 0
Authors
Christopher Ifeanyi Eke [1]
Liyana Shuib [2]
Affiliations
[1] Department of Computer Science, Faculty of Computing, Federal University of Lafia, Lafia, Nasarawa State
[2] Department of Information Systems, FCSIT, University of Malaya, Kuala Lumpur
Keywords
Artificial intelligence; Explainability; Healthcare systems; Machine learning; Transparency; Trust
DOI
10.1007/s00521-024-10868-x
Abstract
The healthcare sector has advanced significantly because artificial intelligence (AI) can now solve cognitive problems that once required human intelligence. As AI finds more applications in healthcare, its trustworthiness must be ensured. Although AI has the potential to improve healthcare, it has not yet been widely adopted, largely because of persistent challenges around transparency. The opacity of AI models, which makes them behave like black boxes, raises concerns about understanding their internal workings, as well as about possible biases, model robustness and generalizability. Explainable AI offers a response to these transparency concerns: it seeks to improve the explainability and analytical capability of AI systems, particularly in critical domains such as healthcare. Although earlier research has examined several explainable AI topics, including terminology, industry-specific overviews and applications in healthcare, a thorough analysis of the role explainable AI plays in building trust in AI healthcare systems is still needed. To close this gap, this paper reports a systematic literature review, conducted in accordance with the PRISMA guidelines, of relevant papers published between 2015 and 2023. To establish the critical role that explainable AI plays in fostering trust, the study examines the methodologies, machine learning and deep learning techniques, datasets, performance measures and validation procedures commonly used in AI healthcare research. Open research issues and potential research directions are also discussed. The review thus provides a comprehensive summary of the current state of research on explainability and transparency in AI healthcare systems and highlights the crucial factors that affect user trust. The results are intended to help researchers, policymakers and healthcare professionals develop more transparent, responsible and reliable AI systems in the healthcare sector. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
Pages: 1999-2034
Page count: 35