A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion

Cited by: 194
Authors
Albahri, A. S. [1]
Duhaim, Ali M. [2]
Fadhel, Mohammed A. [3]
Alnoor, Alhamzah [4]
Baqer, Noor S. [5]
Alzubaidi, Laith [6,7]
Albahri, O. S. [8,9]
Alamoodi, A. H. [10]
Bai, Jinshuai [6,7]
Salhi, Asma
Santamaria, Jose
Ouyang, Chun
Gupta, Ashish [6,7]
Gu, Yuantong [6,7]
Deveci, Muhammet
Affiliations
[1] Iraqi Commiss Comp & Informat ICCI, Baghdad, Iraq
[2] Minist Educ, Nasiriyah, Iraq
[3] Univ Sumer, Coll Comp Sci & Informat Technol, Rifai, Iraq
[4] Southern Tech Univ, Basrah, Iraq
[5] Minist Educ, Baghdad, Iraq
[6] Queensland Univ Technol, Sch Mech Med & Proc Engn, Brisbane, Qld 4000, Australia
[7] Queensland Univ Technol, ARC Ind Transformat Training Ctr Joint Biomech, Brisbane, Qld 4000, Australia
[8] Mazaya Univ Coll, Comp Tech Engn Dept, Nasiriyah, Iraq
[9] La Trobe Univ, Dept Comp Sci & Informat Technol, Melbourne, Vic, Australia
[10] Univ Pendidikan Sultan Idris UPSI, Fac Comp & Meta Technol FKMT, Tanjung Malim, Perak, Malaysia
Funding
Australian Research Council
Keywords
Trustworthiness; Explainability; Artificial intelligence; Healthcare; Information fusion; MONITORING-SYSTEM; BLOCKCHAIN; FRAMEWORK; AI; PREDICTION; DIAGNOSIS; NETWORKS; MEDICINE; MODELS;
DOI
10.1016/j.inffus.2023.03.008
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In the last few years, the trend in healthcare of embracing artificial intelligence (AI) has dramatically changed the medical landscape. Medical centres have adopted AI applications to increase the accuracy of disease diagnosis and mitigate health risks. AI applications have changed rules and policies related to healthcare practice and work ethics. However, building trustworthy and explainable AI (XAI) for healthcare systems is still in its early stages. Specifically, the European Union has stated that AI must be human-centred and trustworthy, whereas in the healthcare sector, low methodological quality and high bias risk have become major concerns. This study offers a systematic review of the trustworthiness and explainability of AI applications in healthcare, incorporating the assessment of quality, bias risk, and data fusion to supplement previous studies and provide more accurate and definitive findings. To this end, 64 recent contributions on the trustworthiness of AI in healthcare were identified from multiple databases (ScienceDirect, Scopus, Web of Science, and IEEE Xplore) using a rigorous literature search method and selection criteria. The selected papers were organised into a coherent and systematic classification of seven categories: explainable robotics, prediction, decision support, blockchain, transparency, digital health, and review. This paper presents a systematic and comprehensive analysis of earlier studies and opens the door to future research by discussing in depth the challenges, motivations, and recommendations. A systematic science mapping analysis was also performed to reorganise and summarise the results of earlier studies and to address the issues of trustworthiness and objectivity. Moreover, this work provides decisive evidence on the trustworthiness of AI in healthcare through eight state-of-the-art critical analyses of the most relevant research gaps. In addition, to the best of our knowledge, this study is the first to investigate the feasibility of trustworthy and explainable AI applications in healthcare by incorporating data fusion techniques and connecting important pieces of information from available healthcare datasets and AI algorithms. The analysis of the reviewed contributions revealed crucial implications for academics and practitioners, and potential methodological aspects for enhancing the trustworthiness of AI applications in the medical sector were then reviewed. Subsequently, the theoretical concepts and current use of 17 XAI methods in healthcare were addressed. Finally, several objectives and guidelines were provided to policymakers for establishing electronic healthcare systems focused on achieving relevant features such as legitimacy, morality, and robustness. Several types of information fusion in healthcare were considered in this study, including data, feature, image, decision, multimodal, hybrid, and temporal fusion.
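As a purely illustrative aside (not drawn from the reviewed paper), the minimal Python sketch below shows how two of the themes summarised above might look in practice: feature-level information fusion of two patient data sources, followed by a model-agnostic explanation of the fused model. The synthetic "vitals" and "labs" modalities, the feature names, the random-forest classifier, and the choice of permutation importance are all assumptions made for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Two synthetic "modalities" standing in for, e.g., vital signs and lab results.
X_vitals, y = make_classification(
    n_samples=500, n_features=4, n_informative=3, n_redundant=1, random_state=0
)
X_labs = rng.normal(size=(500, 3))  # hypothetical, largely uninformative second source

# Feature-level fusion: concatenate per-patient feature vectors from both sources.
X_fused = np.hstack([X_vitals, X_labs])
feature_names = [f"vital_{i}" for i in range(4)] + [f"lab_{i}" for i in range(3)]

X_train, X_test, y_train, y_test = train_test_split(X_fused, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: permutation importance measures how much shuffling each
# fused feature degrades held-out accuracy, a simple global-explainability signal.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {score:.3f}")
```

In principle, the same pattern could be extended to the other fusion types discussed in the review (decision, multimodal, temporal) and to richer XAI methods such as SHAP or LIME.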
Pages: 156-191
Number of pages: 36
Related Papers
50 records in total
  • [11] Explainable Artificial Intelligence in the Medical Domain: A Systematic Review
    Chakrobartty, Shuvro
    El-Gayar, Omar
    DIGITAL INNOVATION AND ENTREPRENEURSHIP (AMCIS 2021), 2021,
  • [12] A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
    Bharati, S.
    Mondal, M. R. H.
    Podder, P.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (04): 1429-1442
  • [13] Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review
    Jung, Jinsun
    Lee, Hyungbok
    Jung, Hyunggu
    Kim, Hyeoneui
    HELIYON, 2023, 9 (05)
  • [14] Artificial intelligence and multimodal data fusion for smart healthcare: topic modeling and bibliometrics
    Chen, Xieling
    Xie, Haoran
    Tao, Xiaohui
    Wang, Fu Lee
    Leng, Mingming
    Lei, Baiying
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (04)
  • [15] A systematic review of trustworthy artificial intelligence applications in natural disasters
    Albahri, A. S.
    Khaleel, Yahya Layth
    Habeeb, Mustafa Abdulfattah
    Ismael, Reem D.
    Hameed, Qabas A.
    Deveci, Muhammet
    Homod, Raad Z.
    Albahri, O. S.
    Alamoodi, A. H.
    Alzubaidi, Laith
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 118
  • [16] Explainable artificial intelligence in skin cancer recognition: A systematic review
    Hauser, Katja
    Kurz, Alexander
    Haggenmueller, Sarah
    Maron, Roman C.
    von Kalle, Christof
    Utikal, Jochen S.
    Meier, Friedegund
    Hobelsberger, Sarah
    Gellrich, Frank F.
    Sergon, Mildred
    Hauschild, Axel
    French, Lars E.
    Heinzerling, Lucie
    Schlager, Justin G.
    Ghoreschi, Kamran
    Schlaak, Max
    Hilke, Franz J.
    Poch, Gabriela
    Kutzner, Heinz
    Berking, Carola
    Heppt, Markus V.
    Erdmann, Michael
    Haferkamp, Sebastian
    Schadendorf, Dirk
    Sondermann, Wiebke
    Goebeler, Matthias
    Schilling, Bastian
    Kather, Jakob N.
    Froehling, Stefan
    Lipka, Daniel B.
    Hekler, Achim
    Krieghoff-Henning, Eva
    Brinker, Titus J.
    EUROPEAN JOURNAL OF CANCER, 2022, 167: 54-69
  • [17] Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
    Giuste, Felipe
    Shi, Wenqi
    Zhu, Yuanda
    Naren, Tarun
    Isgut, Monica
    Sha, Ying
    Tong, Li
    Gupte, Mitali
    Wang, May D.
    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, 2023, 16: 5-21
  • [18] Artificial intelligence in healthcare institutions: A systematic literature review on influencing factors
    Roppelt, Julia Stefanie
    Kanbach, Dominik K.
    Kraus, Sascha
    TECHNOLOGY IN SOCIETY, 2024, 76
  • [19] Artificial intelligence for healthcare and medical education: a systematic review
    Sun, Li
    Yin, Changhao
    Xu, Qiuling
    Zhao, Weina
    AMERICAN JOURNAL OF TRANSLATIONAL RESEARCH, 2023, 15 (07): 4820-4828
  • [20] Physiological signal analysis using explainable artificial intelligence: A systematic review
    Shen, Jian
    Wu, Jinwen
    Liang, Huajian
    Zhao, Zeguang
    Li, Kunlin
    Zhu, Kexin
    Wang, Kang
    Ma, Yu
    Hu, Wenbo
    Guo, Chenxu
    Zhang, Yanan
    Hu, Bin
    NEUROCOMPUTING, 2025, 618