Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Times Cited: 9
Authors
Mersha, Melkamu [1 ]
Lam, Khang [2 ]
Wood, Joseph [1 ]
Alshami, Ali K. [1 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Coll Engn & Appl Sci, Colorado Springs, CO 80918 USA
[2] Can Tho Univ, Coll Informat & Commun Technol, Can Tho 90000, Vietnam
Keywords
XAI; Explainable artificial intelligence; Interpretable deep learning; Machine learning; Neural networks; Evaluation methods; Computer vision; Natural language processing; NLP; Transformers; Time series; Healthcare; Autonomous cars; BLACK-BOX; PREDICTION MODEL; LEARNING-MODELS; AI; XAI; INTERPRETABILITY; CLASSIFICATION; EXPLANATIONS; DECISIONS; VEHICLES
DOI
10.1016/j.neucom.2024.128111
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial intelligence models face significant challenges because of their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations of how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, a gap remains in the literature: no comprehensive review delves into the detailed mathematical representations of XAI models, their design methodologies, and other associated aspects. This paper provides a comprehensive literature review covering common terminologies and definitions, the need for XAI, the beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
Pages: 25