Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Cited by: 26
Authors
Mersha, Melkamu [1 ]
Lam, Khang [2 ]
Wood, Joseph [1 ]
Alshami, Ali K. [1 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Coll Engn & Appl Sci, Colorado Springs, CO 80918 USA
[2] Can Tho Univ, Coll Informat & Commun Technol, Can Tho 90000, Vietnam
Keywords
XAI; Explainable artificial intelligence; Interpretable deep learning; Machine learning; Neural networks; Evaluation methods; Computer vision; Natural language processing; NLP; Transformers; Time series; Healthcare; Autonomous cars; Black box; Prediction model; Learning models; AI; Interpretability; Classification; Explanations; Trustworthy; Decisions
DOI
10.1016/j.neucom.2024.128111
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Artificial intelligence models face significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by explaining how such models make decisions and predictions, thereby ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, a gap remains in the literature: no comprehensive review delves into the detailed mathematical representations of XAI models, their design methodologies, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, the beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
Pages: 25
Cited References: 226 in total