Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Cited by: 26
Authors
Mersha, Melkamu [1 ]
Lam, Khang [2 ]
Wood, Joseph [1 ]
Alshami, Ali K. [1 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Coll Engn & Appl Sci, Colorado Springs, CO 80918 USA
[2] Can Tho Univ, Coll Informat & Commun Technol, Can Tho 90000, Vietnam
Keywords
XAI; Explainable artificial intelligence; Interpretable deep learning; Machine learning; Neural networks; Evaluation methods; Computer vision; Natural language processing; NLP; Transformers; Time series; Healthcare; Autonomous cars; BLACK-BOX; PREDICTION MODEL; LEARNING-MODELS; AI; XAI; INTERPRETABILITY; CLASSIFICATION; EXPLANATIONS; TRUSTWORTHY; DECISIONS;
DOI
10.1016/j.neucom.2024.128111
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations for how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, there remains a gap in the literature as there are no comprehensive reviews that delve into the detailed mathematical representations, design methodologies of XAI models, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
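To ground the abstract's idea of explaining an individual model decision, here is a minimal sketch of a perturbation-based feature attribution, the kind of post-hoc, model-agnostic technique that XAI taxonomies such as this survey's typically cover. It is an illustration written for this record, not code from the paper; the occlusion_attribution helper, the scikit-learn classifier, and the breast-cancer dataset are all assumed choices.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model to explain (any classifier with predict_proba works).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def occlusion_attribution(model, x, baseline):
    """Hypothetical helper: score each feature by the drop in predicted
    probability of the originally predicted class when that feature is
    replaced by a baseline value (a crude perturbation-based attribution)."""
    x = np.asarray(x, dtype=float)
    p_ref = model.predict_proba(x.reshape(1, -1))[0]
    cls = int(np.argmax(p_ref))                     # class the model originally predicts
    scores = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] = baseline[i]                     # "remove" feature i
        p = model.predict_proba(x_pert.reshape(1, -1))[0]
        scores[i] = p_ref[cls] - p[cls]             # confidence lost without feature i
    return scores

attributions = occlusion_attribution(model, X[0], baseline=X.mean(axis=0))
top = np.argsort(-np.abs(attributions))[:5]
print("Most influential features for sample 0:", top, attributions[top])

Mean-value occlusion is only one perturbation strategy; methods such as LIME and SHAP refine the same idea with local surrogate models and Shapley-value weighting, which is where a detailed survey treatment becomes useful.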
Pages: 25
Related papers
226 items in total