Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Cited by: 26
Authors
Mersha, Melkamu [1 ]
Lam, Khang [2 ]
Wood, Joseph [1 ]
Alshami, Ali K. [1 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Coll Engn & Appl Sci, Colorado Springs, CO 80918 USA
[2] Can Tho Univ, Coll Informat & Commun Technol, Can Tho 90000, Vietnam
Keywords
XAI; Explainable artificial intelligence; Interpretable deep learning; Machine learning; Neural networks; Evaluation methods; Computer vision; Natural language processing; NLP; Transformers; Time series; Healthcare; Autonomous cars; Black-box; Prediction model; Learning models; AI; Interpretability; Classification; Explanations; Trustworthy; Decisions
DOI
10.1016/j.neucom.2024.128111
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations of how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, a gap remains in the literature: no comprehensive review delves into the detailed mathematical representations and design methodologies of XAI models, along with other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, the beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
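To make the idea of "providing explanations for how these models make decisions" concrete, the sketch below illustrates one simple post-hoc, perturbation-based attribution technique of the kind such surveys taxonomize: occlusion, where each input feature is replaced by a baseline value and the drop in the model's output is taken as that feature's importance. The model, weights, and function names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much the model's output drops
    when that feature is replaced with a baseline value."""
    base_score = model(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline        # occlude one feature at a time
        scores[i] = base_score - model(perturbed)
    return scores

# Toy "black-box" model: a linear scorer with weights [2, -1, 0].
weights = np.array([2.0, -1.0, 0.0])
model = lambda v: float(v @ weights)

x = np.array([1.0, 1.0, 1.0])
attr = occlusion_attribution(model, x)
# For a linear model, occlusion recovers each weight * input contribution:
# attr == [2.0, -1.0, 0.0]
```

For a linear model the attributions exactly match each feature's contribution, which makes the toy case easy to verify; for genuine black-box models (deep networks, ensembles) the same perturb-and-measure loop yields an approximate, model-agnostic explanation.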
Pages: 25
References
226 items in total
[21] Atakishiyev S, 2023, arXiv preprint, arXiv:2111.10518
[22] Atakishiyev S, 2024, arXiv preprint, arXiv:2112.11561, DOI 10.48550/ARXIV.2112.11561
[23] Awotunde JB, 2022, Connected e-Health: Integrated IoT and Cloud Computing, p. 417
[24] Bach S, Binder A, Montavon G, Klauschen F, Müller K-R, Samek W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 2015, 10(7)
[25] Bao W, Yue J, Rao Y. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLOS ONE, 2017, 12(7)
[26] Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, García S, Gil-López S, Molina D, Benjamins R, Chatila R, Herrera F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020, 58: 82-115
[27] Bendiab G, Hameurlaine A, Germanos G, Kolokotronis N, Shiaeles S. Autonomous Vehicles Security: Challenges and Solutions Using Blockchain and Artificial Intelligence. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(4): 3614-3637
[28] Bharati S, 2023, IEEE Transactions on Artificial Intelligence
[29] Bian ZX, 2019, IEEE International Conference on Bioinformatics and Biomedicine (BIBM), p. 931, DOI 10.1109/BIBM47256.2019.8983145
[30] Bostrom N, 2018, Artificial Intelligence Safety and Security, p. 57, DOI 10.1201/9781351251389-4