Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Cited by: 9
Authors
Mersha, Melkamu [1 ]
Lam, Khang [2 ]
Wood, Joseph [1 ]
Alshami, Ali K. [1 ]
Kalita, Jugal [1 ]
Affiliations
[1] Univ Colorado, Coll Engn & Appl Sci, Colorado Springs, CO 80918 USA
[2] Can Tho Univ, Coll Informat & Commun Technol, Can Tho 90000, Vietnam
Keywords
XAI; Explainable artificial intelligence; Interpretable deep learning; Machine learning; Neural networks; Evaluation methods; Computer vision; Natural language processing; NLP; Transformers; Time series; Healthcare; Autonomous cars; BLACK-BOX; PREDICTION MODEL; LEARNING-MODELS; AI; XAI; INTERPRETABILITY; CLASSIFICATION; EXPLANATIONS; DECISIONS; VEHICLES
DOI
10.1016/j.neucom.2024.128111
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations of how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, a gap remains in the literature: no comprehensive review delves into the detailed mathematical representations of XAI models, their design methodologies, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, the beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different domains. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
Pages: 25