A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks

Cited by: 130
Authors
Islam, Mir Riyanul [1 ]
Ahmed, Mobyen Uddin [1 ]
Barua, Shaibal [1 ]
Begum, Shahina [1 ]
Affiliations
[1] Malardalen Univ, Sch Innovat Design & Engn, Artificial Intelligence & Intelligent Syst Res Gr, Hogskoleplan 1, S-72220 Vasteras, Sweden
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 03
Funding
Swedish Research Council; EU Horizon 2020;
Keywords
explainable artificial intelligence; explainability; evaluation metrics; systematic literature review; NEURAL-NETWORK; BLACK-BOX; MODEL; EXPLANATIONS; INFORMATION; PREDICTIONS; DECISIONS; KNOWLEDGE; CLASSIFICATION; VISUALIZATION;
DOI
10.3390/app12031353
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Artificial intelligence (AI) and machine learning (ML) have advanced rapidly in recent years and are now employed in almost every application domain to build automated or semi-automated systems. Because highly accurate models often lack explainability and interpretability, explainable artificial intelligence (XAI) has grown substantially over the last few years as a means of making these systems more acceptable to humans. The literature provides evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, secondary studies organized around application domains and tasks remain scarce, let alone reviews that follow prescribed guidelines, which could help researchers understand current trends in XAI and inform future research on domain- and application-specific method development. This paper therefore presents a systematic literature review (SLR) of recent developments in XAI methods and evaluation metrics across different application domains and tasks. The study covers 137 recently published articles identified through prominent bibliographic databases. The systematic synthesis of these articles yielded several findings: XAI methods are mostly developed for safety-critical domains worldwide; deep learning and ensemble models are exploited more than other types of AI/ML models; visual explanations are more acceptable to end-users; and robust evaluation metrics are being developed to assess the quality of explanations. Research has focused on adding explanations to widely used AI/ML models for expert users; however, more attention is needed to generate explanations for general users in sensitive domains such as finance and the judicial system.
Pages: 38
Related Papers (50 total)
  • [1] Explainable Artificial Intelligence in Radiotherapy: A Systematic review
    Heising, Luca M.
    Wolfs, Cecile J. A.
    Jacobs, Maria J. A.
    Verhaegen, Frank
    Ou, Carol X. J.
    RADIOTHERAPY AND ONCOLOGY, 2024, 194 : S4444 - S4446
  • [2] The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review
    Ali, Subhan
    Akhlaq, Filza
    Imran, Ali Shariq
    Kastrati, Zenun
    Daudpota, Sher Muhammad
    Moosa, Muhammad
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 166
  • [3] Systematic literature review on the application of explainable artificial intelligence in palliative care studies
    Migiddorj, Battushig
    Batterham, Marijka
    Win, Khin Than
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2025, 200
  • [4] Explainable Artificial Intelligence in the Medical Domain: A Systematic Review
    Chakrobartty, Shuvro
    El-Gayar, Omar
    DIGITAL INNOVATION AND ENTREPRENEURSHIP (AMCIS 2021), 2021,
  • [5] Explainable artificial intelligence (XAI) in finance: a systematic literature review
    Cerneviciene, Jurgita
    Kabasinskas, Audrius
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (08)
  • [6] Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
    Giuste, Felipe
    Shi, Wenqi
    Zhu, Yuanda
    Naren, Tarun
    Isgut, Monica
    Sha, Ying
    Tong, Li
    Gupte, Mitali
    Wang, May D.
    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, 2023, 16 : 5 - 21
  • [7] Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review
    Frasca, M.
    La Torre, D.
    Pravettoni, G.
    Cutica, I.
    DISCOVER ARTIFICIAL INTELLIGENCE, 2024, 1 (1)
  • [9] Explainable artificial intelligence in skin cancer recognition: A systematic review
    Hauser, Katja
    Kurz, Alexander
    Haggenmueller, Sarah
    Maron, Roman C.
    von Kalle, Christof
    Utikal, Jochen S.
    Meier, Friedegund
    Hobelsberger, Sarah
    Gellrich, Frank F.
    Sergon, Mildred
    Hauschild, Axel
    French, Lars E.
    Heinzerling, Lucie
    Schlager, Justin G.
    Ghoreschi, Kamran
    Schlaak, Max
    Hilke, Franz J.
    Poch, Gabriela
    Kutzner, Heinz
    Berking, Carola
    Heppt, Markus V.
    Erdmann, Michael
    Haferkamp, Sebastian
    Schadendorf, Dirk
    Sondermann, Wiebke
    Goebeler, Matthias
    Schilling, Bastian
    Kather, Jakob N.
    Froehling, Stefan
    Lipka, Daniel B.
    Hekler, Achim
    Krieghoff-Henning, Eva
    Brinker, Titus J.
    EUROPEAN JOURNAL OF CANCER, 2022, 167 : 54 - 69
  • [10] Review of Explainable Artificial Intelligence
    Zhao, Yanyu
    Zhao, Xiaoyong
    Wang, Lei
    Wang, Ningning
    COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (14) : 1 - 14