Beyond explaining: Opportunities and challenges of XAI-based model improvement

Cited by: 46
Authors
Weber, Leander [1 ]
Lapuschkin, Sebastian [1 ]
Binder, Alexander [2 ,3 ]
Samek, Wojciech [1 ,4 ,5 ]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, Dept Artificial Intelligence, D-10587 Berlin, Germany
[2] Singapore Inst Technol, ICT Cluster, Singapore 138683, Singapore
[3] Univ Oslo, Dept Informat, N-0373 Oslo, Norway
[4] Tech Univ Berlin, Dept Elect Engn & Comp Sci, D-10587 Berlin, Germany
[5] BIFOLD Berlin Inst Fdn Learning & Data, D-10587 Berlin, Germany
Keywords
Deep neural networks; Explainable artificial intelligence; Model improvement; Artificial intelligence; Black-box; Decisions
DOI
10.1016/j.inffus.2022.11.013
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. Despite the development of a multitude of methods to explain the decisions of black-box classifiers in recent years, these tools are seldom used beyond visualization purposes. Only recently have researchers started to employ explanations in practice to actually improve models. This paper offers a comprehensive overview of techniques that apply XAI practically to obtain better ML models, and systematically categorizes these approaches, comparing their respective strengths and weaknesses. We provide a theoretical perspective on these methods, and show empirically, through experiments in both toy and realistic settings, how explanations can help improve properties such as a model's generalization ability or reasoning, among others. We further discuss potential caveats and drawbacks of these methods. We conclude that while model improvement based on XAI can have significant beneficial effects, even on complex model properties that are not easily quantifiable, these methods need to be applied carefully, since their success can depend on a number of factors, such as the model and dataset used or the employed explanation method.
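To make concrete what "employing explanations in practice to improve models" can look like, below is a minimal sketch (in Python, assuming PyTorch) of one representative technique from this literature: penalizing gradient-based attributions on input features known to be irrelevant, in the spirit of "right for the right reasons" regularization. This is an illustrative example, not the paper's own method; model, irrelevant_mask, and lambda_expl are hypothetical placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def explanation_regularized_loss(model, x, y, irrelevant_mask, lambda_expl=1.0):
        # Require gradients on the input so a gradient explanation can be computed.
        x = x.clone().requires_grad_(True)
        logits = model(x)
        task_loss = F.cross_entropy(logits, y)
        # Simple gradient-based attribution: d(sum of log-probabilities)/dx.
        log_probs = F.log_softmax(logits, dim=1)
        grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
        # Penalize attribution mass on features an annotator marked as irrelevant.
        explanation_penalty = (irrelevant_mask * grads).pow(2).sum()
        return task_loss + lambda_expl * explanation_penalty

    # Toy usage: a linear classifier on 8 features; the first 4 are marked irrelevant.
    model = nn.Linear(8, 2)
    x = torch.randn(16, 8)
    y = torch.randint(0, 2, (16,))
    irrelevant_mask = torch.zeros(8)
    irrelevant_mask[:4] = 1.0

    loss = explanation_regularized_loss(model, x, y, irrelevant_mask)
    loss.backward()  # parameter gradients now reflect both task fit and the explanation constraint

The trade-off weight lambda_expl controls how strongly the explanation constraint competes with the task loss; as the abstract cautions, suitable settings can vary with the model, dataset, and explanation method.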
Pages: 154-176
Page count: 23