Leveraging explanations in interactive machine learning: An overview

Cited by: 13
Authors
Teso, Stefano [1 ]
Alkan, Oznur [2 ]
Stammer, Wolfgang [3 ]
Daly, Elizabeth [4 ]
Affiliations
[1] Univ Trento, CIMeC & DISI, Trento, Italy
[2] Optum, Dublin, Ireland
[3] Tech Univ Darmstadt, Dept Comp Sci, Machine Learning Grp, Darmstadt, Germany
[4] IBM Res, Dublin, Ireland
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2023, Vol. 6
Keywords
human-in-the-loop; explainable AI; interactive machine learning; model debugging; model editing; BLACK-BOX; MODELS; TRUST; INTERPRETABILITY; CLASSIFICATION; SELECTION; HUMANS; AI;
DOI
10.3389/frai.2023.1066049
CLC classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Explanations have attracted increasing interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and act as a mechanism for eliciting user control: once users understand, they can provide feedback. The goal of this paper is to present an overview of research in which explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches by their intended purpose and by how they structure the interaction, and highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, in the hope of spurring further research on this blossoming topic.
Pages: 19