Generally-Occurring Model Change for Robust Counterfactual Explanations

Cited by: 0
Authors
Xu, Ao [1 ]
Wu, Tieru [1 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Changchun, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2024, PT IV | 2024, Vol. 15019
Keywords
Explainable Artificial Intelligence; Counterfactual Explanation;
DOI
10.1007/978-3-031-72341-4_15
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
With the growing impact of algorithmic decision-making on human lives, model interpretability has become a critical issue in machine learning. Counterfactual explanation is an important method in interpretable machine learning: it helps users understand not only why a model made a specific decision, but also how that decision could be changed. It is therefore an important task to study the robustness of counterfactual explanation generation algorithms to model changes. Previous literature proposed the concept of Naturally-Occurring Model Change, which deepened our understanding of robustness to model change. In this paper, we first generalize that concept, proposing a broader notion of model parameter change, Generally-Occurring Model Change, which has a wider range of applicability, and we prove the corresponding probabilistic guarantees. In addition, we consider a more specific problem, dataset perturbation, and derive relevant theoretical results by combining optimization theory.
Pages: 215-229
Page count: 15