VCNet: A Self-explaining Model for Realistic Counterfactual Generation

Cited by: 5
Authors
Guyomard, Victor [1 ,2 ]
Fessant, Francoise [1 ]
Guyet, Thomas [3 ]
Bouadi, Tassadit [2 ]
Termier, Alexandre [2 ]
Affiliations
[1] Orange Labs, Lannion, France
[2] Univ Rennes, CNRS, Inria, IRISA, Rennes, France
[3] Ctr Lyon, Inria, Villeurbanne, France
Source
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT I | 2023 / Vol. 13713
Keywords
Interpretability; Counterfactual explanation; Realistic counterfactuals; Joint training; Conditional VAE; Generative network;
DOI
10.1007/978-3-031-26387-3_27
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Counterfactual explanation is a common class of methods for making local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the decision predicted by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet - Variational Counter Net - a model architecture that combines a predictor and a counterfactual generator that are jointly trained, for regression or classification tasks. VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is done by learning a variational autoencoder conditioned on the output of the predictor, in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics. The results are competitive with state-of-the-art methods.
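The abstract describes a predictor trained jointly with a conditional VAE whose encoder and decoder are conditioned on the predictor's output, so that re-decoding an instance's latent code under a different target class yields a counterfactual close to that class's distribution. A minimal NumPy sketch of this conditioning mechanism, using linear maps instead of neural networks and omitting the joint training loop; all class and variable names here are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class VCNetSketch:
    """Toy forward pass: a predictor plus a conditional VAE whose
    encoder and decoder both receive the predictor's class output."""

    def __init__(self, d_in, n_classes, d_latent):
        self.W_pred = rng.normal(size=(d_in, n_classes)) * 0.1
        self.W_enc = rng.normal(size=(d_in + n_classes, 2 * d_latent)) * 0.1
        self.W_dec = rng.normal(size=(d_latent + n_classes, d_in)) * 0.1
        self.d_latent = d_latent

    def predict(self, x):
        # Predictor head: class probabilities used as the VAE condition.
        return softmax(x @ self.W_pred)

    def encode(self, x, c):
        # Encoder conditioned on the predicted class distribution.
        h = np.concatenate([x, c], axis=-1) @ self.W_enc
        return h[:, :self.d_latent], h[:, self.d_latent:]  # mu, log_var

    def decode(self, z, c):
        # Decoder conditioned on a (possibly different) class label.
        return np.concatenate([z, c], axis=-1) @ self.W_dec

    def counterfactual(self, x, target_class):
        # Encode under the predicted class, then decode the latent mean
        # under the desired target class to obtain the counterfactual.
        c_pred = self.predict(x)
        mu, _ = self.encode(x, c_pred)
        c_target = np.eye(c_pred.shape[-1])[[target_class]]
        return self.decode(mu, np.repeat(c_target, len(x), axis=0))

model = VCNetSketch(d_in=4, n_classes=2, d_latent=3)
x = rng.normal(size=(5, 4))
cf = model.counterfactual(x, target_class=1)
assert cf.shape == x.shape
```

In the paper's actual architecture the encoder, decoder, and predictor are neural networks trained jointly on a combined ELBO and prediction loss; this sketch only shows how conditioning the VAE on the predictor's output lets a single forward pass produce both a prediction and a counterfactual, with no per-instance optimisation.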
Pages: 437-453 (17 pages)