Learning Credible Deep Neural Networks with Rationale Regularization

Cited by: 23
Authors
Du, Mengnan [1 ]
Liu, Ninghao [1 ]
Yang, Fan [1 ]
Hu, Xia [1 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
Source
2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019) | 2019
Keywords
Deep neural network; Explainability; Credibility; Expert rationales;
DOI
10.1109/ICDM.2019.00025
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent explainability studies have shown that state-of-the-art DNNs do not always adopt correct evidence to make decisions. This not only hampers their generalization but also makes them less likely to be trusted by end users. In pursuit of more credible DNNs, in this paper we propose CREX, which encourages DNN models to focus on the evidence that actually matters for the task at hand and to avoid overfitting to data-dependent bias and artifacts. Specifically, CREX regularizes the training process of DNNs with rationales, i.e., subsets of features highlighted by domain experts as justifications for predictions, enforcing that the DNNs generate local explanations that conform with the expert rationales. Even when rationales are not available, CREX can still be useful by requiring the generated explanations to be sparse. Experimental results on two text classification datasets demonstrate the increased credibility of DNNs trained with CREX. Comprehensive analysis further shows that, while CREX does not always improve prediction accuracy on the held-out test set, it significantly increases DNN accuracy on new and previously unseen data beyond the test set, highlighting the advantage of the increased credibility.
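The abstract describes regularizing training so that a model's local explanations concentrate on expert-marked rationale features, with a sparsity fallback when no rationales exist. The following is a minimal, hypothetical sketch of such a regularizer (the function name, mask convention, and weight `lam` are assumptions for illustration, not the paper's actual formulation):

```python
import numpy as np

def rationale_regularizer(attributions, rationale_mask=None, lam=1.0):
    """Hypothetical CREX-style penalty on a model's explanation scores.

    attributions   : per-feature explanation scores for one example
    rationale_mask : 1 where a domain expert marked the feature as a
                     justification, 0 elsewhere; None if unavailable
    lam            : regularization weight (assumed hyperparameter)
    """
    a = np.abs(attributions)
    if rationale_mask is not None:
        # Penalize explanation mass assigned to non-rationale features,
        # pushing the model to justify predictions with expert evidence.
        penalty = np.sum(a * (1 - rationale_mask))
    else:
        # Without rationales, fall back to an L1 sparsity penalty
        # so explanations stay concentrated on few features.
        penalty = np.sum(a)
    return lam * penalty
```

In a training loop, a term like this would be added to the ordinary classification loss, so gradient descent trades prediction accuracy against explanation credibility.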
Pages: 150-159
Page count: 10