On Attribution of Recurrent Neural Network Predictions via Additive Decomposition

Cited by: 35
Authors
Du, Mengnan [1 ]
Liu, Ninghao [1 ]
Yang, Fan [1 ]
Ji, Shuiwang [1 ]
Hu, Xia [1 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
Source
WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019) | 2019
Keywords
Deep learning interpretation; Recurrent neural network; Text classification; Sentiment analysis
DOI
10.1145/3308558.3313545
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
RNN models have achieved state-of-the-art performance in a wide range of text mining tasks. However, these models are often regarded as black-boxes and criticized for their lack of interpretability. In this paper, we enhance the interpretability of RNNs by providing rationales for their predictions. Interpreting RNNs is nevertheless a challenging problem. First, unlike existing methods that rely on local approximation, we aim to provide rationales that are faithful to the decision-making process of the RNN model itself. Second, a flexible interpretation method should be able to assign contribution scores to text segments of varying lengths, rather than only to individual words. To tackle these challenges, we propose a novel attribution method, called REAT, to provide interpretations for RNN predictions. REAT decomposes the final prediction of an RNN into additive contributions of each word in the input text. This additive decomposition further enables REAT to obtain phrase-level attribution scores. In addition, REAT is generally applicable to various RNN architectures, including GRU, LSTM, and their bidirectional versions. Experimental results demonstrate the faithfulness and interpretability of the proposed attribution method. Comprehensive analysis shows that our attribution method can unveil useful linguistic knowledge captured by RNNs. Further analysis demonstrates that our method can also serve as a debugging tool to examine the vulnerability and failure causes of RNNs, which may point to several promising future directions for improving the generalization ability of RNNs.
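To make the additive-decomposition idea concrete, below is a minimal PyTorch sketch of one simple decomposition in the spirit of the abstract, not the paper's exact REAT formulation: the class logit w_c · h_T telescopes into per-word terms w_c · (h_t − h_{t−1}), and a phrase score is the sum of the word scores in its span. All dimensions and the toy input are illustrative assumptions.

```python
# Sketch only (not the paper's exact REAT formulation): additively
# decompose an RNN classifier's logit via the telescoping sum
#   w_c . h_T = w_c . h_0 + sum_t w_c . (h_t - h_{t-1}),
# so word t gets the score w_c . (h_t - h_{t-1}).
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim, hidden_dim, num_classes = 100, 16, 32, 2

embed = nn.Embedding(vocab_size, embed_dim)
rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
classifier = nn.Linear(hidden_dim, num_classes)

tokens = torch.tensor([[5, 17, 42, 8]])                      # one toy sentence
hs, _ = rnn(embed(tokens))                                   # (1, T, hidden_dim)
h = torch.cat([torch.zeros(1, 1, hidden_dim), hs], dim=1)    # prepend h_0 = 0

target_class = 1
w_c = classifier.weight[target_class]                        # output-layer row for the class

# Word-level scores: contribution of step t is w_c . (h_t - h_{t-1}).
word_scores = (h[:, 1:] - h[:, :-1]) @ w_c                   # (1, T)

# Phrase-level score for span [i, j): sum of its word-level scores,
# which equals w_c . (h_j - h_i) by telescoping.
phrase_score = word_scores[0, 1:3].sum()

# Sanity check: word scores plus the class bias recover the logit
# exactly, since h_0 = 0 makes the w_c . h_0 term vanish.
logit = classifier(hs[:, -1])[0, target_class]
assert torch.allclose(word_scores.sum() + classifier.bias[target_class],
                      logit, atol=1e-5)
```

Because the decomposition is exactly additive, a contiguous segment of any length gets a consistent score, which is what allows phrase-level attribution on top of word-level scores.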
Pages: 383-393
Page count: 11