Grounding Visual Explanations

Cited by: 97
Authors
Hendricks, Lisa Anne [1 ]
Hu, Ronghang [1 ]
Darrell, Trevor [1 ]
Akata, Zeynep [2 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Univ Amsterdam, Amsterdam, Netherlands
Source
COMPUTER VISION - ECCV 2018, PT II | 2018 / Vol. 11206
Keywords
Explainability; Counterfactuals; Grounding; Phrase correction
DOI
10.1007/978-3-030-01216-8_17
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing visual explanation-generating agents learn to fluently justify a class prediction. However, they may mention visual attributes that reflect a strong class prior even when the evidence is not actually present in the image. This is particularly concerning because such agents ultimately fail to build trust with human users. To overcome this limitation, we propose a phrase-critic model to refine generated candidate explanations, augmented with flipped phrases that we use as negative examples during training. At inference time, our phrase-critic model takes an image and a candidate explanation as input and outputs a score indicating how well the candidate explanation is grounded in the image. Our explainable AI agent is capable of providing counterarguments for an alternative prediction, i.e. counterfactuals, along with explanations that justify the correct classification decision. Our model improves the textual explanation quality of fine-grained classification decisions on the CUB dataset by mentioning phrases that are grounded in the image. Moreover, on the FOIL tasks, our agent detects when there is a mistake in a sentence, grounds the incorrect phrase, and corrects it significantly better than other models.
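A minimal sketch of the phrase-critic idea described in the abstract, assuming pooled CNN image features and fixed-size phrase embeddings (all names and dimensions here are illustrative assumptions, not the authors' implementation): a small PyTorch network scores an (image, phrase) pair, trained with a margin ranking loss so that grounded phrases outscore flipped-phrase negatives.

import torch
import torch.nn as nn

IMG_DIM, TXT_DIM, HID = 2048, 300, 512  # assumed feature sizes

class PhraseCritic(nn.Module):
    """Scores how well a phrase is grounded in an image, in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + TXT_DIM, HID),
            nn.ReLU(),
            nn.Linear(HID, 1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat, phrase_emb):
        # Concatenate image and phrase features, map to a scalar score.
        return self.net(torch.cat([img_feat, phrase_emb], dim=-1)).squeeze(-1)

critic = PhraseCritic()
img = torch.randn(4, IMG_DIM)   # e.g. pooled CNN features (placeholder data)
pos = torch.randn(4, TXT_DIM)   # embeddings of grounded phrases
neg = torch.randn(4, TXT_DIM)   # embeddings of "flipped" negative phrases

# Margin ranking loss: grounded phrases should outscore flipped ones by >= 1.
loss = torch.clamp(1.0 - critic(img, pos) + critic(img, neg), min=0).mean()
loss.backward()

At inference time, such a critic simply scores each candidate explanation against the image, and the highest-scoring (best-grounded) candidate is kept.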
Pages: 269-286
Page count: 18