Multimodal Explanations: Justifying Decisions and Pointing to the Evidence

Cited by: 229
Authors
Park, Dong Huk [1 ]
Hendricks, Lisa Anne [1 ]
Akata, Zeynep [2 ,3 ]
Rohrbach, Anna [1 ,3 ]
Schiele, Bernt [3 ]
Darrell, Trevor [1 ]
Rohrbach, Marcus [4 ]
Affiliations
[1] Univ Calif Berkeley, EECS, Berkeley, CA 94720 USA
[2] Univ Amsterdam, Amsterdam, Netherlands
[3] MPI Informat, Saarbrucken, Germany
[4] Facebook AI Res, Menlo Pk, CA 94025 USA
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2018
DOI
10.1109/CVPR.2018.00915
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep models that are both effective and explainable are desirable in many settings; prior explainable models have been unimodal, offering either image-based visualization of attention weights or text-based generation of post-hoc justifications. We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths. We collect two new datasets to define and evaluate this task, and propose a novel model which can provide joint textual rationale generation and attention visualization. Our datasets define visual and textual justifications of a classification decision for activity recognition tasks (ACT-X) and for visual question answering tasks (VQA-X). We quantitatively show that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision. We also qualitatively show cases where visual explanation is more insightful than textual explanation, and vice versa, supporting our thesis that multimodal explanation models offer significant benefits over unimodal approaches.
Pages: 8779-8788
Page count: 10