MAttNet: Modular Attention Network for Referring Expression Comprehension

Cited by: 627
Authors
Yu, Licheng [1 ]
Lin, Zhe [2 ]
Shen, Xiaohui [2 ]
Yang, Jimei [2 ]
Lu, Xin [2 ]
Bansal, Mohit [1 ]
Berg, Tamara L. [1 ]
Affiliations
[1] Univ N Carolina, Chapel Hill, NC 27515 USA
[2] Adobe Res, San Jose, CA USA
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Funding
National Science Foundation (US)
DOI
10.1109/CVPR.2018.00142
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention, which learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention, which allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. A demo and code are provided.
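The score-combination step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the example module scores, and the weight logits are all hypothetical, and the real model predicts both from learned networks over the expression and image features.

```python
import math

def softmax(xs):
    # Normalize raw logits into module weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def overall_score(module_scores, weight_logits):
    """Combine per-module matching scores (subject, location,
    relationship) into one score for a candidate region, using
    module weights derived from the language-based attention."""
    weights = softmax(weight_logits)
    return sum(w * s for w, s in zip(weights, module_scores))

# Hypothetical example: for an expression like "the red mug left of
# the laptop", the language attention might weight the subject and
# relationship modules heavily and the location module lightly.
scores = [0.8, 0.1, 0.6]    # [subject, location, relationship] scores
logits = [2.0, -1.0, 1.5]   # hypothetical language-predicted logits
score = overall_score(scores, logits)
```

Because the module weights are a convex combination, the overall score always lies between the lowest and highest individual module scores, so a confidently matching subject module can dominate when the expression carries little location information.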
Pages: 1307-1315
Number of pages: 9