Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models

Cited by: 324
Authors
Gu, Jiuxiang [1 ]
Cai, Jianfei [2 ]
Joty, Shafiq [2 ]
Niu, Li [3 ]
Wang, Gang [4 ]
Affiliations
[1] Nanyang Technol Univ, Interdisciplinary Grad Sch, ROSE Lab, Singapore, Singapore
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[3] Rice Univ, Houston, TX 77251 USA
[4] Alibaba AI Labs, Hangzhou, Zhejiang, Peoples R China
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Funding
National Research Foundation, Singapore;
DOI
10.1109/CVPR.2018.00750
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only global abstract features but also local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.
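The abstract builds on the standard cross-modal embedding setup: both modalities are projected into a shared space and trained so that matched image-caption pairs score higher than mismatched ones. The sketch below illustrates that baseline setup with a bidirectional hinge-based ranking loss over a batch; it is an assumption-laden illustration, not the paper's actual model (function names, the cosine similarity choice, and the margin value are all hypothetical, and the paper's generative components are omitted).

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge ranking loss summed over both retrieval directions.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    The margin of 0.2 is an illustrative choice, not taken from the paper.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    sim = img @ txt.T                # cosine similarity matrix, (batch, batch)
    pos = np.diag(sim)               # similarities of the matched pairs
    # image -> text retrieval: other captions in the batch act as negatives
    cost_i2t = np.maximum(0.0, margin + sim - pos[:, None])
    # text -> image retrieval: other images in the batch act as negatives
    cost_t2i = np.maximum(0.0, margin + sim - pos[None, :])
    np.fill_diagonal(cost_i2t, 0.0)  # matched pairs incur no cost
    np.fill_diagonal(cost_t2i, 0.0)
    return cost_i2t.sum() + cost_t2i.sum()
```

With perfectly aligned, mutually orthogonal embeddings the loss is zero; shuffling the captions against the images produces a positive loss, which is the signal a retrieval model would minimize.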
Pages: 7181-7189
Page count: 9