Bi-Directional Spatial-Semantic Attention Networks for Image-Text Matching

Cited by: 78
Authors
Huang, Feiran [1 ]
Zhang, Xiaoming [2 ]
Zhao, Zhonghua [3 ]
Li, Zhoujun [4 ]
Affiliations
[1] Beihang Univ, Beijing Key Lab Network Technol, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[3] Coordinat Ctr China, Natl Comp Emergency Tech Team, Beijing 100029, Peoples R China
[4] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Image-text matching; attention networks; deep learning; spatial-semantic;
DOI
10.1109/TIP.2018.2882225
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image-text matching with deep models has recently achieved remarkable results in many tasks, such as image captioning and image search. A major challenge in matching images and text is that the underlying relations between them are usually complicated, and modeling these relations simplistically can lead to suboptimal performance. In this paper, we develop a novel approach, the bi-directional spatial-semantic attention network, which leverages both the word-to-regions (W2R) relation and the visual-object-to-words (O2W) relation in a holistic deep framework for more effective matching. Specifically, to encode the W2R relation effectively, we adopt an LSTM with a bilinear attention function to infer the image regions that are most related to particular words, which we refer to as the W2R attention network. Conversely, the O2W attention network is proposed to discover the semantically closest words for each visual object in the image, i.e., the visual O2W relation. A deep model that unifies the two directional attention networks in a holistic learning framework is then proposed to learn matching scores for image-text pairs. Compared with existing image-text matching methods, our approach achieves state-of-the-art performance on the Flickr30K and MSCOCO datasets.
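To illustrate the bilinear attention used in the W2R direction, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, hypothetical feature dimensions (300-d word embeddings, 2048-d region features), and a single word attending over a set of image region features. The class and parameter names are illustrative assumptions only.

```python
# Minimal sketch of word-to-regions (W2R) bilinear attention.
# Assumptions: PyTorch, 300-d word vectors, 2048-d region features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttention(nn.Module):
    def __init__(self, word_dim=300, region_dim=2048):
        super().__init__()
        # Bilinear score s_i = w^T W r_i between a word vector w and region feature r_i.
        self.W = nn.Parameter(torch.empty(word_dim, region_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, word, regions):
        # word:    (batch, word_dim)              embedding of the current word
        # regions: (batch, n_regions, region_dim) CNN features of image regions
        scores = torch.einsum('bd,de,bne->bn', word, self.W, regions)
        alpha = F.softmax(scores, dim=1)                       # weights over regions
        attended = torch.bmm(alpha.unsqueeze(1), regions).squeeze(1)
        return attended, alpha                                 # (batch, region_dim), (batch, n_regions)

# Usage: two words, each attending over 36 region features.
att = BilinearAttention()
word = torch.randn(2, 300)
regions = torch.randn(2, 36, 2048)
ctx, alpha = att(word, regions)
print(ctx.shape, alpha.shape)  # torch.Size([2, 2048]) torch.Size([2, 36])
```

In the paper's framework such an attention step would be applied at each LSTM time step so that every word selects its most relevant regions; the O2W direction mirrors this by letting each visual object attend over the words of the sentence.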
Pages: 2008-2020
Page count: 13