Cross-modal multi-relationship aware reasoning for image-text matching

Times Cited: 2
Authors
Zhang, Jin [1 ]
He, Xiaohai [1 ]
Qing, Linbo [1 ]
Liu, Luping [1 ]
Luo, Xiaodong [1 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610064, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image-text matching; Visual multi-relationship; Graph neural network; Cross-modal retrieval; LANGUAGE; NETWORK;
DOI
10.1007/s11042-020-10466-8
CLC Number
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
Cross-modal image-text matching has attracted considerable interest in both the computer vision and natural language processing communities. The central problem in image-text matching is learning compact cross-modal representations and the correlation between image and text representations. The task poses two major challenges. First, current image representation methods focus on semantic information and disregard the spatial position relations between image regions. Second, most existing methods pay little attention to improving the textual representation, which plays a significant role in image-text matching. To address these issues, we designed a decipherable cross-modal multi-relationship aware reasoning network (CMRN) for image-text matching. In particular, a new method is proposed to extract multiple relationships and to learn the correlations between image regions, covering two kinds of visual relations: geometric position relations and semantic interactions. In addition, images are processed as graphs, and a novel spatial relation encoder is introduced to perform reasoning on the graphs using a graph convolutional network (GCN) with an attention mechanism. Thereafter, a contextual text encoder based on Bidirectional Encoder Representations from Transformers (BERT) is adopted to learn distinctive textual representations. To verify the effectiveness of the proposed model, extensive experiments were conducted on two public datasets, MSCOCO and Flickr30K. The experimental results show that CMRN achieves superior performance compared with state-of-the-art methods. On Flickr30K, the proposed method outperforms state-of-the-art methods by more than 7.4% relatively in text retrieval with an image query and by 5.0% relatively in image retrieval with a text query (based on Recall@1). On MSCOCO, performance reaches 73.9% for text retrieval and 60.4% for image retrieval (based on Recall@1).
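The abstract names a geometric position relation between regions but gives no formula in this record. The sketch below shows one common encoding of pairwise box geometry (log-scaled relative offsets and size ratios, in the style popularized by relation networks for object detection); the function name, the (cx, cy, w, h) box format, and the 4-d feature are illustrative assumptions, not the paper's exact definition.

```python
import torch

def geometric_relation(boxes):
    """Pairwise spatial-relation features for N region boxes.

    boxes: (N, 4) tensor of (cx, cy, w, h) coordinates (format assumed).
    Returns an (N, N, 4) tensor of log-scaled relative geometry, one
    common way to encode geometric position relations between regions.
    """
    cx, cy, w, h = boxes.unbind(-1)                # each of shape (N,)
    dx = (cx[None, :] - cx[:, None]) / w[:, None]  # x offset, scale-normalized
    dy = (cy[None, :] - cy[:, None]) / h[:, None]  # y offset, scale-normalized
    dw = w[None, :] / w[:, None]                   # width ratio
    dh = h[None, :] / h[:, None]                   # height ratio
    eps = 1e-6                                     # avoid log(0)
    return torch.stack([
        torch.log(dx.abs() + eps),
        torch.log(dy.abs() + eps),
        torch.log(dw + eps),
        torch.log(dh + eps),
    ], dim=-1)
```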
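For the graph reasoning step, the record only says that a GCN with an attention mechanism performs reasoning on region graphs. Below is a minimal sketch of one such layer, in which content-based attention, biased by the geometric features above, gates message passing between fully connected region nodes; the class name, residual update, and single-head design are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGCNLayer(nn.Module):
    """One graph-reasoning layer: attention-weighted message passing
    over region nodes (illustrative, not the paper's exact layer)."""

    def __init__(self, dim, rel_dim=4):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.rel_bias = nn.Linear(rel_dim, 1)  # inject geometric relations
        self.out = nn.Linear(dim, dim)

    def forward(self, nodes, rel):
        # nodes: (N, dim) region features; rel: (N, N, rel_dim) geometry
        q, k, v = self.query(nodes), self.key(nodes), self.value(nodes)
        scores = q @ k.t() / q.size(-1) ** 0.5            # content affinity
        scores = scores + self.rel_bias(rel).squeeze(-1)  # geometric bias
        attn = F.softmax(scores, dim=-1)                  # row-normalized edges
        messages = attn @ v                               # aggregate neighbors
        return nodes + F.relu(self.out(messages))         # residual update
```

A stack of such layers would refine the region features before they are pooled into a global image embedding.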
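On the text side, the abstract states only that a BERT-based contextual encoder produces the sentence representations that are matched against image embeddings. The following is a hedged sketch of one standard wiring: masked mean pooling over BERT states, projection into a joint space, and a bidirectional hinge loss with hardest in-batch negatives (VSE++-style). The pooling choice, joint-space size, and margin are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
proj = torch.nn.Linear(768, 1024)  # joint embedding size assumed

def encode_text(sentences):
    """Masked mean pooling of BERT states (pooling choice is an assumption)."""
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = bert(**batch).last_hidden_state              # (B, L, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, L, 1)
    sent = (hidden * mask).sum(1) / mask.sum(1)           # average real tokens
    return F.normalize(proj(sent), dim=-1)                # unit-norm embedding

def triplet_ranking_loss(img, txt, margin=0.2):
    """Bidirectional hinge loss with hardest in-batch negatives (VSE++-style)."""
    sim = img @ txt.t()                       # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)             # matched-pair scores
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    cost_t = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    cost_i = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)
    return cost_t.max(1)[0].mean() + cost_i.max(0)[0].mean()
```

Recall@1, the metric quoted in the abstract, is then the fraction of queries whose top-ranked candidate under this similarity is the ground-truth match.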
Pages: 12005-12027
Number of Pages: 23
Related Papers
57 records in total
[21]   Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations [J].
Krishna, Ranjay ;
Zhu, Yuke ;
Groth, Oliver ;
Johnson, Justin ;
Hata, Kenji ;
Kravitz, Joshua ;
Chen, Stephanie ;
Kalantidis, Yannis ;
Li, Li-Jia ;
Shamma, David A. ;
Bernstein, Michael S. ;
Li, Fei-Fei .
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2017, 123 (01) :32-73
[22]   Stacked Cross Attention for Image-Text Matching [J].
Lee, Kuang-Huei ;
Chen, Xi ;
Hua, Gang ;
Hu, Houdong ;
He, Xiaodong .
COMPUTER VISION - ECCV 2018, PT IV, 2018, 11208 :212-228
[23]   Visual Semantic Reasoning for Image-Text Matching [J].
Li, Kunpeng ;
Zhang, Yulun ;
Li, Kai ;
Li, Yuanyuan ;
Fu, Yun .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :4653-4661
[24]   Relation-Aware Graph Attention Network for Visual Question Answering [J].
Li, Linjie ;
Gan, Zhe ;
Cheng, Yu ;
Liu, Jingjing .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :10312-10321
[25]   Identity-Aware Textual-Visual Matching with Latent Co-attention [J].
Li, Shuang ;
Xiao, Tong ;
Li, Hongsheng ;
Yang, Wei ;
Wang, Xiaogang .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :1908-1917
[26]   Microsoft COCO: Common Objects in Context [J].
Lin, Tsung-Yi ;
Maire, Michael ;
Belongie, Serge ;
Hays, James ;
Perona, Pietro ;
Ramanan, Deva ;
Dollar, Piotr ;
Zitnick, C. Lawrence .
COMPUTER VISION - ECCV 2014, PT V, 2014, 8693 :740-755
[27]   Leveraging Visual Question Answering for Image-Caption Ranking [J].
Lin, Xiao ;
Parikh, Devi .
COMPUTER VISION - ECCV 2016, PT II, 2016, 9906 :261-277
[28]   Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching [J].
Liu, Chunxiao ;
Mao, Zhendong ;
Liu, An-An ;
Zhang, Tianzhu ;
Wang, Bin ;
Zhang, Yongdong .
PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, :3-11
[29]   CycleMatch: A cycle-consistent embedding network for image-text matching [J].
Liu, Yu ;
Guo, Yanming ;
Liu, Li ;
Bakker, Erwin M. ;
Lew, Michael S. .
PATTERN RECOGNITION, 2019, 93 :365-379
[30]   ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks [J].
Lu, Jiasen ;
Batra, Dhruv ;
Parikh, Devi ;
Lee, Stefan .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2019, 32