Context-Aware Multi-View Summarization Network for Image-Text Matching

Cited by: 107
Authors
Qu, Leigang [1 ]
Liu, Meng [2 ]
Cao, Da [3 ]
Nie, Liqiang [1 ]
Tian, Qi [4 ]
Affiliations
[1] Shandong Univ, Qingdao, Peoples R China
[2] Shandong Jianzhu Univ, Qingdao, Peoples R China
[3] Hunan Univ, Changsha, Peoples R China
[4] Huawei Cloud & AI, Changsha, Peoples R China
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China;
Keywords
Image-Text Matching; Cross-Modal Retrieval; Multi-View Summarization; Context Modeling; LANGUAGE;
DOI
10.1145/3394171.3413961
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image-text matching is a vital yet challenging task in multimedia analysis. Over the past decades, great efforts have been made to bridge the semantic gap between the visual and textual modalities. Despite this progress, most prior work still faces a multi-view description challenge, i.e., how to align one image with multiple textual descriptions of diverse semantics. Toward this end, we present a novel context-aware multi-view summarization network that summarizes context-enhanced visual region information from multiple views. More specifically, we design an adaptive gating self-attention module to extract representations of visual regions and words; by controlling the internal information flow, it adaptively captures context information. Afterwards, we introduce a summarization module with a diversity regularization to aggregate region-level features into image-level ones from different perspectives. Finally, we devise a multi-view matching scheme that matches the multi-view image features with the corresponding text features. Extensive experiments on two benchmark datasets, i.e., Flickr30K and MS-COCO, demonstrate the superiority of our model over several state-of-the-art baselines.
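The record gives no code for the pipeline the abstract describes (gated self-attention for context modeling, then multi-view summarization with a diversity regularizer). The following is a minimal, dependency-free sketch of those three ingredients under simplifying assumptions of our own: a single scalar gate in place of the paper's learned adaptive gates, and fixed per-view weight vectors in place of learned summarization queries.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gated_self_attention(regions, gate):
    """Context-enhance each region vector by attending over all regions,
    then gate how much context flows in (gate in [0, 1]; gate=0 keeps the
    original features). Simplified stand-in for the adaptive gating module."""
    out = []
    for q in regions:
        scores = softmax([sum(a * b for a, b in zip(q, k)) for k in regions])
        ctx = [sum(w * k[d] for w, k in zip(scores, regions))
               for d in range(len(q))]
        out.append([(1 - gate) * qv + gate * cv for qv, cv in zip(q, ctx)])
    return out

def multi_view_summaries(regions, view_weights):
    """Aggregate region-level features into one image-level vector per view,
    using a softmax over each view's (here fixed, in the paper learned) weights."""
    views = []
    for w in view_weights:
        a = softmax(w)
        views.append([sum(ai * r[d] for ai, r in zip(a, regions))
                      for d in range(len(regions[0]))])
    return views

def diversity_penalty(views):
    """Sum of squared cosine similarities between distinct view summaries;
    minimizing it pushes the views to cover different image content."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    p = 0.0
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            p += cos(views[i], views[j]) ** 2
    return p
```

In this sketch, matching would then score each text against all view summaries and keep the best-aligned view, which is one plausible reading of the "multi-view matching scheme" mentioned above; the actual learned gating, summarization queries, and matching loss are in the paper itself.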
Pages: 1047-1055
Page count: 9
Cited References
44 in total
[21]   Leveraging Visual Question Answering for Image-Caption Ranking [J].
Lin, Xiao ;
Parikh, Devi .
COMPUTER VISION - ECCV 2016, PT II, 2016, 9906 :261-277
[22]   Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching [J].
Liu, Chunxiao ;
Mao, Zhendong ;
Liu, An-An ;
Zhang, Tianzhu ;
Wang, Bin ;
Zhang, Yongdong .
PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, :3-11
[23]   Cross-modal Moment Localization in Videos [J].
Liu, Meng ;
Wang, Xiang ;
Nie, Liqiang ;
Tian, Qi ;
Chen, Baoquan ;
Chua, Tat-Seng .
PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, :843-851
[24]  
Lu JS, 2016, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, V29
[25]  
Mikolov T, 2013, INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS (ICLR)
[26]   Dual Attention Networks for Multimodal Reasoning and Matching [J].
Nam, Hyeonseob ;
Ha, Jung-Woo ;
Kim, Jeonghee .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :2156-2164
[27]  
Rajpurkar Pranav, 2016, PROCEEDINGS OF THE 2016 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP)
[28]   Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks [J].
Ren, Shaoqing ;
He, Kaiming ;
Girshick, Ross ;
Sun, Jian .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (06) :1137-1149
[29]  
Rush Alexander M., 2015, Proc. EMNLP, P379
[30]   Adversarial Representation Learning for Text-to-Image Matching [J].
Sarafianos, Nikolaos ;
Xu, Xiang ;
Kakadiaris, Ioannis A. .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :5813-5823