Adaptive Label-Aware Graph Convolutional Networks for Cross-Modal Retrieval

Cited by: 34
Authors
Qian, Shengsheng [1 ,2 ]
Xue, Dizhan [1 ,2 ]
Fang, Quan [1 ,2 ]
Xu, Changsheng [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518055, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correlation; Semantics; Task analysis; Adaptation models; Adaptive systems; Birds; Oceans; Cross-modal retrieval; Deep learning; Graph convolutional networks;
DOI
10.1109/TMM.2021.3101642
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
The cross-modal retrieval task has attracted continuous attention in recent years with the increasing scale of multi-modal data, and it has broad application prospects including multimedia data management and intelligent search engines. Most existing methods project data of different modalities into a common representation space, where label information is often exploited to distinguish samples from different semantic categories. However, they typically treat each label as an independent individual and ignore the underlying semantic structure of labels. In this paper, we propose an end-to-end adaptive label-aware graph convolutional network (ALGCN) with both an instance representation learning branch and a label representation learning branch, which can obtain modality-invariant and discriminative representations for cross-modal retrieval. First, we construct an instance representation learning branch to transform instances of different modalities into a common representation space. Second, we adopt a Graph Convolutional Network (GCN) to learn inter-dependent classifiers in the label representation learning branch. In addition, a novel adaptive correlation matrix is proposed to efficiently explore and preserve the semantic structure of labels in a data-driven manner. Together with a robust self-supervision loss, the GCN can be trained to learn an effective and robust correlation matrix for feature propagation. Comprehensive experimental results on three benchmark datasets, NUS-WIDE, MIRFlickr and MS-COCO, demonstrate the superiority of ALGCN compared with state-of-the-art methods in cross-modal retrieval.
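The label representation learning branch described in the abstract propagates label features through a correlation matrix with a GCN. The numpy sketch below illustrates one such graph-convolution step; it is not the authors' implementation, and the row normalization, ReLU activation, dimensions, and random initializations are assumptions made purely for demonstration.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: propagate label features H through
    correlation matrix A, then transform with weight matrix W.
    A is row-normalized so each label aggregates a weighted average of
    the features of its correlated labels (illustrative choice)."""
    A_hat = A / A.sum(axis=1, keepdims=True)
    return np.maximum(A_hat @ H @ W, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
num_labels, d_in, d_out = 4, 8, 8

# H: initial label embeddings (e.g., word vectors of the label names).
H = rng.standard_normal((num_labels, d_in))
# A: a stand-in for the adaptive label-correlation matrix; in the paper
# this matrix is learned in a data-driven manner, here it is random.
A = np.abs(rng.standard_normal((num_labels, num_labels))) + np.eye(num_labels)
W = rng.standard_normal((d_in, d_out)) * 0.1

# Each output row acts as a classifier for one label; because A mixes
# rows, the resulting classifiers are inter-dependent across labels.
classifiers = gcn_layer(H, A, W)
print(classifiers.shape)
```

In the full model, these per-label classifiers would be applied to the instance representations from the other branch to produce label scores.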
Pages: 3520-3532
Page count: 13