Unsupervised Cross-Media Retrieval Using Domain Adaptation With Scene Graph

Cited by: 45
Authors
Peng, Yuxin [1 ]
Chi, Jingze [1 ]
Affiliations
[1] Peking Univ, Wangxuan Inst Comp Technol, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-media retrieval; domain adaptation; unsupervised learning; scene graph;
DOI
10.1109/TCSVT.2019.2953692
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Existing cross-media retrieval methods are usually conducted in a supervised setting, which requires large amounts of annotated training data. Since annotating cross-media data is extremely labor-intensive, unsupervised cross-media retrieval is in high demand; it is also very challenging, because heterogeneous distributions across different media types must be handled without any annotated information. To address this challenge, this paper proposes the Domain Adaptation with Scene Graph (DASG) approach, which transfers knowledge from the source domain to improve cross-media retrieval in the target domain. Our DASG approach takes Visual Genome as the source domain, which contains image knowledge in the form of scene graphs. The main contributions of this paper are as follows. First, we propose to address unsupervised cross-media retrieval by domain adaptation. Instead of using labor-intensive annotations of cross-media data in the training stage, our DASG approach learns cross-media correlation knowledge from Visual Genome, and then transfers this knowledge to cross-media retrieval through media alignment and distribution alignment. Second, our DASG approach utilizes fine-grained information via scene graph representation to enhance generalization capability across domains. The generated scene graph representation builds (subject -> relationship -> object) triplets by exploiting the objects and relationships within an image and its text, which makes the cross-media correlation more precise and promotes unsupervised cross-media retrieval. Third, we exploit related tasks, including object and relationship detection, to learn more discriminative features across domains. Leveraging the semantic information of objects and relationships improves cross-media correlation learning for retrieval. Experiments on two widely used cross-media retrieval datasets, Flickr-30K and MS-COCO, show the effectiveness of our DASG approach.
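The (subject -> relationship -> object) triplets mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the annotation format, the `Triplet` type, and the Jaccard overlap score are hypothetical simplifications for exposition, not the paper's actual learned correlation model.

```python
# Hypothetical sketch of scene-graph triplet construction and matching.
# The data format and overlap score are illustrative, not the DASG pipeline.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triplet:
    """One (subject -> relationship -> object) scene-graph edge."""
    subject: str
    relation: str
    obj: str


def triplets_from_annotations(annotations):
    """Turn raw (subject, relation, object) tuples into a set of triplets."""
    return {Triplet(s, r, o) for (s, r, o) in annotations}


def triplet_overlap(image_triplets, text_triplets):
    """Jaccard overlap between the two triplet sets -- a toy stand-in for
    the learned cross-media correlation."""
    union = image_triplets | text_triplets
    if not union:
        return 0.0
    return len(image_triplets & text_triplets) / len(union)


# Toy example: triplets detected in an image vs. parsed from its caption.
img = triplets_from_annotations([("man", "riding", "horse"),
                                 ("horse", "on", "beach")])
txt = triplets_from_annotations([("man", "riding", "horse")])
score = triplet_overlap(img, txt)
```

Representing both media as sets of the same triplet type is what lets image and text be compared in a shared space; the actual approach learns this alignment rather than computing a set overlap.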
Pages: 4368-4379
Page count: 12