Adversarial task-specific learning

Cited: 0
Authors
Fu, Xin [1 ,2 ]
Zhao, Yao [1 ,2 ]
Liu, Ting [1 ,2 ]
Wei, Yunchao [3 ]
Li, Jianan [4 ]
Wei, Shikui [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Univ Illinois, Beckman Inst, Urbana, IL USA
[4] Beijing Inst Technol, Sch Opt Engn, Beijing 100081, Peoples R China
Funding
US National Science Foundation;
关键词
Cross-modal retrieval; Adversarial learning; Subspace learning;
DOI
10.1016/j.neucom.2019.06.079
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we investigate a principled way to learn a common feature space for data of different modalities (e.g., image and text), so that the similarity between items of different modalities can be measured directly, benefiting the cross-modal retrieval task. To keep the common feature embeddings consistent in both semantics and distribution, we propose a new Adversarial Task-Specific Learning (ATSL) approach that learns distinct embeddings for the two retrieval tasks, i.e., retrieving texts with image queries (I2T) and retrieving images with text queries (T2I). In particular, the proposed ATSL has the following advantages: (a) semantic attributes are leveraged to encourage the learned common feature embeddings of matched pairs to be semantically consistent; (b) adversarial learning alleviates the distribution inconsistency between the common feature embeddings of different modalities; (c) triplet optimization guarantees that, in the learned common space, similar items from different modalities have smaller distances than dissimilar ones; (d) task-specific learning produces common feature embeddings that are better optimized for each retrieval task. ATSL is embedded in a deep neural network and can be trained in an end-to-end manner. We conduct extensive experiments on two popular benchmark datasets, Flickr30K and MS COCO, and achieve R@1 accuracies of 57.1% and 38.4% for I2T and 56.5% and 38.6% for T2I on MS COCO and Flickr30K respectively, which are new state-of-the-art results. (C) 2019 Elsevier B.V. All rights reserved.
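Components (b) and (c) described in the abstract, adversarial distribution alignment and triplet optimization, are standard building blocks in cross-modal retrieval. The following PyTorch sketch shows one common way such losses are implemented; the ModalityDiscriminator architecture, embedding dimension, margin value, and all function names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityDiscriminator(nn.Module):
    """Predicts which modality a common-space embedding came from
    (image -> 0, text -> 1); fooling it aligns the two distributions."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # raw logits, shape (batch,)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss in cosine distance: each matched cross-modal
    pair must be closer than a mismatched pair by at least `margin`."""
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def adversarial_losses(disc, img_emb, txt_emb):
    """Returns (discriminator loss, embedding loss). The discriminator
    learns to separate modalities; the embedding networks are trained
    with flipped labels so the distributions become indistinguishable."""
    logits_img = disc(img_emb.detach())  # detach: don't update encoders here
    logits_txt = disc(txt_emb.detach())
    d_loss = (F.binary_cross_entropy_with_logits(logits_img, torch.zeros_like(logits_img))
              + F.binary_cross_entropy_with_logits(logits_txt, torch.ones_like(logits_txt)))
    g_loss = (F.binary_cross_entropy_with_logits(disc(img_emb), torch.ones_like(logits_img))
              + F.binary_cross_entropy_with_logits(disc(txt_emb), torch.zeros_like(logits_txt)))
    return d_loss, g_loss

# Toy usage with random vectors standing in for encoder outputs:
img = F.normalize(torch.randn(32, 512), dim=1)
txt = F.normalize(torch.randn(32, 512), dim=1)
disc = ModalityDiscriminator(512)
t_loss = triplet_loss(img, txt, txt.roll(1, dims=0))  # rolled batch as negatives
d_loss, g_loss = adversarial_losses(disc, img, txt)
```

In this kind of setup the discriminator and the embedding networks are typically updated in alternation, and the triplet and adversarial terms are combined with task-specific weights; the paper's actual objective and weighting scheme are given in the article itself.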
Pages: 118-128
Number of pages: 11