Visual domain adaptation via transfer feature learning

Cited by: 0
Authors
Jafar Tahmoresnezhad
Sattar Hashemi
Institution
Shiraz University, CSE and IT Department
Source
Knowledge and Information Systems | 2017, Vol. 50
Keywords
Transfer learning; Unsupervised domain adaptation; Domain invariant clustering; Domain shift; Invariant feature representation;
DOI: not available
Abstract
One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset when no labeled training data are available. Transfer learning and domain adaptation are two prominent solutions that tackle this challenge by exploiting available datasets, even ones with significantly different distributions and properties, and transferring knowledge from a related domain to the target domain. The main difference between the two lies in their primary assumption about changes in the marginal and conditional distributions: transfer learning focuses on problems with the same marginal distribution but different conditional distributions, while domain adaptation deals with the opposite setting. Most prior work has exploited these two learning strategies separately for the domain shift problem, where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift problems in which the distribution difference is significantly large, as is common in vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the difference between the joint marginal and conditional distributions across domains in an unsupervised manner, where no labels are available in the test set. Moreover, VDA constructs condensed domain-invariant clusters in the embedding representation to separate the various classes alongside the domain transfer. We employ iterative refinement of pseudo target labels to converge to the final solution. Combining this iterative procedure with a novel optimization problem yields a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets of varying difficulty verify that VDA significantly outperforms state-of-the-art methods on image classification problems.
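The iterative pseudo-label refinement described in the abstract can be illustrated with a minimal sketch. This is not the authors' VDA optimization (which learns a joint-distribution-matching projection with domain-invariant clustering); it only shows the generic loop on synthetic data: label the target with a source-trained model, then re-estimate the model from source labels plus target pseudo-labels, and repeat. All names and data below are hypothetical.

```python
# Hedged sketch of iterative pseudo-label refinement for unsupervised
# domain adaptation. NOT the VDA algorithm itself: class-mean
# re-estimation stands in for the paper's projection learning.
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=100):
    """Two Gaussian classes; `shift` models the domain shift."""
    X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(n, 2)) + shift
    X1 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(n, 2)) + shift
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

Xs, ys = make_domain(shift=np.array([0.0, 0.0]))   # labeled source
Xt, yt = make_domain(shift=np.array([1.5, -1.0]))  # unlabeled target

def nearest_mean_predict(X, means):
    # Assign each sample to the closest class mean.
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Initial classifier: class means estimated from the source only.
means = np.vstack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

for _ in range(5):
    # 1) Pseudo-label the target with the current model.
    yt_pseudo = nearest_mean_predict(Xt, means)
    # 2) Refine: re-estimate class means from source labels plus
    #    target pseudo-labels (a crude stand-in for matching the
    #    conditional distributions across domains).
    means = np.vstack([
        np.vstack([Xs[ys == c], Xt[yt_pseudo == c]]).mean(axis=0)
        for c in (0, 1)
    ])

accuracy = (nearest_mean_predict(Xt, means) == yt).mean()
```

On this synthetic shift the refined means move toward the target classes, so target accuracy ends up well above a source-only nearest-mean classifier; the real method replaces step 2 with its joint marginal/conditional distribution reduction and cluster-condensing objective.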
Pages: 585–605 (20 pages)