Learning transferable and discriminative features for unsupervised domain adaptation

Cited by: 6
Authors
Du, Yuntao [1 ]
Zhang, Ruiting [1 ]
Zhang, Xiaowen [1 ]
Yao, Yirong [1 ]
Lu, Hengyang [1 ,2 ]
Wang, Chongjun [1 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
[2] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transfer learning; unsupervised domain adaptation; discriminative feature; regularization; framework; alignment; kernel
DOI
10.3233/IDA-215813
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although machine learning has achieved remarkable progress, it remains very difficult to induce a supervised classifier without any labeled data. Unsupervised domain adaptation overcomes this challenge by transferring knowledge from a labeled source domain to an unlabeled target domain. Transferability and discriminability are two key criteria characterizing the quality of feature representations for successful domain adaptation. In this paper, a novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TFDF) is proposed to optimize these two objectives simultaneously. On the one hand, distribution alignment is performed to reduce domain discrepancy and learn more transferable representations. Instead of adopting Maximum Mean Discrepancy (MMD), which captures only first-order statistical information to measure distribution discrepancy, we adopt a recently proposed statistic called Maximum Mean and Covariance Discrepancy (MMCD), which captures both first-order and second-order statistical information in the reproducing kernel Hilbert space (RKHS). On the other hand, we propose to explore local discriminative information via manifold regularization and global discriminative information via minimizing the proposed class confusion objective, thereby learning more discriminative features. We integrate these two objectives into the Structural Risk Minimization (SRM) framework and learn a domain-invariant classifier. Comprehensive experiments are conducted on five real-world datasets, and the results verify the effectiveness of the proposed method.
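The abstract contrasts MMD, which matches only first-order statistics, with MMCD, which additionally matches second-order statistics in an RKHS. As a rough illustration of that distinction only (not the authors' implementation, which operates in the RKHS), the sketch below computes a linear-kernel analogue: the squared distance between domain means plus the squared Frobenius distance between domain covariances. The function name mean_covariance_discrepancy, the trade-off weight lam, and the linear-kernel simplification are assumptions introduced here for illustration.

```python
# Minimal sketch (assumed, illustrative): a linear-kernel simplification of a
# "mean + covariance" discrepancy between two domains. MMCD measures analogous
# first- and second-order statistics in an RKHS; here raw features are used so
# the estimators are easy to verify.
import numpy as np

def mean_covariance_discrepancy(Xs: np.ndarray, Xt: np.ndarray, lam: float = 1.0) -> float:
    """Squared mean discrepancy plus a weighted squared covariance discrepancy.

    Xs: (n_s, d) source features; Xt: (n_t, d) target features.
    lam: hypothetical trade-off between first- and second-order terms.
    """
    mu_s, mu_t = Xs.mean(axis=0), Xt.mean(axis=0)
    mean_term = float(np.sum((mu_s - mu_t) ** 2))        # first-order statistics

    Cs = np.cov(Xs, rowvar=False, bias=True)             # (d, d) source covariance
    Ct = np.cov(Xt, rowvar=False, bias=True)             # (d, d) target covariance
    cov_term = float(np.sum((Cs - Ct) ** 2))             # squared Frobenius norm

    return mean_term + lam * cov_term

# Toy usage: two synthetic "domains" with shifted mean and scaled covariance.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 16))
Xt = rng.normal(0.5, 1.5, size=(300, 16))
print(mean_covariance_discrepancy(Xs, Xt))
```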
Pages: 407-425
Number of pages: 19