Domain Adaptation via Transfer Component Analysis

Cited by: 3307
Authors
Pan, Sinno Jialin [1 ]
Tsang, Ivor W. [2 ]
Kwok, James T. [3 ]
Yang, Qiang [3 ]
Affiliations
[1] Inst Infocomm Res, Singapore 138632, Singapore
[2] Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2011, Vol. 22, No. 2
Keywords
Dimensionality reduction; domain adaptation; Hilbert space embedding of distributions; transfer learning; KERNEL; FRAMEWORK;
DOI
10.1109/TNN.2010.2091281
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA to a semisupervised learning setting, which encodes label information into transfer component learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally leads to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
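The construction described in the abstract, learning a projection that shrinks the maximum mean discrepancy (MMD) between the projected source and target data while preserving data variance, admits a closed form in the published method: the transfer components are the leading eigenvectors of (K L K + mu I)^{-1} K H K, where K is the kernel matrix over the combined data, L encodes the empirical MMD, and H is the centering matrix. Below is a minimal NumPy sketch of the unsupervised variant under that formulation; it assumes an RBF kernel, and the names tca_fit, mu, gamma, and dim are illustrative rather than the paper's notation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def tca_fit(Xs, Xt, dim=2, mu=1.0, gamma=1.0):
    """Unsupervised TCA sketch: embed source/target samples into a shared subspace."""
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    X = np.vstack([Xs, Xt])

    K = rbf_kernel(X, X, gamma)  # kernel matrix over the combined data

    # MMD coefficient matrix L: tr(K @ L) equals the squared empirical MMD
    # between the source and target samples in the RKHS.
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    L = np.outer(e, e)

    # Centering matrix H, used to measure the variance to be preserved.
    H = np.eye(n) - np.ones((n, n)) / n

    # Transfer components: leading eigenvectors of (K L K + mu I)^{-1} K H K,
    # trading off a small cross-domain MMD against preserved data variance.
    A = K @ L @ K + mu * np.eye(n)
    B = K @ H @ K
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    top = np.argsort(-vals.real)[:dim]
    W = vecs[:, top].real  # n x dim projection matrix

    Z = K @ W  # embedded representations of all n points
    return Z[:ns], Z[ns:]
```

A standard classifier or regressor trained on the embedded source data with the source labels can then be applied directly to the embedded target data; a new point x can be embedded out-of-sample via its kernel row k(x, X) times W. The semisupervised extension additionally ties the components to the available labels, which this sketch does not attempt.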
Pages: 199-210
Page count: 12