Domain Adaptation Transfer Learning by Kernel Representation Adaptation

Cited by: 1
Authors
Chen, Xiaoyi [1 ]
Lengelle, Regis [1 ]
Affiliations
[1] Univ Technol Troyes, Charles Delaunay Inst, Res Team LM2S, CNRS, ROSAS Dept, UMR 6281, 12 Rue Marie Curie, CS 42060, F-10004 Troyes, France
DOI
10.1007/978-3-319-93647-5_3
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain adaptation, where no labeled target data is available, is a challenging task. To solve this problem, we first propose a new SVM-based approach with a supplementary Maximum Mean Discrepancy (MMD)-like constraint. With this heuristic, source and target data are projected onto a common subspace of a Reproducing Kernel Hilbert Space (RKHS) where both data distributions are expected to become similar. Therefore, a classifier trained on source data might perform well on target data if the conditional probabilities of labels are similar for source and target data, which is the main assumption of this paper. We demonstrate that adding this constraint does not change the quadratic nature of the optimization problem, so we can use common quadratic optimization tools. Second, building on the same idea that rendering source and target data similar might ensure efficient transfer learning, and under the same assumption, a Kernel Principal Component Analysis (KPCA)-based transfer learning method is proposed. Unlike the first heuristic, this second method also aligns higher-order moments in the RKHS, which leads to better performance. Here again, we select MMD as the similarity measure. A linear transformation is then applied to further improve the alignment between source and target data. We finally compare both methods with other transfer learning methods from the literature to show their efficiency on synthetic and real datasets.
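Both methods in the abstract rely on MMD as the similarity measure between source and target distributions in the RKHS. As an illustration only (not the authors' implementation), the following minimal NumPy sketch computes the standard biased empirical estimate of squared MMD with a Gaussian kernel; the `gamma` value and the synthetic source/target samples are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of squared MMD between samples X and Y in the RKHS."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

# Purely illustrative source and (mean-shifted) target samples
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 2))
target = rng.normal(1.5, 1.0, size=(100, 2))

print(mmd2(source, target, gamma=0.5))  # large: the distributions differ
print(mmd2(source, source, gamma=0.5))  # zero for identical samples
```

Driving this quantity toward zero, either as a constraint in the SVM training problem or after a KPCA projection, is what aligns the two distributions in the shared subspace.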
Pages: 45-61
Page count: 17
Related Papers
50 records
  • [41] Domain Adaptation with Representation Learning and Nonlinear Relation for Time Series
    Hussein, Amir
    Hajj, Hazem
    ACM Transactions on Internet of Things, 2022, 3 (02):
  • [42] COD: Learning Conditional Invariant Representation for Domain Adaptation Regression
    Yang, Hao-Ran
    Ren, Chuan-Xian
    Luo, You-Wei
    COMPUTER VISION - ECCV 2024, PT LXXVI, 2025, 15134 : 108 - 125
  • [43] Representation learning via an integrated autoencoder for unsupervised domain adaptation
    Zhu, Yi
    Wu, Xindong
    Qiang, Jipeng
    Yuan, Yunhao
    Li, Yun
    FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (05)
  • [44] Progressive learning with style transfer for distant domain adaptation
    Xiang, Suncheng
    Fu, Yuzhuo
    Liu, Ting
    IET IMAGE PROCESSING, 2020, 14 (14) : 3527 - 3535
  • [45] Visual domain adaptation via transfer feature learning
    Jafar Tahmoresnezhad
    Sattar Hashemi
    Knowledge and Information Systems, 2017, 50 : 585 - 605
  • [46] Deep autoencoder based domain adaptation for transfer learning
    Dev, Krishna
    Ashraf, Zubair
    Muhuri, Pranab K.
    Kumar, Sandeep
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (16) : 22379 - 22405
  • [48] Learning and Domain Adaptation
    Mansour, Yishay
    DISCOVERY SCIENCE, PROCEEDINGS, 2009, 5808 : 32 - 34
  • [49] Learning and Domain Adaptation
    Mansour, Yishay
    ALGORITHMIC LEARNING THEORY, PROCEEDINGS, 2009, 5809 : 4 - 6
  • [50] Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation
    Wang, Wei
    Wang, Hao
    Zhang, Chen
    Gao, Yang
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2732 - 2738