Domain Invariant Transfer Kernel Learning

Cited by: 175
Authors
Long, Mingsheng [1 ,2 ]
Wang, Jianmin [1 ,3 ]
Sun, Jiaguang [1 ,3 ]
Yu, Philip S. [4 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
US National Science Foundation;
Keywords
Transfer learning; kernel learning; Nystrom method; text mining; image classification; video recognition; REGULARIZATION; FRAMEWORK;
DOI
10.1109/TKDE.2014.2373376
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Domain transfer learning generalizes a learning model across training data and testing data that follow different distributions. A general principle for tackling this problem is to reduce the distribution difference between training and testing data so that the generalization error can be bounded. Existing methods typically model the sample distributions in the input feature space, relying on a nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach that learns a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating the target eigensystem onto the source samples via Mercer's theorem, and select the spectral kernel that minimizes the approximation error to the ground-truth kernel to construct domain-invariant kernel machines. Comprehensive experiments on a large number of text categorization, image classification, and video event recognition datasets verify the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.
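The abstract's core mechanism, extrapolating the target kernel's eigensystem to source samples via the Nystrom method and then re-fitting the eigenspectrum so the induced kernel approximates the ground-truth source kernel, can be illustrated with a short sketch. The NumPy code below is a minimal illustration of that idea, not the authors' implementation: the RBF kernel, the function names, and the clipped least-squares spectrum fit (standing in for TKL's constrained spectral-damping program) are assumptions made for this example.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel between the rows of A and B.
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def transfer_kernel(Xs, Xt, gamma=1.0, n_eig=20):
    # Illustrative sketch of the Nystrom-style extrapolation behind TKL.
    Ktt = rbf_kernel(Xt, Xt, gamma)   # target-domain kernel
    Kst = rbf_kernel(Xs, Xt, gamma)   # cross-domain kernel
    Kss = rbf_kernel(Xs, Xs, gamma)   # "ground truth" source kernel
    vals, vecs = np.linalg.eigh(Ktt)
    order = np.argsort(vals)[::-1][:n_eig]   # leading eigenpairs
    lam = np.maximum(vals[order], 1e-12)     # guard near-zero eigenvalues
    U = vecs[:, order]
    # Nystrom extrapolation: evaluate the target eigenvectors at source points.
    Phi_s = (Kst @ U) / lam
    # Re-fit the spectrum so Phi_s diag(w) Phi_s^T approximates Kss.
    # TKL solves a constrained spectral-damping program; an unconstrained
    # least-squares fit clipped to be nonnegative is used here as a stand-in.
    B = np.stack([np.outer(Phi_s[:, i], Phi_s[:, i]).ravel()
                  for i in range(len(lam))], axis=1)
    w, *_ = np.linalg.lstsq(B, Kss.ravel(), rcond=None)
    w = np.maximum(w, 0.0)
    # Domain-invariant kernel over all samples, source rows first.
    Phi = np.vstack([Phi_s, U])   # on target points the eigensystem is exact
    return (Phi * w) @ Phi.T

# Toy usage: source and target drawn from shifted Gaussians.
Xs = np.random.randn(40, 5) + 1.0
Xt = np.random.randn(30, 5)
K = transfer_kernel(Xs, Xt, gamma=0.5, n_eig=10)
print(K.shape)  # (70, 70): kernel over [Xs; Xt]

The resulting kernel matrix can then be plugged into any standard kernel machine, e.g. an SVM trained on the source labels and applied to the target samples.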
Pages: 1519 - 1532
Page count: 14
Related Papers
50 items in total
  • [21] Transfer Learning: Kernel-Based Domain Adaptation with Distance-Based Penalization
    Prakash, Jainendra
    Ghorai, Mrinmoy
    Sanodiya, Rakesh
    PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023, 2023, 14301 : 189 - 198
  • [22] Domain Invariant Representation Learning with Domain Density Transformations
    Nguyen, A. Tuan
    Tran, Toan
    Gal, Yarin
    Baydin, Atilim Gunes
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [23] Domain-Invariant Feature Learning for Domain Adaptation
    Tu, Ching-Ting
    Lin, Hsiau-Wen
    Lin, Hwei Jen
    Tokuyama, Yoshimasa
    Chu, Chia-Hung
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2023, 37 (03)
  • [24] On Learning Invariant Representations for Domain Adaptation
    Zhao, Han
    des Combes, Remi Tachet
    Zhang, Kun
    Gordon, Geoffrey J.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [25] Invariant kernel functions for pattern analysis and machine learning
    Haasdonk, Bernard
    Burkhardt, Hans
    MACHINE LEARNING, 2007, 68 (01) : 35 - 61
  • [28] Multitask transfer learning with kernel representation
    Zhang, Yulu
    Ying, Shihui
    Wen, Zhijie
NEURAL COMPUTING & APPLICATIONS, 2022, 34 (15): 12709 - 12721
  • [29] Kernel Fisher Dictionary Transfer Learning
    Shi, Linrui
    Zhang, Zheng
    Fan, Zizhu
    Xi, Chao
    Li, Zhengming
    Wu, Gaochang
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2023, 17 (08)
  • [30] Kernel Extreme Learning Machine with Discriminative Transfer Feature and Instance Selection for Unsupervised Domain Adaptation
    Zang, Shaofei
    Li, Huimin
    Lu, Nannan
    Ma, Chao
    Gao, Jiwei
    Ma, Jianwei
    Lv, Jinfeng
    NEURAL PROCESSING LETTERS, 2024, 56 (04)