Domain Invariant Transfer Kernel Learning

Cited by: 175
Authors
Long, Mingsheng [1 ,2 ]
Wang, Jianmin [1 ,3 ]
Sun, Jiaguang [1 ,3 ]
Yu, Philip S. [4 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci, Beijing 100084, Peoples R China
[3] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, Beijing 100084, Peoples R China
[4] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Science Foundation (USA);
Keywords
Transfer learning; kernel learning; Nyström method; text mining; image classification; video recognition; REGULARIZATION; FRAMEWORK;
DOI
10.1109/TKDE.2014.2373376
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain transfer learning generalizes a learning model across training data and testing data that follow different distributions. A general principle for tackling this problem is to reduce the distribution difference between training data and testing data so that the generalization error can be bounded. Current methods typically model the sample distributions in the input feature space, which depends on a nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach that learns a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating the target eigensystem onto the source samples via Mercer's theorem. The spectral kernel minimizing the approximation error to the ground-truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.
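The core construction the abstract describes, extrapolating the target kernel's eigensystem onto source samples with the Nyström method and rebuilding a kernel from it, can be sketched roughly as below. This is an illustrative sketch only: the function names and the RBF kernel choice are assumptions, and it omits TKL's eigenspectrum-relaxation step that selects the spectral kernel minimizing the approximation error to the ground-truth kernel.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def nystrom_extrapolated_kernel(Xs, Xt, gamma=1.0):
    """Build a kernel over [source; target] samples from the target eigensystem.

    The target kernel is eigendecomposed, and its eigenvectors are
    extended to the source samples via the Nystrom method, so the
    resulting kernel shares the target domain's spectral structure.
    """
    # Eigendecompose the target-domain kernel matrix
    Ktt = rbf_kernel(Xt, Xt, gamma)
    lam, V = np.linalg.eigh(Ktt)
    order = np.argsort(lam)[::-1]          # sort eigenvalues descending
    lam, V = lam[order], V[:, order]
    keep = lam > 1e-10                     # drop numerically zero modes
    lam, V = lam[keep], V[:, keep]

    # Nystrom extension: evaluate target eigenvectors on source samples
    Kst = rbf_kernel(Xs, Xt, gamma)
    Vs = (Kst @ V) / lam                   # shape (n_source, rank)

    # Stack eigenfunctions over all samples and rebuild the kernel
    Phi = np.vstack([Vs, V])               # rows: [source; target]
    return Phi @ np.diag(lam) @ Phi.T
```

The returned matrix is symmetric and, on the target block, reproduces the original target kernel; in the full TKL formulation the eigenvalues would additionally be relaxed and re-fit to the source kernel rather than reused verbatim.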
Pages: 1519-1532
Page count: 14
Related Papers
50 records in total
  • [31] Kernel Alignment for Unsupervised Transfer Learning
    Redko, Ievgen
    Bennani, Younes
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 525 - 530
  • [32] Kernel methods for transfer learning to avoid negative transfer
    Shao, Hao
    INTERNATIONAL JOURNAL OF COMPUTING SCIENCE AND MATHEMATICS, 2016, 7 (02) : 190 - 199
  • [34] Learning Domain Invariant Word Representations for Parsing Domain Adaptation
    Qiao, Xiuming
    Zhang, Yue
    Zhao, Tiejun
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I, 2019, 11838 : 801 - 813
  • [35] Invariant models for causal transfer learning
    Rojas-Carulla, Mateo
    Schölkopf, Bernhard
    Turner, Richard
    Peters, Jonas
    Journal of Machine Learning Research, 2018, 19
  • [37] Optimal kernel choice for domain adaption learning
    Dong, Le
    Feng, Ning
    Quan, Pinjie
    Kong, Gaipeng
    Chen, Xiuyuan
    Zhang, Qianni
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2016, 51 : 163 - 170
  • [38] Hierarchical Invariant Learning for Domain Generalization Recommendation
    Zhang, Zeyu
    Gao, Heyang
    Yang, Hao
    Chen, Xu
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3470 - 3479
  • [39] Learning an Invariant Hilbert Space for Domain Adaptation
    Herath, Samitha
    Harandi, Mehrtash
    Porikli, Fatih
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3956 - 3965
  • [40] Gradient-aware domain-invariant learning for domain generalization
    Hou, Feng
    Zhang, Yao
    Liu, Yang
    Yuan, Jin
    Zhong, Cheng
    Zhang, Yang
    Shi, Zhongchao
    Fan, Jianping
    He, Zhiqiang
    MULTIMEDIA SYSTEMS, 2025, 31 (01)