Learning an Invariant Hilbert Space for Domain Adaptation

Cited by: 66
Authors
Herath, Samitha [1 ,2 ]
Harandi, Mehrtash [1 ,2 ]
Porikli, Fatih [1 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] CSIRO, DATA61, Canberra, ACT, Australia
Source
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) | 2017
Keywords
RECOGNITION; KERNEL;
DOI
10.1109/CVPR.2017.421
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a learning scheme to construct a Hilbert space (i.e., a vector space along with its inner product) to address both unsupervised and semi-supervised domain adaptation problems. This is achieved by learning projections from each domain to a latent space, along with the Mahalanobis metric of that latent space, so as to simultaneously minimize a notion of domain variance while maximizing a measure of discriminatory power. In particular, we make use of Riemannian optimization techniques to match statistical properties (e.g., first- and second-order statistics) between samples projected into the latent space from different domains. When class labels are available, we further encourage samples sharing the same label to form more compact clusters while pulling apart samples from different classes. We extensively evaluate and contrast our proposal against state-of-the-art methods for the task of visual domain adaptation using both handcrafted and deep-net features. Our experiments show that even with a simple nearest-neighbor classifier, the proposed method can outperform several state-of-the-art methods that benefit from more involved classification schemes.
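The core idea in the abstract is matching first- and second-order statistics of the two domains after projecting them into a shared latent space. The sketch below is not the authors' exact objective (which is optimized on a Riemannian manifold together with a learned Mahalanobis metric and discriminative terms); it is a minimal NumPy illustration of the moment-matching component, with hypothetical projection matrices `Ws` and `Wt` standing in for the learned domain projections.

```python
import numpy as np

def moment_matching_loss(Xs, Xt, Ws, Wt):
    """Discrepancy between first- and second-order statistics of two
    domains after projection into a shared latent space (illustrative).

    Xs, Xt : (n_s, d_s) and (n_t, d_t) source/target samples.
    Ws, Wt : (d_s, p) and (d_t, p) domain-specific projections.
    """
    Zs, Zt = Xs @ Ws, Xt @ Wt                  # latent-space embeddings
    mu_s, mu_t = Zs.mean(axis=0), Zt.mean(axis=0)
    Cs = np.cov(Zs, rowvar=False)              # second-order statistics
    Ct = np.cov(Zt, rowvar=False)
    first = np.sum((mu_s - mu_t) ** 2)         # mean discrepancy
    second = np.sum((Cs - Ct) ** 2)            # covariance discrepancy (Frobenius)
    return first + second

# Toy check: identical domains under identity projections give zero loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
W = np.eye(5)
print(moment_matching_loss(X, X, W, W))
```

In the paper this quantity (the "domain variance") is minimized jointly with a label-driven term that compacts same-class clusters and separates different classes, with the projections constrained and optimized via Riemannian techniques rather than plain gradient descent.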
Pages: 3956-3965
Page count: 10
Related Papers
50 records in total
  • [1] Domain Invariant and Class Discriminative Feature Learning for Visual Domain Adaptation
    Li, Shuang
    Song, Shiji
    Huang, Gao
    Ding, Zhengming
    Wu, Cheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (09) : 4260 - 4273
  • [2] Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?
    Stojanov, Petar
    Li, Zijian
    Gong, Mingming
    Cai, Ruichu
    Carbonell, Jaime G.
    Zhang, Kun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Discriminative Invariant Alignment for Unsupervised Domain Adaptation
    Lu, Yuwu
    Li, Desheng
    Wang, Wenjing
    Lai, Zhihui
    Zhou, Jie
    Li, Xuelong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 1871 - 1882
  • [4] Domain Invariant and Agnostic Adaptation
    Chen, Sentao
    Wu, Hanrui
    Liu, Cheng
    KNOWLEDGE-BASED SYSTEMS, 2021, 227
  • [5] CentriForce: Multiple-Domain Adaptation for Domain-Invariant Speaker Representation Learning
    Wei, Yuheng
    Du, Junzhao
    Liu, Hui
    Zhang, Zhipeng
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 807 - 811
  • [6] Domain Adaptation by Joint Distribution Invariant Projections
    Chen, Sentao
    Harandi, Mehrtash
    Jin, Xiaona
    Yang, Xiaowei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 8264 - 8277
  • [7] Guide Subspace Learning for Unsupervised Domain Adaptation
    Zhang, Lei
    Fu, Jingru
    Wang, Shanshan
    Zhang, David
    Dong, Zhaoyang
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (09) : 3374 - 3388
  • [8] Domain adaptation based on domain-invariant and class-distinguishable feature learning using multiple adversarial networks
    Fan, Cangning
    Liu, Peng
    Xiao, Ting
    Zhao, Wei
    Tang, Xianglong
    NEUROCOMPUTING, 2020, 411 : 178 - 192
  • [9] Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation
    Li, Jinfeng
    Liu, Weifeng
    Zhou, Yicong
    Yu, Jun
    Tao, Dapeng
    Xu, Changsheng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (03)
  • [10] Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
    Chen, Jin
    Wu, Xinxiao
    Duan, Lixin
    Gao, Shenghua
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (02) : 539 - 553