Learning an Invariant Hilbert Space for Domain Adaptation

Cited by: 66
Authors
Herath, Samitha [1 ,2 ]
Harandi, Mehrtash [1 ,2 ]
Porikli, Fatih [1 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] CSIRO, DATA61, Canberra, ACT, Australia
Keywords
RECOGNITION; KERNEL
DOI
10.1109/CVPR.2017.421
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper introduces a learning scheme to construct a Hilbert space (i.e., a vector space along with its inner product) to address both unsupervised and semi-supervised domain adaptation problems. This is achieved by learning projections from each domain to a latent space, along with the Mahalanobis metric of that latent space, so as to simultaneously minimize a notion of domain variance while maximizing a measure of discriminatory power. In particular, we make use of Riemannian optimization techniques to match statistical properties (e.g., first- and second-order statistics) between samples projected into the latent space from different domains. When class labels are available, we further encourage samples sharing the same label to form more compact clusters while pulling apart samples from different classes. We extensively evaluate and contrast our proposal against state-of-the-art methods for the task of visual domain adaptation using both handcrafted and deep-net features. Our experiments show that even with a simple nearest-neighbor classifier, the proposed method can outperform several state-of-the-art methods that rely on more involved classification schemes.
Pages: 3956 - 3965
Number of pages: 10
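The abstract describes the core mechanism: project each domain into a shared latent space and match first- and second-order statistics of the projected samples while keeping the representation discriminative. The sketch below illustrates only the statistics-matching part, assuming toy NumPy data, unconstrained projection matrices W_s and W_t, and crude numerical-gradient updates; it is not the authors' implementation, which also learns a Mahalanobis metric, optimizes structured projections with Riemannian techniques, and adds label-based discriminative terms.

```python
# Minimal sketch (an assumption, not the paper's code): align first- and
# second-order statistics of two domains after projecting them into a
# shared latent space with learnable per-domain projections.
import numpy as np

rng = np.random.default_rng(0)
d_s, d_t, latent_dim = 20, 30, 5                 # source/target dims, latent dim
X_s = rng.normal(size=(100, d_s))                # toy source features
X_t = rng.normal(loc=0.5, size=(120, d_t))       # toy target features (shifted)

W_s = 0.1 * rng.normal(size=(d_s, latent_dim))   # source projection (learned)
W_t = 0.1 * rng.normal(size=(d_t, latent_dim))   # target projection (learned)

def stats_mismatch(Z_s, Z_t):
    """Squared gap between domain means and covariances in the latent space."""
    mean_gap = np.sum((Z_s.mean(axis=0) - Z_t.mean(axis=0)) ** 2)
    cov_gap = np.sum((np.cov(Z_s, rowvar=False) - np.cov(Z_t, rowvar=False)) ** 2)
    return mean_gap + cov_gap

def loss(W_s, W_t):
    return stats_mismatch(X_s @ W_s, X_t @ W_t)

def num_grad(f, W, eps=1e-4):
    """Central-difference gradient; stands in for the paper's Riemannian updates."""
    G = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        E = np.zeros_like(W)
        E[idx] = eps
        G[idx] = (f(W + E) - f(W - E)) / (2 * eps)
    return G

for step in range(150):                          # plain gradient descent on both maps
    W_s -= 0.05 * num_grad(lambda W: loss(W, W_t), W_s)
    W_t -= 0.05 * num_grad(lambda W: loss(W_s, W), W_t)

print("latent-space statistic mismatch:", loss(W_s, W_t))
```

With unconstrained projections this objective can be driven toward zero trivially by shrinking both maps, which is one reason the paper constrains the projections (hence the Riemannian optimization) and, when labels are available, adds terms that keep same-class samples compact and different-class samples apart.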
Related papers (50 in total)
  • [21] ParaUDA: Invariant Feature Learning With Auxiliary Synthetic Samples for Unsupervised Domain Adaptation
    Zhang, Wenwen
    Wang, Jiangong
    Wang, Yutong
    Wang, Fei-Yue
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 20217 - 20229
  • [22] Domain adaptation using optimal transport for invariant learning using histopathology datasets
    Falahkheirkhah, Kianoush
    Lu, Alex
    Alvarez-Melis, David
    Huynh, Grace
    MEDICAL IMAGING WITH DEEP LEARNING, VOL 227, 2023, 227 : 1765 - 1782
  • [23] Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation
    Yue, Zhongqi
    Zhang, Hanwang
    Sun, Qianru
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [24] DIVIDE: Learning a Domain-Invariant Geometric Space for Depth Estimation
    Shim, Dongseok
    Kim, H. Jin
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (05) : 4663 - 4670
  • [25] Domain Invariant and Class Discriminative Heterogeneous Domain Adaptation
    Wang, Yifan
    Huang, Junchu
    Shang, Junyuan
    Niu, Chang
    Zhou, Zhiheng
    2018 IEEE 3RD INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION SYSTEMS (ICCIS), 2018, : 227 - 231
  • [26] Learning intra-domain style-invariant representation for unsupervised domain adaptation of semantic segmentation
    Li, Zongyao
    Togo, Ren
    Ogawa, Takahiro
    Haseyama, Miki
    PATTERN RECOGNITION, 2022, 132
  • [27] Domain-invariant representation learning using an unsupervised domain adversarial adaptation deep neural network
    Jia, Xibin
    Jin, Ya
    Su, Xing
    Hu, Yongli
    NEUROCOMPUTING, 2019, 355 : 209 - 220
  • [28] Learning emotion-discriminative and domain-invariant features for domain adaptation in speech emotion recognition
    Mao, Qirong
    Xu, Guopeng
    Xue, Wentao
    Gou, Jianping
    Zhan, Yongzhao
    SPEECH COMMUNICATION, 2017, 93 : 1 - 10
  • [29] Conley index of nontrivial invariant sets in a Hilbert space
    Kuznetsov, Yu. O.
    DIFFERENTIAL EQUATIONS, 2008, 44 (02) : 192 - 202