Disentanglement then reconstruction: Unsupervised domain adaptation by twice distribution alignments

Cited by: 11
Authors
Zhou, Lihua [1 ]
Ye, Mao [1 ]
Li, Xinpeng [1 ]
Zhu, Ce [2 ]
Liu, Yiguang [3 ]
Li, Xue [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[3] Sichuan Univ, Sch Comp Sci, Vis & Image Proc Lab, Chengdu 610065, Peoples R China
[4] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
Funding
National Natural Science Foundation of China;
Keywords
Unsupervised domain adaptation; Disentanglement; Prototypes; Compact features;
DOI
10.1016/j.eswa.2023.121498
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Traditional methods usually achieve adaptation by aligning the distributions of the two domains once. We propose to align the distributions twice through a disentanglement-and-reconstruction process, named DTR (Disentanglement Then Reconstruction). Specifically, a feature extraction network shared by the source and target domains produces the original extracted features, from which domain-invariant features and domain-specific features are disentangled. The domain distributions are explicitly aligned when the domain-invariant features are disentangled. Based on the disentangled features, class prototypes and domain prototypes can be estimated. A reconstructor is then trained on the disentangled features. With this reconstructor, prototypes in the original feature space can be constructed from the corresponding class prototype and domain prototype. The extracted features are forced to be close to the corresponding constructed prototypes; in this process, the distributions of the two domains are implicitly aligned a second time. Experimental results on several public datasets confirm the effectiveness of our method.
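The prototype-construction step described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the random linear map `W` stands in for the paper's trained reconstructor network, and the toy dimensions, function names, and synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_prototypes(inv_feats, labels, num_classes):
    """Mean domain-invariant feature per class."""
    return np.stack([inv_feats[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def domain_prototype(spec_feats):
    """Mean domain-specific feature for one domain."""
    return spec_feats.mean(axis=0)

def reconstruct(class_protos, dom_proto, W):
    """Map [class prototype ; domain prototype] back to the original
    feature space with a stand-in linear reconstructor W."""
    dom = np.broadcast_to(dom_proto, (class_protos.shape[0], dom_proto.size))
    return np.concatenate([class_protos, dom], axis=1) @ W

def pull_loss(orig_feats, labels, recon_protos):
    """Mean squared distance of each original feature to the
    reconstructed prototype of its class (the implicit alignment term)."""
    d = orig_feats - recon_protos[labels]
    return float((d * d).sum(axis=1).mean())

# Toy data: 6 samples, 2 classes; invariant dim 3, specific dim 2, original dim 4.
inv = rng.normal(size=(6, 3))      # disentangled domain-invariant features
spec = rng.normal(size=(6, 2))     # disentangled domain-specific features
orig = rng.normal(size=(6, 4))     # original extracted features
labels = np.array([0, 0, 0, 1, 1, 1])

W = rng.normal(size=(5, 4))            # stand-in reconstructor weights
cp = class_prototypes(inv, labels, 2)  # (2, 3)
dp = domain_prototype(spec)            # (2,)
rp = reconstruct(cp, dp, W)            # (2, 4): prototypes in original space
loss = pull_loss(orig, labels, rp)     # non-negative alignment objective
```

Minimizing `loss` with respect to the feature extractor is what pulls the extracted features toward the constructed prototypes, realizing the second (implicit) alignment.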
Pages: 11
Related Papers
50 records in total
  • [31] Deep Unsupervised Convolutional Domain Adaptation
    Zhuo, Junbao
    Wang, Shuhui
    Zhang, Weigang
    Huang, Qingming
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 261 - 269
  • [32] Unsupervised Domain Adaptation with Regularized Domain Instance Denoising
    Csurka, Gabriela
    Chidlovskii, Boris
    Clinchant, Stephane
    Michel, Sophia
    COMPUTER VISION - ECCV 2016 WORKSHOPS, PT III, 2016, 9915 : 458 - 466
  • [33] Unsupervised domain adaptation for speech recognition with unsupervised error correction
    Mai, Long
    Carson-Berndsen, Julie
    INTERSPEECH 2022, 2022, : 5120 - 5124
  • [34] Unsupervised Domain Adaptation Based on Pseudo-Label Confidence
    Fu, Tingting
    Li, Ying
    IEEE ACCESS, 2021, 9 : 87049 - 87057
  • [35] Unsupervised domain adaptation with post-adaptation labeled domain performance preservation
    Badr, Haidi
    Wanas, Nayer
    Fayek, Magda
    MACHINE LEARNING WITH APPLICATIONS, 2022, 10
  • [36] Multi-View Prototypical Transport for Unsupervised Domain Adaptation
    Lee, Sunhyeok
    Kim, Dae-Shik
    IEEE ACCESS, 2025, 13 : 8482 - 8494
  • [37] Unsupervised Domain Adaptation for Object Detection Using Distribution Matching in Various Feature Level
    Park, Hyoungwoo
    Ju, Minjeong
    Moon, Sangkeun
    Yoo, Chang D.
    DIGITAL FORENSICS AND WATERMARKING, IWDW 2018, 2019, 11378 : 363 - 372
  • [38] Class-Aware Distribution Alignment based Unsupervised Domain Adaptation for Speaker Verification
    Hu, Hang-Rui
    Song, Yan
    Dai, Li-Rong
    McLoughlin, Ian
    Liu, Lin
    INTERSPEECH 2022, 2022, : 3689 - 3693
  • [39] Joint Feature and Labeling Function Adaptation for Unsupervised Domain Adaptation
    Cui, Fengli
    Chen, Yinghao
    Du, Yuntao
    Cao, Yikang
    Wang, Chongjun
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT I, 2022, 13280 : 432 - 446
  • [40] Cross-Domain Contrastive Learning for Unsupervised Domain Adaptation
    Wang, Rui
    Wu, Zuxuan
    Weng, Zejia
    Chen, Jingjing
    Qi, Guo-Jun
    Jiang, Yu-Gang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 1665 - 1673