Representation learning via an integrated autoencoder for unsupervised domain adaptation

Cited by: 15
Authors
Zhu, Yi [1 ,2 ,3 ]
Wu, Xindong [2 ,3 ]
Qiang, Jipeng [1 ]
Yuan, Yunhao [1 ]
Li, Yun [1 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ China, Hefei 230009, Peoples R China
[3] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230601, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
unsupervised domain adaptation; representation learning; marginalized autoencoder; convolutional autoencoder; sparse autoencoder; NETWORK;
DOI
10.1007/s11704-022-1349-5
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
The purpose of unsupervised domain adaptation is to use knowledge from a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck in unsupervised domain adaptation is how to obtain higher-level, more abstract feature representations between the source and target domains that can bridge the chasm of domain discrepancy. Recently, deep learning methods based on autoencoders have achieved sound performance in representation learning, and many dual or serial autoencoder-based methods take different characteristics of the data into consideration to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder-based methods simply connect the features generated by different autoencoders in series, which poses challenges for discriminative representation learning and fails to find the real cross-domain features. To address this problem, we propose a novel representation learning method based on an integrated autoencoder for unsupervised domain adaptation, called IAUDA. To capture the inter- and inner-domain features of the raw data, two different autoencoders, namely a marginalized autoencoder with maximum mean discrepancy (mAEMMD) and a convolutional autoencoder (CAE), are proposed to learn different feature representations. After higher-level features are obtained by these two autoencoders, a sparse autoencoder is introduced to compact these inter- and inner-domain representations. In addition, a whitening layer is embedded to process features before the mAEMMD, reducing redundant features within a local area. Experimental results demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods.
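The record gives no implementation details for the maximum mean discrepancy term used in mAEMMD. As a rough illustration only, the sketch below computes a biased estimate of squared MMD between source and target features with an RBF kernel; the kernel choice, the `gamma` value, and the function names are assumptions for this example, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    # pairwise squared Euclidean distances via the expansion ||x-y||^2
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy.

    Zero (up to float error) when both samples come from the same
    distribution; grows as the two distributions move apart.
    """
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

In an adaptation setting such a term would be added to the reconstruction loss so that the learned representations of source and target batches are pulled toward a common distribution.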
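The abstract's whitening layer decorrelates features to reduce redundancy. The paper does not specify its form; one common batch-level realization is ZCA whitening, sketched below as a generic illustration (the function name and the `eps` smoothing constant are assumptions, not the paper's layer).

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten rows of X: zero mean, approximately identity covariance."""
    Xc = X - X.mean(axis=0)                      # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)              # sample covariance
    vals, vecs = np.linalg.eigh(cov)             # symmetric eigendecomposition
    # ZCA transform: rotate, rescale by 1/sqrt(eigenvalue), rotate back,
    # so whitened features stay close to the original coordinate axes
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W
```

Unlike PCA whitening, the final rotation back (`vecs.T`) keeps the whitened output maximally similar to the input features, which is why ZCA is often preferred when the features feed a subsequent encoder.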
Pages: 13
Related papers
(50 in total)
  • [1] Representation learning via serial robust autoencoder for domain adaptation
    Yang, Shuai
    Zhang, Yuhong
    Wang, Hao
    Li, Peipei
    Hu, Xuegang
    EXPERT SYSTEMS WITH APPLICATIONS, 2020, 160
  • [2] Unsupervised domain adaptation via representation learning and adaptive classifier learning
    Gheisari, Marzieh
    Baghshah, Mahdieh Soleymani
    NEUROCOMPUTING, 2015, 165 : 300 - 311
  • [3] Unsupervised Domain Adaptation in the Wild via Disentangling Representation Learning
    Li, Haoliang
    Wan, Renjie
    Wang, Shiqi
    Kot, Alex C.
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (02) : 267 - 283
  • [5] Representation learning for unsupervised domain adaptation
    Xu Y.
    Yan H.
    Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2021, 53 (02): : 40 - 46
  • [6] Unsupervised Domain Adaptation via Stacked Convolutional Autoencoder
    Zhu, Yi
    Zhou, Xinke
    Wu, Xindong
    APPLIED SCIENCES-BASEL, 2023, 13 (01):
  • [7] Learning Smooth Representation for Unsupervised Domain Adaptation
    Cai, Guanyu
    He, Lianghua
    Zhou, MengChu
    Alhumade, Hesham
    Hu, Die
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4181 - 4195
  • [8] Disentangled Representation Learning with Causality for Unsupervised Domain Adaptation
    Wang, Shanshan
    Chen, Yiyang
    He, Zhenwei
    Yang, Xun
    Wang, Mengzhu
    You, Quanzeng
    Zhang, Xingyi
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2918 - 2926
  • [9] Joint metric and feature representation learning for unsupervised domain adaptation
    Xie, Yue
    Du, Zhekai
    Li, Jingjing
    Jing, Mengmeng
    Chen, Erpeng
    Lu, Ke
    KNOWLEDGE-BASED SYSTEMS, 2020, 192
  • [10] Unsupervised domain adaptation with Joint Adversarial Variational AutoEncoder
    Li, Yuze
    Zhang, Yan
    Yang, Chunling
    KNOWLEDGE-BASED SYSTEMS, 2022, 250