Invertible Autoencoder for Domain Adaptation

Cited by: 7
Authors:
Teng, Yunfei [1]
Choromanska, Anna [1]
Affiliations:
[1] NYU, Tandon Sch Engn, Dept Elect & Comp Engn, 5 MetroTech Ctr, Brooklyn, NY 11201 USA
Keywords:
image-to-image translation; autoencoder; invertible autoencoder
DOI:
10.3390/computation7020020
CLC classification:
O1 [Mathematics]
Subject classification codes:
0701; 070101
Abstract
Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from its marginals. State-of-the-art methods such as CycleGAN commonly learn this translation by jointly training coupled mappings F_AB: A -> B and F_BA: B -> A under a cycle-consistency requirement, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images, but it does not explicitly enforce F_BA to be the inverse of F_AB. We propose a new deep architecture, which we call the invertible autoencoder (InvAuto), to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters, and the mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters (by up to a factor of 2). We present image translation results on benchmark datasets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality, and show that the NVIDIA neural-network-based end-to-end learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.
Pages: 21
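
The abstract describes the core idea of InvAuto: the decoder reuses the encoder's weights in transposed form, and the layer mappings are encouraged to be orthonormal, so that the decoder approximately inverts the encoder while roughly halving the number of trainable parameters. Below is a minimal PyTorch sketch of that weight-tying idea, not the authors' exact architecture; the layer sizes, the tanh nonlinearity, and the soft orthonormality penalty are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedInvertibleAE(nn.Module):
    # Hypothetical layer sizes; the shared ParameterList holds one weight
    # matrix per layer, used by the encoder directly and by the decoder in
    # transposed form (weight tying roughly halves the parameter count).
    def __init__(self, dims=(784, 256, 64)):
        super().__init__()
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.empty(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
        )
        for w in self.weights:
            nn.init.orthogonal_(w)  # start close to an orthonormal mapping

    def encode(self, x):
        for w in self.weights:
            x = torch.tanh(F.linear(x, w))  # y = tanh(W x)
        return x

    def decode(self, z):
        for w in reversed(self.weights):
            # Invert each encoder layer: x ~ W^T atanh(y), valid when W W^T ~ I.
            z = F.linear(torch.atanh(z.clamp(-0.999, 0.999)), w.t())
        return z

    def orthonormality_penalty(self):
        # Soft constraint pushing W W^T towards the identity, so that the
        # transposed weight acts as an (approximate) inverse of the layer.
        return sum(
            ((w @ w.t()) - torch.eye(w.shape[0], device=w.device)).pow(2).sum()
            for w in self.weights
        )

model = TiedInvertibleAE()
x = torch.randn(8, 784)
recon = model.decode(model.encode(x))
loss = F.mse_loss(recon, x) + 1e-3 * model.orthonormality_penalty()
loss.backward()

As a sanity check, decode(encode(x)) should approximately reconstruct x once the reconstruction loss and the orthonormality penalty are jointly minimized, which is the sense in which the encoder and decoder become inverses of each other.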