EM-LAST: Effective Multidimensional Latent Space Transport for an Unpaired Image-to-Image Translation With an Energy-Based Model

Cited by: 1
Authors
Han, Giwoong [1 ]
Min, Jinhong [1 ]
Han, Sung Won [1 ]
Affiliations
[1] Korea Univ, Sch Ind & Management Engn, Seoul 02841, South Korea
Keywords
Task analysis; Aerospace electronics; Visualization; Licenses; Generative adversarial networks; Deep learning; Decoding; Energy-based model; image-to-image translation; Langevin dynamics; multidimensional latent space; vector-quantized variational autoencoder;
DOI
10.1109/ACCESS.2022.3189352
CLC number
TP [Automation technology, computer technology];
Discipline code
0812
Abstract
For unpaired image-to-image translation to work effectively, the latent space of each image domain must be well designed: the codes of each style must be translated toward the target domain while preserving the parts that correspond to the source content. Most Variational Autoencoder (VAE)-based models use a one-dimensional latent space; however, applying high-dimensional methodologies such as vector quantization requires control over a multidimensional latent space. In this study, among VAE-based models that use relatively complex multidimensional latent spaces, we apply an Energy-Based Model and Vector-Quantized VAE v2, with the latter as the main model. We show that, among the latent spaces representing each image domain, the importance of features in the top and bottom latent spaces must be interpreted differently for appropriate translation. We therefore argue that a sound understanding of how the latent space is composed is, by itself, enough to yield effective image translation. We also present various analyses and visual results of multidimensional latent space transport.
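The latent transport mentioned in the abstract follows the standard Langevin-dynamics recipe: starting from the source-domain code, repeatedly step down the gradient of an energy function while injecting small Gaussian noise. A minimal, self-contained sketch, with a toy quadratic energy standing in for the paper's learned EBM (`energy`, `langevin_transport`, and `noise_scale` are illustrative names and hyperparameters, not taken from the paper):

```python
import numpy as np

def energy(z, target):
    # Toy quadratic energy: low when the code z sits near the target-domain code.
    return 0.5 * np.sum((z - target) ** 2)

def grad_energy(z, target):
    # Gradient of the toy energy with respect to z.
    return z - target

def langevin_transport(z0, target, steps=200, step_size=0.1, noise_scale=0.01, seed=0):
    """Transport a latent code by Langevin dynamics:
    z <- z - (step_size / 2) * dE/dz + noise_scale * N(0, I)."""
    rng = np.random.default_rng(seed)
    z = z0.copy()
    for _ in range(steps):
        z = (z - 0.5 * step_size * grad_energy(z, target)
               + noise_scale * rng.standard_normal(z.shape))
    return z

# An 8x8 "multidimensional" latent grid, e.g. one level of a VQ-VAE-2 code map.
source = np.zeros((8, 8))
target = np.ones((8, 8))
moved = langevin_transport(source, target)
print(energy(moved, target) < energy(source, target))  # True: transported code has lower energy
```

In an actual EBM-based translator the quadratic `energy` would be replaced by a network trained on the target domain's codes, but the update rule itself is unchanged.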
Pages: 72839 - 72849
Page count: 11