Latent Style: multi-style image transfer via latent style coding and skip connection

Cited by: 1
Authors
Hu, Jingfei [1 ,2 ,3 ,4 ,5 ]
Wu, Guang [2 ]
Wang, Hua [1 ,2 ,3 ,4 ,5 ]
Zhang, Jicong [1 ,2 ,3 ,4 ,5 ]
Affiliations
[1] Beihang Univ, Sch Biol Sci & Med Engn, Beijing, Peoples R China
[2] Beihang Univ, Hefei Innovat Res Inst, Hefei, Peoples R China
[3] Beihang Univ, Beijing Adv Innovat Ctr Biomed Engn, Beijing, Peoples R China
[4] Beihang Univ, Beijing Adv Innovat Ctr Big Data Based Precis Med, Beijing, Peoples R China
[5] Anhui Med Univ, Sch Biomed Engn, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Generative adversarial network; Multimodal unsupervised image-to-image translation (MUNIT); Skip connection; Image-to-image translation;
DOI
10.1007/s11760-021-01940-3
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
Unsupervised multi-style image translation is an important and challenging problem in image translation. The translation relations between interrelated images should be analyzed from multiple angles, as these relations are neither merely unidirectional nor governed by a single factor. Multi-style image translation algorithms have recently emerged to establish a multifaceted relationship between coupled images and to interpret their features, fully expressing the content and semantic information of these images. One key algorithm, multimodal unsupervised image-to-image translation (MUNIT), achieves reasonable unsupervised translation, but it represents image style with random noise, which leads to suboptimal multi-style representation. To achieve better multi-style image translation, we propose an improved MUNIT scheme equipped with style coding, skip connections, and a self-attention mechanism. The proposed scheme pays closer attention to image style coding as well as to global and detailed image information. Extensive experimental comparisons with state-of-the-art methods on various image translation tasks demonstrate the advantages of this scheme both qualitatively and quantitatively.
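The style coding the abstract refers to follows the MUNIT family, where style is typically injected into the decoder via adaptive instance normalization (AdaIN): content features are normalized per channel, then rescaled and shifted by parameters predicted from a style code. The following NumPy sketch illustrates only this general mechanism; the function name and shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adain(content, style_gamma, style_beta, eps=1e-5):
    """AdaIN-style feature modulation (illustrative sketch).

    content     : (C, H, W) content feature map
    style_gamma : (C, 1, 1) per-channel scale predicted from a style code
    style_beta  : (C, 1, 1) per-channel shift predicted from a style code
    """
    # Normalize each channel of the content features to zero mean, unit std
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    # Re-style: the style code dictates the new per-channel statistics
    return style_gamma * normalized + style_beta
```

After this step, each channel of the output carries the mean `style_beta` and standard deviation `style_gamma`, so a learned style encoding (rather than random noise) directly controls the rendered style while the spatial content layout is preserved.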
Pages: 359-368
Number of pages: 10
Related Papers
50 records
  • [1] Latent Style: multi-style image transfer via latent style coding and skip connection
    Jingfei Hu
    Guang Wu
    Hua Wang
    Jicong Zhang
    Signal, Image and Video Processing, 2022, 16 : 359 - 368
  • [2] Image Style Transfer via Multi-Style Geometry Warping
    Alexandru, Ioana
    Nicula, Constantin
    Prodan, Cristian
    Rotaru, Razvan-Paul
    Tarba, Nicolae
    Boiangiu, Costin-Anton
    APPLIED SCIENCES-BASEL, 2022, 12 (12):
  • [3] Text Style Transfer via Learning Style Instance Supported Latent Space
    Yi, Xiaoyuan
    Liu, Zhenghao
    Li, Wenhao
    Sun, Maosong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3801 - 3807
  • [4] Fast Video Multi-Style Transfer
    Gao, Wei
    Li, Yijun
    Yin, Yihang
    Yang, Ming-Hsuan
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 3211 - 3219
  • [5] Interactive Artistic Multi-style Transfer
    Wang, Xiaohui
    Lyu, Yiran
    Huang, Junfeng
    Wang, Ziying
    Qin, Jingyan
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2021, 14 (01)
  • [6] Interactive Artistic Multi-style Transfer
    Xiaohui Wang
    Yiran Lyu
    Junfeng Huang
    Ziying Wang
    Jingyan Qin
    International Journal of Computational Intelligence Systems, 14
  • [7] Style-Aware Contrastive Learning for Multi-Style Image Captioning
    Zhou, Yucheng
    Long, Guodong
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2257 - 2267
  • [8] Style Mixer: Semantic-aware Multi-Style Transfer Network
    Huang, Zixuan
    Zhang, Jinghuai
    Liao, Jing
    COMPUTER GRAPHICS FORUM, 2019, 38 (07) : 469 - 480
  • [9] Multi-style image transfer system using conditional cycleGAN
    Tu, Ching-Ting
    Lin, Hwei Jen
    Tsai, Yihjia
    IMAGING SCIENCE JOURNAL, 2021, 69 (1-4): : 1 - 14
  • [10] Text Style Transfer: Leveraging a Style Classifier on Entangled Latent Representations
    Li, Xiaoyan
    Sun, Sun
    Wang, Yunli
    REPL4NLP 2021: PROCEEDINGS OF THE 6TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP, 2021, : 72 - 82