Edge Sensitive Unsupervised Image-to-Image Translation

Times Cited: 0
Authors
Akkaya, Ibrahim Batuhan [1 ,2 ]
Halici, Ugur [2 ,3 ]
Affiliations
[1] Aselsan Inc, Res Ctr, Ankara, Turkey
[2] Middle East Tech Univ, Dept Elect & Elect Engn, Ankara, Turkey
[3] NOROM Neurosci & Neurotechnol Excellency Ctr, Ankara, Turkey
Source
2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU) | 2020
Keywords
Generative adversarial networks; image-to-image translation; domain adaptation; image processing;
DOI
Not available
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
The goal of unsupervised image-to-image translation (IIT) is to learn a mapping from a source domain to a target domain without using paired image sets. Most current IIT methods apply adversarial training to match the distribution of the translated images to that of the target images. However, this can create artifacts in uniform areas of the source image when the two domains have different background distributions. In this work, we propose an unsupervised IIT method that preserves the uniform background information of the source images. Edge information computed with the Sobel operator is used to reduce these artifacts. To this end, we introduce an edge-preserving loss function, the Sobel loss, defined as the L2 norm between the Sobel responses of the original and translated images. The proposed method is validated on the jellyfish-to-Haeckel dataset, which was prepared to demonstrate the above problem and contains images with differing uniform background distributions. Our method achieves a clear performance gain over the baseline method, demonstrating the effectiveness of the Sobel loss.
Pages: 4
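
Below is a minimal sketch of the edge-preserving Sobel loss described in the abstract, written in PyTorch for illustration; the function names, tensor layout, and the use of a mean-squared (L2) difference are assumptions of this sketch, not details taken from the authors' implementation.

import torch
import torch.nn.functional as F

def sobel_response(img: torch.Tensor) -> torch.Tensor:
    # img: (N, C, H, W) batch of images (assumed layout, not from the paper).
    # Horizontal and vertical 3x3 Sobel kernels.
    kx = torch.tensor([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]], device=img.device, dtype=img.dtype)
    ky = kx.t()
    c = img.shape[1]
    # Depthwise convolution: apply the same kernel to every channel.
    wx = kx.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    wy = ky.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, wx, padding=1, groups=c)
    gy = F.conv2d(img, wy, padding=1, groups=c)
    # Gradient magnitude (small epsilon keeps the sqrt differentiable at zero).
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def sobel_loss(source: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    # L2 (mean-squared) difference between the Sobel responses of the source
    # image and its translation, as sketched from the abstract.
    return F.mse_loss(sobel_response(source), sobel_response(translated))

# Illustrative usage (hypothetical names): add the Sobel term to the
# generator's adversarial objective with some weight lambda_sobel.
# total_loss = adversarial_loss + lambda_sobel * sobel_loss(x_source, g(x_source))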