Image style transfer with saliency constrained and SIFT feature fusion

Cited: 0
Authors
Sun, Yaqi [1 ,3 ]
Xie, Xiaolan [1 ,2 ]
Li, Zhi [1 ]
Zhao, Huihuang [3 ]
Affiliations
[1] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin, Guangxi, Peoples R China
[2] Guilin Univ Technol, Sch Informat Sci & Engn, Guilin, Guangxi, Peoples R China
[3] Hengyang Normal Univ, Sch Comp Sci & Technol, Hengyang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image style transfer; Patch matching; Saliency feature constraint; Feature fusion; Scale-invariant feature transform;
DOI
10.1007/s00371-024-03698-4
Chinese Library Classification (CLC) code
TP31 [Computer Software];
Subject classification codes
081202 ; 0835 ;
Abstract
This article develops a novel image style transfer method that transforms input images using a neural network (NN) model. Common neural style transfer techniques often struggle to fully transfer the texture and color of the style image to the target (content) image, or they introduce visible artifacts. To mitigate these issues, this article proposes a new saliency constraint method. First, existing saliency detection methods are evaluated to select the one most appropriate for this approach. The selected saliency map feature is used to detect objects in the style image that correspond to objects with the same saliency map feature in the content image. Furthermore, to address the challenges posed by style and content images of different sizes or resolutions, the scale-invariant feature transform (SIFT) is employed to generate a variety of attribute images. These images are then used to create additional feature maps for patch matching. Consequently, a novel loss function is proposed by combining saliency feature loss, style loss, and content loss; it also incorporates the gradient of the saliency feature constraint into the style transfer iterations. Finally, the input images and saliency map feature results are used as multi-channel inputs to the improved deep convolutional neural network (CNN) model for style transfer. Extensive experimental results demonstrate that the saliency feature map of the source image helps find correct matches and avoid artifacts. Tests on different types of images also show that the proposed method generates better results than other recently published representative methods and delivers superior performance.
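The abstract describes a loss that combines a saliency feature term with the usual style and content terms. The following is a minimal illustrative sketch of such a combined loss in PyTorch; the penalty form, layer choices, and weights (w_content, w_style, w_saliency) are assumptions made for illustration and are not taken from the paper's implementation.

```python
# Illustrative sketch only: combines content, style (Gram matrix), and a
# saliency constraint term. Weights and the exact penalty form are assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map from a CNN layer
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(gen_feats, content_feats, style_feats,
               gen_saliency, content_saliency,
               w_content=1.0, w_style=1e3, w_saliency=1e1):
    # Content loss: feature reconstruction at the chosen layers
    l_content = sum(F.mse_loss(g, c) for g, c in zip(gen_feats, content_feats))
    # Style loss: match Gram-matrix statistics of the style image
    l_style = sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                  for g, s in zip(gen_feats, style_feats))
    # Saliency constraint (hypothetical form): keep the stylized image's
    # saliency map close to the content image's saliency map
    l_saliency = F.mse_loss(gen_saliency, content_saliency)
    return w_content * l_content + w_style * l_style + w_saliency * l_saliency
```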
Pages: 4915-4930
Page count: 16