Arbitrary Style Transfer with Parallel Self-Attention

Cited by: 3
Authors
Zhang, Tiange [1 ]
Gao, Ying [1 ]
Gao, Feng [1 ]
Qi, Lin [1 ]
Dong, Junyu [1 ]
Affiliations
[1] Ocean Univ China, Sch Informat Sci & Engn, Qingdao 266100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
style transfer; attention mechanism; instance normalization; Laplacian matrix;
DOI
10.1109/ICPR48806.2021.9412049
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Neural style transfer aims to create artistic images by synthesizing patterns from a given style image. Recently, the Adaptive Instance Normalization (AdaIN) layer was proposed to achieve real-time arbitrary style transfer. However, we observe that if the crucial features produced by AdaIN are further emphasized during transfer, both content and style information are better reflected in the stylized images. Furthermore, it is essential to preserve more details and reduce unexpected artifacts in order to generate appealing results. In this paper, we introduce an improved arbitrary style transfer method based on the self-attention mechanism. A self-attention module is designed to learn what and where to emphasize in the input image. In addition, an extra Laplacian loss is applied to preserve the structural details of the content while eliminating artifacts. Experimental results demonstrate that the proposed method outperforms AdaIN and generates more appealing results.
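The abstract names three ingredients: the AdaIN layer, a self-attention module, and a Laplacian loss. Below is a minimal PyTorch sketch of each; the tensor shapes, the SAGAN-style attention design, and the 3x3 Laplacian kernel are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adain(content_feat, style_feat, eps=1e-5):
    """AdaIN (Huang & Belongie, 2017): align the channel-wise mean and
    standard deviation of the content features to those of the style
    features. Inputs are (N, C, H, W) encoder feature maps."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

class SelfAttention(nn.Module):
    """A common (SAGAN-style) spatial self-attention block that learns
    'what and where to emphasize'; the paper's parallel variant may differ."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (N, HW, C//8)
        k = self.key(x).flatten(2)                    # (N, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (N, HW, HW)
        v = self.value(x).flatten(2)                  # (N, C, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return self.gamma * out + x

def laplacian_loss(content_img, stylized_img):
    """Penalize differences between Laplacian-filtered content and output
    images, preserving structural detail and suppressing artifacts.
    Uses a depthwise 3x3 discrete Laplacian kernel (an assumption)."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=content_img.device)
    c = content_img.shape[1]
    kernel = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    lap_c = F.conv2d(content_img, kernel, padding=1, groups=c)
    lap_s = F.conv2d(stylized_img, kernel, padding=1, groups=c)
    return F.mse_loss(lap_s, lap_c)
```

In a typical AdaIN pipeline, the attention block would be applied to the encoder features around the AdaIN step, and the Laplacian loss added to the usual content and style losses with a weighting hyperparameter; how the paper combines them in parallel is not specified in the abstract.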
Pages: 1406 - 1413
Number of pages: 8