Arbitrary Style Transfer with Parallel Self-Attention

Cited by: 3
Authors
Zhang, Tiange [1 ]
Gao, Ying [1 ]
Gao, Feng [1 ]
Qi, Lin [1 ]
Dong, Junyu [1 ]
Affiliations
[1] Ocean Univ China, Sch Informat Sci & Engn, Qingdao 266100, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
style transfer; attention mechanism; instance normalization; Laplacian matrix
DOI
10.1109/ICPR48806.2021.9412049
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural style transfer aims to create artistic images by synthesizing patterns from a given style image. Recently, the Adaptive Instance Normalization (AdaIN) layer was proposed to achieve real-time arbitrary style transfer. However, we observe that if the crucial features produced by AdaIN are further emphasized during transfer, both content and style information are better reflected in the stylized image. Furthermore, it is essential to preserve detail and suppress unexpected artifacts in order to generate appealing results. In this paper, we introduce an improved arbitrary style transfer method based on the self-attention mechanism. A self-attention module is designed to learn what and where to emphasize in the input image. In addition, an extra Laplacian loss is applied to preserve structural details of the content while eliminating artifacts. Experimental results demonstrate that the proposed method outperforms AdaIN and generates more appealing results.
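The abstract names three building blocks: AdaIN for aligning feature statistics, a self-attention module that learns what and where to emphasize, and a Laplacian loss that preserves content structure. The PyTorch sketch below is a minimal, hedged illustration of what such blocks commonly look like; it is not the authors' implementation, and all module names, kernel choices, and hyper-parameters here are assumptions.

```python
# Illustrative sketch only: AdaIN, a SAGAN-style self-attention block, and a
# Laplacian loss. Names and hyper-parameters are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


def adain(content_feat, style_feat, eps=1e-5):
    """Align channel-wise mean/std of content features to those of the style."""
    b, c = content_feat.shape[:2]
    c_mean = content_feat.view(b, c, -1).mean(-1).view(b, c, 1, 1)
    c_std = content_feat.view(b, c, -1).std(-1).view(b, c, 1, 1) + eps
    s_mean = style_feat.view(b, c, -1).mean(-1).view(b, c, 1, 1)
    s_std = style_feat.view(b, c, -1).std(-1).view(b, c, 1, 1) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean


class SelfAttention(nn.Module):
    """Non-local self-attention that re-weights feature positions so salient
    regions are emphasized; output is a learned residual added to the input."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, h * w)                       # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)            # B x N x N
        v = self.value(x).view(b, c, h * w)                      # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x


def laplacian_loss(output, content):
    """Penalize differences between the Laplacian (edge) responses of the
    stylized output and the content image to preserve structural detail."""
    kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                          device=output.device).view(1, 1, 3, 3)
    kernel = kernel.repeat(output.shape[1], 1, 1, 1)
    lap_out = F.conv2d(output, kernel, padding=1, groups=output.shape[1])
    lap_con = F.conv2d(content, kernel, padding=1, groups=content.shape[1])
    return F.mse_loss(lap_out, lap_con)
```

In such a setup, the Laplacian term would typically be added to the usual content and style losses with a weighting factor; the weighting used in the paper is not reproduced here.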
Pages: 1406-1413 (8 pages)