Adversarial Training Inspired Self-attention Flow for Universal Image Style Transfer

Times Cited: 1
Authors
Dang, Kaiheng [1 ]
Lai, Jianhuang [1 ,2 ,3 ]
Dong, Junhao [1 ]
Xie, Xiaohua [1 ,2 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Guangdong Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
Keywords
Image style transfer; Flow-based model; Adversarial robust feature
DOI
10.1007/978-3-031-02444-3_36
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Flow-based models have received increasing attention and have recently been applied to image style transfer. While these methods can achieve impressive results, their stacked convolutions remain inefficient and cannot focus on the most valuable features. Starting from training an adversarially robust model, we find that robust features, whether in the perceptual loss network or in the transfer model itself, lead to better universal style transfer (UST) results. Based on this observation, we improve the current Glow model by introducing a self-attention mechanism through three different blocks built on ViT, non-local, and involution operators, respectively. The designed feature-extraction blocks capture more valuable deep features with fewer parameters, making Glow more effective and efficient for UST. Our improved Glow generates artistic images that are more visually pleasing and more stable. Comparisons of both visual results and quantitative metrics show that our improvements make Glow more suitable for UST.
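As a concrete illustration of the idea in the abstract, the minimal PyTorch sketch below shows one plausible realization: a non-local self-attention block (one of the three variants named above) serving as the scale-and-shift network inside a Glow-style affine coupling layer. This is an illustrative assumption, not the authors' implementation; all class names, channel sizes, and the exact coupling formulation are hypothetical.

```python
# Illustrative sketch only: a non-local attention block inside a Glow-style
# affine coupling layer. Names and shapes are hypothetical, based on the
# standard Glow (Kingma & Dhariwal, 2018) and non-local (Wang et al., 2018)
# formulations, not on the paper's released code.
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Self-attention over all spatial positions of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        inner = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # queries
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # keys
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # values
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection


class AttentionAffineCoupling(nn.Module):
    """Glow-style coupling whose scale/shift net uses self-attention."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        assert channels % 2 == 0, "coupling splits channels in half"
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            NonLocalBlock(hidden),   # attention in place of stacked convs
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)                  # Glow's stabilized scale
        y2 = (x2 + t) * s
        logdet = torch.log(s).flatten(1).sum(dim=1)     # flow log-likelihood term
        return torch.cat([x1, y2], dim=1), logdet

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([y1, y2 / s - t], dim=1)


# Quick round-trip check: the coupling must be exactly invertible.
x = torch.randn(2, 8, 16, 16)
layer = AttentionAffineCoupling(channels=8)
y, logdet = layer(x)
print(torch.allclose(x, layer.inverse(y), atol=1e-5))  # True
```

Because the attention block only parameterizes the coupling's scale and shift, the layer remains exactly invertible, which is what lets a flow-based model reconstruct stylized images; the final `allclose` check verifies this round trip.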
Pages: 476-489
Number of Pages: 14
Related Papers
50 records in total
  • [1] Arbitrary Style Transfer with Parallel Self-Attention
    Zhang, Tiange
    Gao, Ying
    Gao, Feng
    Qi, Lin
    Dong, Junyu
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1406 - 1413
  • [2] Consistent Arbitrary Style Transfer Using Consistency Training and Self-Attention Module
    Zhou, Zheng
    Wu, Yue
    Zhou, Yicong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16845 - 16856
  • [3] Prostate MR Image Segmentation With Self-Attention Adversarial Training Based on Wasserstein Distance
    Su, Chengwei
    Huang, Renxiang
    Liu, Chang
    Yin, Tailang
    Du, Bo
    IEEE ACCESS, 2019, 7 : 184276 - 184284
  • [4] Subgraph representation learning with self-attention and free adversarial training
    Qin, Denggao
    Tang, Xianghong
    Lu, Jianguang
    APPLIED INTELLIGENCE, 2024, : 7012 - 7029
  • [5] Speaker Identification for Household Scenarios with Self-attention and Adversarial Training
    Li, Ruirui
    Jiang, Jyun-Yu
    Wu, Xian
    Hsieh, Chu-Cheng
    Stolcke, Andreas
    INTERSPEECH 2020, 2020, : 2272 - 2276
  • [6] Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis
    Fan, Jiajie
    Vuaille, Laure
    Back, Thomas
    Wang, Hao
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 119 - 124
  • [7] Self-Attention Generative Adversarial Networks
    Zhang, Han
    Goodfellow, Ian
    Metaxas, Dimitris
    Odena, Augustus
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [8] Adversarial Self-Attention for Language Understanding
    Wu, Hongqiu
    Ding, Ruixue
    Zhao, Hai
    Xie, Pengjun
    Huang, Fei
    Zhang, Min
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13727 - 13735