Arbitrary Style Transfer via Multi-Adaptation Network

Cited by: 111
Authors
Deng, Yingying [1 ,2 ]
Tang, Fan [2 ]
Dong, Weiming [2 ,3 ]
Sun, Wen [1 ,4 ]
Huang, Feiyue [5 ]
Xu, Changsheng [2 ,3 ]
Affiliations
[1] UCAS, Sch Artificial Intelligence, Beijing, Peoples R China
[2] Chinese Acad Sci, NLPR, Inst Automat, Beijing, Peoples R China
[3] CASIA LLVis Joint Lab, Beijing, Peoples R China
[4] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[5] Tencent, Youtu Lab, Shenzhen, Peoples R China
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China;
Keywords
Arbitrary style transfer; Feature disentanglement; Adaptation;
DOI
10.1145/3394171.3414015
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Arbitrary style transfer is a significant research topic with both theoretical value and broad application prospects. Given a content image and a reference style painting, a desired style transfer would render the content image with the color tone and vivid stroke patterns of the style painting while preserving the detailed content structure. Style transfer approaches typically first learn content and style representations from the content and style references and then generate stylized images guided by these representations. In this paper, we propose a multi-adaptation network that comprises two self-adaptation (SA) modules and one co-adaptation (CA) module. The SA modules adaptively disentangle the content and style representations: the content SA module uses position-wise self-attention to enhance the content representation, and the style SA module uses channel-wise self-attention to enhance the style representation. The CA module rearranges the distribution of the style representation according to the distribution of the content representation by computing the local similarity between the disentangled content and style features in a non-local fashion. Moreover, a new disentanglement loss function enables the network to extract the main style patterns and the exact content structures, allowing it to adapt to various input images. Extensive qualitative and quantitative experiments demonstrate that the proposed multi-adaptation network produces better results than state-of-the-art style transfer methods.
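Code sketch
As an illustration of the mechanisms described in the abstract, the following minimal PyTorch sketch shows position-wise self-attention (content SA), channel-wise self-attention (style SA), and a non-local co-adaptation step that redistributes style features according to content similarity. This is not the authors' released implementation; the module names (ContentSA, StyleSA, CoAdaptation), the 1x1 projection layers, the channels // 8 reduction, and the instance normalization before the similarity computation are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentSA(nn.Module):
    # Position-wise self-attention: the attention map relates spatial
    # positions (HW x HW), enhancing long-range content structure.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)           # B x HW x C'
        k = self.key(x).flatten(2)                              # B x C' x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)               # B x HW x HW
        v = self.value(x).flatten(2)                            # B x C x HW
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return x + out                                          # residual enhancement

class StyleSA(nn.Module):
    # Channel-wise self-attention: the attention map relates feature
    # channels (C x C), whose statistics carry texture and color style.
    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                        # B x C x HW
        attn = F.softmax(torch.bmm(f, f.transpose(1, 2)), dim=-1)  # B x C x C
        out = torch.bmm(attn, f).view(b, c, h, w)
        return x + out

class CoAdaptation(nn.Module):
    # Non-local co-adaptation: style features are moved to the content
    # positions they are most similar to.
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)               # content projection
        self.g = nn.Conv2d(channels, channels, 1)               # style projection

    def forward(self, content, style):
        b, c, h, w = content.shape
        # Mean-variance normalization before measuring similarity (assumption).
        q = self.f(F.instance_norm(content)).flatten(2).transpose(1, 2)  # B x HWc x C
        k = self.g(F.instance_norm(style)).flatten(2)                    # B x C x HWs
        attn = F.softmax(torch.bmm(q, k), dim=-1)                        # B x HWc x HWs
        v = style.flatten(2).transpose(1, 2)                             # B x HWs x C
        return torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)

# Usage (shapes only), e.g. on deep encoder feature maps:
#   content, style = torch.randn(1, 512, 32, 32), torch.randn(1, 512, 32, 32)
#   fused = CoAdaptation(512)(ContentSA(512)(content), StyleSA()(style))

The intuition behind the split: spatial (position-wise) attention relates image locations and therefore sharpens content structure, whereas channel-wise attention relates feature maps, whose statistics encode texture and color, and therefore sharpens style; the co-adaptation step then copies style features to the content positions with the highest normalized feature similarity.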
Pages: 2719-2727
Number of pages: 9