Arbitrary Style Transfer via Multi-Adaptation Network

Cited by: 111
Authors
Deng, Yingying [1 ,2 ]
Tang, Fan [2 ]
Dong, Weiming [2 ,3 ]
Sun, Wen [1 ,4 ]
Huang, Feiyue [5 ]
Xu, Changsheng [2 ,3 ]
Affiliations
[1] UCAS, Sch Artificial Intelligence, Beijing, Peoples R China
[2] Chinese Acad Sci, NLPR, Inst Automat, Beijing, Peoples R China
[3] CASIA LLVis Joint Lab, Beijing, Peoples R China
[4] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[5] Tencent, Youtu Lab, Shenzhen, Peoples R China
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China;
Keywords
Arbitrary style transfer; Feature disentanglement; Adaptation;
DOI
10.1145/3394171.3414015
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Arbitrary style transfer is a significant research topic with broad application prospects. Given a content image and a reference style painting, a desired style transfer renders the content image with the color tone and vivid stroke patterns of the style painting while preserving the detailed content structure. Style transfer approaches typically first learn content and style representations from the content and style references and then generate stylized images guided by these representations. In this paper, we propose a multi-adaptation network that comprises two self-adaptation (SA) modules and one co-adaptation (CA) module. The SA modules adaptively disentangle the content and style representations: the content SA module uses position-wise self-attention to enhance the content representation, and the style SA module uses channel-wise self-attention to enhance the style representation. The CA module rearranges the distribution of the style representation to match the distribution of the content representation by computing the local similarity between the disentangled content and style features in a non-local fashion. Moreover, a new disentanglement loss function enables our network to extract the main style patterns and the exact content structures so as to adapt to various input images. Extensive qualitative and quantitative experiments demonstrate that the proposed multi-adaptation network outperforms state-of-the-art style transfer methods.
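The three attention operations named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the function names are mine, raw dot-product similarity stands in for the paper's learned similarity (the actual network uses learned 1x1 projections, VGG features, and normalization that are omitted here).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_wise_attention(F):
    """Content SA sketch: non-local attention between spatial positions.
    F: (C, H, W) feature map; output has the same shape."""
    C, H, W = F.shape
    X = F.reshape(C, H * W)            # (C, N), N = H*W spatial positions
    attn = softmax(X.T @ X, axis=-1)   # (N, N) position-position similarity
    return (X @ attn.T).reshape(C, H, W)

def channel_wise_attention(F):
    """Style SA sketch: attention between channels, which correlate with
    style statistics (cf. Gram matrices) rather than spatial layout."""
    C, H, W = F.shape
    X = F.reshape(C, H * W)
    attn = softmax(X @ X.T, axis=-1)   # (C, C) channel-channel similarity
    return (attn @ X).reshape(C, H, W)

def co_adaptation(Fc, Fs):
    """CA sketch: rearrange style features onto the content layout by
    cross-attention (each content position queries all style positions)."""
    C, H, W = Fc.shape
    Xc = Fc.reshape(C, -1)             # content features, (C, Nc)
    Xs = Fs.reshape(Fs.shape[0], -1)   # style features,   (C, Ns)
    attn = softmax(Xc.T @ Xs, axis=-1) # (Nc, Ns) local content-style similarity
    return (Xs @ attn.T).reshape(C, H, W)
```

Note that all three outputs keep the content feature map's spatial resolution, which is what lets the rearranged style statistics be decoded back into a stylized image of the original layout.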
Pages: 2719 - 2727
Page count: 9
Related References
32 items in total
  • [21] Arbitrary Style Transfer with Style-Attentional Networks
    Park, Dae Young
    Lee, Kwang Hee
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 5873 - 5881
  • [22] Wiki Art Gallery, Inc.: A Case for Critical Thinking
    Phillips, Fred
    Mackintosh, Brandy
    [J]. ISSUES IN ACCOUNTING EDUCATION, 2011, 26 (03): : 593 - 608
  • [23] Neural Style Transfer via Meta Networks
    Shen, Falong
    Yan, Shuicheng
    Zeng, Gang
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 8061 - 8069
  • [24] Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration
    Sheng, Lu
    Lin, Ziyi
    Shao, Jing
    Wang, Xiaogang
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 8242 - 8250
  • [25] Shi YJ, 2019, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, V32
  • [26] Ulyanov D, 2016, PROCEEDINGS OF MACHINE LEARNING RESEARCH, V48
  • [27] Wang H., 2017, arXiv:1703.07255
  • [28] VR content creation and exploration with deep learning: A survey
    Wang, Miao
    Lyu, Xu-Quan
    Li, Yi-Jun
    Zhang, Fang-Lue
    [J]. COMPUTATIONAL VISUAL MEDIA, 2020, 6 (01) : 3 - 28
  • [29] Direction-aware Neural Style Transfer
    Wu, Hao
    Sun, Zhengxing
    Yuan, Weihang
    [J]. PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1163 - 1171
  • [30] Attention-aware Multi-stroke Style Transfer
    Yao, Yuan
    Ren, Jianqiang
    Xie, Xuansong
    Liu, Weidong
    Liu, Yong-Jin
    Wang, Jun
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 1467 - 1475