Dynamic Instance Normalization for Arbitrary Style Transfer

Cited by: 0
Authors
Jing, Yongcheng [1 ]
Liu, Xiao [2 ]
Ding, Yukang [2 ]
Wang, Xinchao [3 ]
Ding, Errui [2 ]
Song, Mingli [1 ]
Wen, Shilei [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Baidu Inc, Dept Comp Vis Technol VIS, Beijing, Peoples R China
[3] Stevens Inst Technol, Hoboken, NJ 07030 USA
Funding
National Natural Science Foundation of China
Keywords
DOI
None available
Chinese Library Classification
TP18 (Theory of Artificial Intelligence)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Prior normalization methods rely on affine transformations to produce arbitrary image style transfers, of which the parameters are computed in a pre-defined way. Such a manually-defined nature eventually results in high-cost and shared encoders for both style and content encoding, making style transfer systems cumbersome to deploy in resource-constrained environments such as mobile devices. In this paper, we propose a new and generalized normalization module, termed Dynamic Instance Normalization (DIN), that allows for flexible and more efficient arbitrary style transfer. Comprising an instance normalization and a dynamic convolution, DIN encodes a style image into learnable convolution parameters, upon which the content image is stylized. Unlike conventional methods that use shared complex encoders to encode content and style, the proposed DIN introduces a sophisticated style encoder, yet comes with a compact and lightweight content encoder for fast inference. Experimental results demonstrate that the proposed approach yields very encouraging results on challenging style patterns and, to the best of our knowledge, for the first time enables arbitrary style transfer using a MobileNet-based lightweight architecture, leading to a reduction factor of more than twenty in computational cost compared to existing approaches. Furthermore, the proposed DIN provides flexible support for state-of-the-art convolutional operations, and thus triggers novel functionalities, such as uniform-stroke placement for non-natural images and automatic spatial-stroke control.
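The core operation described in the abstract can be sketched as follows: instance-normalize the content features, then transform them with convolution parameters generated from the style image. This is a minimal NumPy illustration, not the paper's implementation; the parameter generator here is a hypothetical stand-in that uses channel-wise style statistics (with which DIN degenerates to an AdaIN-style affine transform), whereas the actual method learns the generator and can emit full convolution kernels.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (C, H, W); normalize each channel over its spatial dimensions
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def dynamic_instance_norm(content_feat, style_feat):
    """DIN sketch: a style encoder maps style features to per-channel
    1x1 convolution parameters (weight, bias), which then transform the
    instance-normalized content features.  The generator below is a
    stand-in (channel statistics); the paper learns it end-to-end."""
    normalized = instance_norm(content_feat)
    # Hypothetical parameter generator: channel-wise style statistics.
    weight = style_feat.std(axis=(1, 2))   # (C,) per-channel scale
    bias = style_feat.mean(axis=(1, 2))    # (C,) per-channel shift
    # Apply as a depthwise 1x1 "dynamic convolution" on the content.
    return normalized * weight[:, None, None] + bias[:, None, None]

content = np.random.randn(4, 8, 8)  # toy content feature map
style = np.random.randn(4, 8, 8)    # toy style feature map
out = dynamic_instance_norm(content, style)
print(out.shape)  # (4, 8, 8)
```

Because the content branch only needs instance normalization plus the lightweight dynamic transform, the content encoder can be kept compact, which is what enables the MobileNet-based deployment claimed above.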
Pages: 4369-4376 (8 pages)
Related papers (50 total)
  • [21] Intrinsic-style distribution matching for arbitrary style transfer
    Liu, Meichen
    Lin, Songnan
    Zhang, Hengmin
    Zha, Zhiyuan
    Wen, Bihan
    KNOWLEDGE-BASED SYSTEMS, 2024, 296
  • [22] Arbitrary style transfer via content consistency and style consistency
    Yu, Xiaoming
    Zhou, Gan
    The Visual Computer, 2024, 40 : 1369 - 1382
  • [23] Text Style Transfer via Learning Style Instance Supported Latent Space
    Yi, Xiaoyuan
    Liu, Zhenghao
    Li, Wenhao
    Sun, Maosong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3801 - 3807
  • [24] Adversarial training for fast arbitrary style transfer
    Xu, Zheng
    Wilber, Michael
    Fang, Chen
    Hertzmann, Aaron
    Jin, Hailin
    COMPUTERS & GRAPHICS-UK, 2020, 87 : 1 - 11
  • [25] DETAIL-PRESERVING ARBITRARY STYLE TRANSFER
    Zhu, Ling
    Liu, Shiguang
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [26] A SIMPLE WAY OF MULTIMODAL AND ARBITRARY STYLE TRANSFER
    Anh-Duc Nguyen
    Choi, Seonghwa
    Kim, Woojae
    Lee, Sanghoon
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1752 - 1756
  • [27] Arbitrary Style Transfer with Adaptive Channel Network
    Wang, Yuzhuo
    Geng, Yanlin
    MULTIMEDIA MODELING (MMM 2022), PT I, 2022, 13141 : 481 - 492
  • [28] CLAST: Contrastive Learning for Arbitrary Style Transfer
    Wang, Xinhao
    Wang, Wenjing
    Yang, Shuai
    Liu, Jiaying
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6761 - 6772
  • [29] Assessing arbitrary style transfer like an artist
    Chen, Hangwei
    Shao, Feng
    Mu, Baoyang
    Jiang, Qiuping
    DISPLAYS, 2024, 85
  • [30] Arbitrary Style Transfer with Deep Feature Reshuffle
    Gu, Shuyang
    Chen, Congliang
    Liao, Jing
    Yuan, Lu
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 8222 - 8231