Evolvement Constrained Adversarial Learning for Video Style Transfer

Cited: 1
Authors
Li, Wenbo [1 ]
Wen, Longyin [2 ]
Bian, Xiao [3 ]
Lyu, Siwei [1 ]
Affiliations
[1] SUNY Albany, Albany, NY 12222 USA
[2] JD Finance AI Lab, San Francisco, CA USA
[3] GE Global Res, Niskayuna, NY USA
Source
COMPUTER VISION - ACCV 2018, PT I | 2019 / Vol. 11361
DOI
10.1007/978-3-030-20887-5_15
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video style transfer is a useful component in applications such as augmented reality, non-photorealistic rendering, and interactive games. Many existing methods rely on optical flow to preserve the temporal smoothness of the synthesized video, but optical flow estimation is sensitive to occlusions and rapid motion. In this work, we instead introduce a novel evolve-sync loss computed from evolvements to replace optical flow. Using this evolve-sync loss, we build an adversarial learning framework, termed the Video Style Transfer Generative Adversarial Network (VST-GAN), which extends the MGAN image style transfer method to efficient video style transfer. We perform extensive experimental evaluations of our method and show quantitative and qualitative improvements over state-of-the-art methods.
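To make the idea of replacing an optical-flow warping term with an evolvement-based temporal term concrete, here is a minimal PyTorch sketch. It assumes the evolve-sync loss compares frame-to-frame changes ("evolvements") of the stylized video against those of the content video; the function name `evolve_sync_loss` and its exact form are illustrative assumptions, not the paper's definition.

```python
# Hypothetical sketch (not the authors' code): one plausible reading of an
# "evolve-sync" temporal loss, where the frame-to-frame change (evolvement)
# of the stylized video is encouraged to track that of the content video,
# avoiding explicit optical-flow estimation.
import torch
import torch.nn.functional as F


def evolve_sync_loss(stylized_prev: torch.Tensor,
                     stylized_curr: torch.Tensor,
                     content_prev: torch.Tensor,
                     content_curr: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between stylized and content evolvements.

    All tensors are (N, C, H, W). Illustrative formulation only; the paper
    defines its own evolve-sync loss.
    """
    stylized_evolvement = stylized_curr - stylized_prev
    content_evolvement = content_curr - content_prev
    return F.mse_loss(stylized_evolvement, content_evolvement)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for consecutive video frames.
    c_prev, c_curr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    s_prev, s_curr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(evolve_sync_loss(s_prev, s_curr, c_prev, c_curr).item())
```

In a full adversarial setup, a term like this would be added to the generator's objective alongside the style and adversarial losses, in place of an optical-flow warping loss.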
Pages: 232-248
Number of pages: 17