Local and nonlocal flow-guided video inpainting

Cited by: 0
Authors
Jing Wang
Zongju Yang
Zhanqiang Huo
Wei Chen
Affiliations
[1] School of Software, Henan Polytechnic University
Source
Multimedia Tools and Applications | 2024, Vol. 83
Keywords
Video inpainting; Deep flow completion network; Sample window; Local and nonlocal; Edge completion
DOI
Not available
Abstract
Video inpainting aims to generate plausible content to fill in the missing regions of a video. A video is a four-dimensional signal (two spatial dimensions, channels, and time) that is continuous along the temporal dimension, so inpainting each frame independently makes it difficult to preserve temporal coherence. Video inpainting has progressed from traditional algorithms to learning-based methods and can now handle a variety of scenes, yet open problems remain and the task is still challenging. Existing works focus on object removal and neglect the importance of inpainting occluded scenes in the middle region of the frame. To address this middle-region occlusion problem, we propose a local and nonlocal flow-guided video inpainting framework. First, according to the forward and backward directions of the reference frame and a sampling window, we divide the video into local and nonlocal frames, extract local and nonlocal optical flows, and feed them to a residual network for coarse completion. Next, our approach extracts and completes the edges of the predicted flow. Finally, the composed optical flow field guides pixel propagation to inpaint the video content. Experimental results on the DAVIS and YouTube-VOS datasets show that our method significantly improves both image quality and optical flow quality compared with the state of the art. Code is available at https://github.com/lengfengio/LNFVI.git
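The sampling step in the abstract can be illustrated with a minimal Python sketch that splits frame indices into local frames (inside a sampling window in the forward and backward directions of the reference frame) and nonlocal frames (a subsample of the remaining frames). The window size, stride, and function name here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the local/nonlocal frame split described in the
# abstract. Window size and nonlocal stride are assumed, not the paper's.

def split_local_nonlocal(num_frames, ref_idx, window=2, nonlocal_stride=10):
    """Return (local, nonlocal) frame indices for one reference frame."""
    # Local frames: neighbors inside the sampling window, in both the
    # forward and backward directions of the reference frame.
    local = [i for i in range(ref_idx - window, ref_idx + window + 1)
             if 0 <= i < num_frames and i != ref_idx]
    # Nonlocal frames: a strided subsample of the remaining frames.
    nonlocal_frames = [i for i in range(0, num_frames, nonlocal_stride)
                       if i not in local and i != ref_idx]
    return local, nonlocal_frames


if __name__ == "__main__":
    loc, nonloc = split_local_nonlocal(num_frames=60, ref_idx=30)
    print("local:", loc)        # [28, 29, 31, 32]
    print("nonlocal:", nonloc)  # [0, 10, 20, 40, 50]
```

The final flow-guided propagation step can likewise be sketched as warping known pixels into the missing region along the completed flow. This nearest-neighbor version, with assumed array shapes (flow of shape H×W×2, boolean mask), only illustrates the general idea of flow-guided pixel propagation, not the paper's actual propagation scheme.

```python
import numpy as np

def propagate_pixels(target, source, flow, mask):
    """Fill masked pixels of `target` by following `flow` into `source`.

    flow[y, x] = (dx, dy) points from the target frame to the source
    frame; mask marks the missing pixels. Nearest-neighbor sampling only.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    # Follow the flow to source-frame coordinates, clamped to the image.
    src_x = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    out = target.copy()
    out[ys, xs] = source[src_y, src_x]  # copy pixels along the flow
    return out
```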
Pages: 10321-10340
Number of pages: 19