A Motion Refinement Network With Local Compensation for Video Frame Interpolation

Times Cited: 0
Authors
Wang, Kaiqiao [1 ]
Liu, Peng [1 ]
Affiliations
[1] Chinese Academy of Sciences, Institute of Acoustics, Hainan Acoustics Laboratory, Haikou 570100, People's Republic of China
Keywords
Optical flow; Interpolation; Feature extraction; Estimation; Optical distortion; Computational modeling; Motion control; Video frame interpolation; Comprehensive contextual feature extraction; Motion-guided feature fusion; Local compensation; Motion refinement network
DOI
10.1109/ACCESS.2023.3317515
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Video frame interpolation (VFI) is a challenging yet promising task that synthesizes intermediate frames from two given frames. State-of-the-art approaches have made significant progress by directly synthesizing images using forward optical flow. However, these methods often suffer from occlusion and pixel blurring when handling scenes with large motion. To address these challenges, this paper proposes a novel approach based on the concept of local compensation, which yields more refined optical flow estimates and, in turn, higher-quality interpolation results. Specifically, we introduce two modules, the Comprehensive Contextual Feature Extraction (CCFE) module and the Motion-Guided Feature Fusion (MGFF) module, to enable local compensation of the optical flow estimation. The CCFE module is embedded in each layer of the image pyramid and encourages the model to extract clean and sufficiently rich contextual information from the input images. The MGFF module guides the fusion of multi-source features using motion features, making the feature fusion for moving objects more precise and thereby providing local compensation for the optical flow estimation. Extensive experiments demonstrate that incorporating the proposed modules into the baseline network significantly enhances video frame interpolation performance.
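For illustration only, the sketch below shows one plausible way a motion-guided feature-fusion block of the kind described in the abstract could be written in PyTorch. The class name MotionGuidedFusion, the channel sizes, and the sigmoid gating design are assumptions made for this sketch; they are not taken from the paper's actual MGFF implementation.

```python
# Hypothetical sketch of motion-guided feature fusion.
# Names, channel sizes, and the gating design are assumptions, not the authors' code.
import torch
import torch.nn as nn


class MotionGuidedFusion(nn.Module):
    """Fuse context features from both input frames, gated by motion (flow) features."""

    def __init__(self, ctx_channels: int = 64, flow_channels: int = 4):
        super().__init__()
        # Motion branch: turn the bidirectional flow into a per-pixel gate in [0, 1].
        self.motion_gate = nn.Sequential(
            nn.Conv2d(flow_channels, ctx_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Fusion branch: merge the two gated context sources.
        self.fuse = nn.Conv2d(2 * ctx_channels, ctx_channels, kernel_size=3, padding=1)

    def forward(self, ctx0: torch.Tensor, ctx1: torch.Tensor,
                flow: torch.Tensor) -> torch.Tensor:
        gate = self.motion_gate(flow)  # (B, C, H, W), emphasizes moving regions
        fused = self.fuse(torch.cat([ctx0 * gate, ctx1 * (1.0 - gate)], dim=1))
        return fused


if __name__ == "__main__":
    # Toy usage: 64-channel context maps from both frames and a 4-channel
    # bidirectional flow field at the same resolution.
    ctx0 = torch.randn(1, 64, 128, 128)
    ctx1 = torch.randn(1, 64, 128, 128)
    flow = torch.randn(1, 4, 128, 128)
    print(MotionGuidedFusion()(ctx0, ctx1, flow).shape)  # torch.Size([1, 64, 128, 128])
```

In this sketch the flow field decides, per pixel, how much each frame's context contributes to the fused feature, which is one simple way to make fusion more precise around moving objects.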
Pages: 103092-103101
Number of pages: 10