Edge-Aware Network for Flow-Based Video Frame Interpolation

Cited by: 10
Authors
Zhao, Bin [1 ]
Li, Xuelong [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Interpolation; Estimation; Task analysis; Kernel; Image edge detection; Computer architecture; Training; Adversarial training; edge-aware; flow estimation; video frame interpolation;
DOI
10.1109/TNNLS.2022.3178281
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video frame interpolation up-converts the frame rate and enhances video quality. Although interpolation performance has improved greatly in recent years, image blur still occurs at object boundaries owing to large motion. This long-standing problem has not been fully addressed. In this brief, we propose to reduce image blur and obtain clear object shapes by preserving the edges in the interpolated frames. To this end, the proposed edge-aware network (EA-Net) integrates edge information into the frame interpolation task. It follows an end-to-end architecture that can be separated into two stages, i.e., edge-guided flow estimation and edge-protected frame synthesis. Specifically, in the flow estimation stage, three edge-aware mechanisms are developed to emphasize frame edges when estimating flow maps, so that the edge maps serve as auxiliary information that guides the estimation and boosts flow accuracy. In the frame synthesis stage, a flow refinement module is designed to refine the flow maps, and an attention module adaptively weights the bidirectional flow maps when synthesizing the intermediate frames. Furthermore, frame and edge discriminators are adopted for adversarial training, enhancing the realism and clarity of the synthesized frames. Experiments on three benchmarks, Vimeo90k and UCF101 for single-frame interpolation and Adobe240-fps for multiframe interpolation, demonstrate the superiority of the proposed EA-Net for the video frame interpolation task.
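The second stage the abstract describes, synthesizing an intermediate frame by warping the two input frames along bidirectional flow maps and blending them with per-pixel attention weights, can be sketched in simplified form. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the function names `warp` and `synthesize`, and the nearest-neighbor sampling, are simplifications chosen for brevity.

```python
import numpy as np

def warp(frame, flow):
    # Backward-warp `frame` (H, W, C) by per-pixel `flow` (H, W, 2),
    # using nearest-neighbor sampling clamped to the image border.
    H, W = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

def synthesize(frame0, frame1, flow_t0, flow_t1, attention):
    # Warp both input frames toward time t, then blend them with
    # per-pixel attention weights in [0, 1] (1 favors frame0).
    warped0 = warp(frame0, flow_t0)
    warped1 = warp(frame1, flow_t1)
    return attention[..., None] * warped0 + (1.0 - attention[..., None]) * warped1

# Toy check: zero flow and uniform 0.5 attention give the pixelwise average.
f0 = np.zeros((4, 4, 3))
f1 = np.full((4, 4, 3), 2.0)
zero_flow = np.zeros((4, 4, 2))
mid = synthesize(f0, f1, zero_flow, zero_flow, np.full((4, 4), 0.5))
```

In the paper's full pipeline, the flow maps come from the edge-guided estimator and the attention weights from a learned module; here both are supplied directly to keep the sketch self-contained.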
Pages: 1401-1408
Page count: 8