A Motion Distillation Framework for Video Frame Interpolation

Times Cited: 0
Authors
Zhou, Shili [1 ]
Tan, Weimin [1 ]
Yan, Bo [1 ]
Affiliations
[1] Fudan Univ, Shanghai Collaborat Innovat Ctr Intelligent Visual, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 201203, Peoples R China
Keywords
Training; computational modeling; optical flow; kernel; interpolation; motion estimation; correlation; deep learning; frame interpolation; knowledge distillation; image quality; enhancement
DOI
10.1109/TMM.2023.3314971
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In recent years, deep video enhancement models have achieved notable success. However, performance gains from new methods have gradually plateaued: optimizing model structures or adding training data yields diminishing returns. We argue that existing models with advanced structures have not yet realized their full potential and warrant further exploration. In this study, we statistically analyze the relationship between motion estimation accuracy and interpolation quality in existing video frame interpolation methods, and find that supervising only the final output leads to inaccurate motion estimates, which in turn degrade interpolation performance. Based on this observation, we propose a general motion distillation framework that can be widely applied to both flow-based and kernel-based video frame interpolation methods. Specifically, we first train a teacher model that uses the ground-truth target frame together with its adjacent frames to estimate motion. These motion estimates then guide the training of a student model for video frame interpolation. Experimental results demonstrate that this approach improves performance across diverse advanced video interpolation architectures. For example, after applying our motion distillation framework, the CtxSyn model achieves a PSNR gain of 3.047 dB.
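The abstract describes a two-stage scheme: a teacher motion estimator that is allowed to see the ground-truth intermediate frame, and a student interpolation model supervised by both the ground-truth frame and the teacher's motion. The sketch below is a minimal, hypothetical PyTorch illustration of one such training step; the module names (TeacherMotionNet, StudentInterpNet), the tiny convolutional heads, the bidirectional-flow representation, and the L1 distillation loss with weight lam are illustrative assumptions, not the authors' architecture or loss.

```python
# Hypothetical sketch of motion distillation for frame interpolation.
# Not the paper's implementation; shapes assume [B, 3, H, W] RGB frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherMotionNet(nn.Module):
    """Estimates motion with the ground-truth middle frame as privileged input."""
    def __init__(self):
        super().__init__()
        # 3 frames x 3 channels -> 4 channels (bidirectional 2D flow).
        self.net = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),
        )

    def forward(self, frame0, frame_gt, frame1):
        return self.net(torch.cat([frame0, frame_gt, frame1], dim=1))

class StudentInterpNet(nn.Module):
    """Predicts motion and the interpolated frame from the two input frames only."""
    def __init__(self):
        super().__init__()
        self.flow_head = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),
        )
        self.synth_head = nn.Sequential(
            nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame0, frame1):
        flow = self.flow_head(torch.cat([frame0, frame1], dim=1))
        pred = self.synth_head(torch.cat([frame0, frame1, flow], dim=1))
        return flow, pred

def distillation_step(teacher, student, optimizer,
                      frame0, frame_gt, frame1, lam=1.0):
    """One training step: reconstruction loss + motion-distillation loss."""
    with torch.no_grad():                       # teacher is pre-trained and frozen
        teacher_flow = teacher(frame0, frame_gt, frame1)
    student_flow, pred = student(frame0, frame1)
    loss = F.l1_loss(pred, frame_gt) + lam * F.l1_loss(student_flow, teacher_flow)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors:
# teacher, student = TeacherMotionNet(), StudentInterpNet()
# opt = torch.optim.Adam(student.parameters(), lr=1e-4)
# f0, fgt, f1 = (torch.rand(2, 3, 64, 64) for _ in range(3))
# print(distillation_step(teacher, student, opt, f0, fgt, f1))
```

At test time only the student is run, since the ground-truth middle frame the teacher relies on is unavailable; this is the usual privileged-information motivation for distillation setups of this kind.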
Pages: 3728-3740
Number of pages: 13