A Motion Distillation Framework for Video Frame Interpolation

Cited: 0
Authors
Zhou, Shili [1 ]
Tan, Weimin [1 ]
Yan, Bo [1 ]
Affiliations
[1] Fudan Univ, Shanghai Collaborat Innovat Ctr Intelligent Visual, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 201203, Peoples R China
Keywords
Training; Computational modeling; Optical flow; Kernel; Interpolation; Motion estimation; Correlation; Deep learning; frame interpolation; knowledge distillation; optical flow; IMAGE QUALITY; OPTICAL-FLOW; ENHANCEMENT;
DOI
10.1109/TMM.2023.3314971
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
In recent years, deep video enhancement models have achieved notable success. However, the performance of new methods has gradually plateaued: optimizing model structures or adding training data yields diminishing returns. We argue that existing models with advanced structures have not fully realized their potential and warrant further exploration. In this study, we statistically analyze the relationship between motion estimation accuracy and interpolation quality in existing video frame interpolation methods, and find that supervising only the final output leads to inaccurate motion, which in turn degrades interpolation performance. Based on this observation, we propose a general motion distillation framework that applies to both flow-based and kernel-based video frame interpolation methods. Specifically, we first train a teacher model that uses the ground-truth target frame together with the adjacent frames to estimate motion. These motion estimates then guide the training of a student model for video frame interpolation. Experimental results demonstrate that this approach improves performance across diverse advanced video interpolation architectures; for example, with our motion distillation framework, the CtxSyn model achieves a PSNR gain of 3.047 dB.
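The training objective implied by the abstract can be illustrated with a minimal sketch: the student is supervised both by the interpolated frame and by the teacher's (privileged, ground-truth-informed) motion estimate. The function name, the weighting parameter `lam`, and the toy tensors below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def motion_distillation_loss(student_flow, teacher_flow, pred_frame, gt_frame, lam=0.1):
    """Hedged sketch of a motion-distillation objective.

    The teacher sees the ground-truth intermediate frame, so its motion
    is assumed more accurate; the student is supervised by both the
    final interpolated frame and the teacher's flow.
    """
    # Reconstruction term: supervise the interpolated frame directly.
    l_recon = np.abs(pred_frame - gt_frame).mean()
    # Distillation term: pull the student's motion toward the teacher's.
    l_distill = np.abs(student_flow - teacher_flow).mean()
    return l_recon + lam * l_distill

# Toy tensors: 2-channel flow fields and 3-channel frames at 4x4 resolution.
rng = np.random.default_rng(0)
student_flow = rng.standard_normal((2, 4, 4))
teacher_flow = student_flow + 0.5      # student flow offset from teacher by 0.5
pred_frame = np.zeros((3, 4, 4))
gt_frame = np.ones((3, 4, 4))          # prediction off by 1.0 everywhere

loss = motion_distillation_loss(student_flow, teacher_flow, pred_frame, gt_frame, lam=0.1)
# l_recon = 1.0, l_distill = 0.5, so loss = 1.0 + 0.1 * 0.5 = 1.05
```

In practice the distillation term would be computed per pyramid level (for flow-based models) or on the adaptive kernels themselves (for kernel-based models), but the teacher-guided supervision structure is the same.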
Pages: 3728-3740
Page count: 13
Related Papers
50 records in total
  • [41] JNMR: Joint Non-Linear Motion Regression for Video Frame Interpolation
    Liu, M.
    Xu, C.
    Yao, C.
    Lin, C.
    Zhao, Y.
    IEEE Transactions on Image Processing, 2023, 32: 5283-5295
  • [42] Angularly Consistent Light Field Video Interpolation
    David, Pierre
    Le Pendu, Mikael
    Guillemot, Christine
    2020 IEEE International Conference on Multimedia and Expo (ICME), 2020
  • [43] Video Frame Interpolation via Generalized Deformable Convolution
    Shi, Zhihao
    Liu, Xiaohong
    Shi, Kangdi
    Dai, Linhui
    Chen, Jun
    IEEE Transactions on Multimedia, 2022, 24: 426-439
  • [44] Video Frame Interpolation With Stereo Event and Intensity Cameras
    Ding, Chao
    Lin, Mingyuan
    Zhang, Haijian
    Liu, Jianzhuang
    Yu, Lei
    IEEE Transactions on Multimedia, 2024, 26: 9187-9202
  • [45] Edge-Aware Network for Flow-Based Video Frame Interpolation
    Zhao, Bin
    Li, Xuelong
    IEEE Transactions on Neural Networks and Learning Systems, 2024, 35 (01): 1401-1408
  • [46] ATCA: An Arc Trajectory Based Model with Curvature Attention for Video Frame Interpolation
    Liu, Jinfeng
    Kong, Lingtong
    Yang, Jie
    2022 IEEE International Conference on Image Processing (ICIP), 2022: 1486-1490
  • [47] UPR-Net: A Unified Pyramid Recurrent Network for Video Frame Interpolation
    Jin, Xin
    Wu, Longhai
    Chen, Jie
    Chen, Youxin
    Koo, Jayoon
    Hahm, Cheul-Hee
    Chen, Zhao-Min
    International Journal of Computer Vision, 2025, 133 (01): 16-30
  • [48] Softmax Splatting for Video Frame Interpolation
    Niklaus, Simon
    Liu, Feng
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 5436-5445
  • [49] XVFI: eXtreme Video Frame Interpolation
    Sim, Hyeonjun
    Oh, Jihyong
    Kim, Munchurl
    2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 14469-14478
  • [50] A Concatenated Model for Video Frame Interpolation
    Chen, Ying
    Smith, Mark J. T.
    2009 IEEE 13th Digital Signal Processing Workshop & 5th IEEE Signal Processing Education Workshop, 2009: 565-569