Event-driven Video Frame Synthesis

Cited by: 35
Authors
Wang, Zihao W. [1 ]
Jiang, Weixin [1 ]
He, Kuan [1 ]
Shi, Boxin [2 ]
Katsaggelos, Aggelos [1 ]
Cossairt, Oliver [1 ]
Affiliations
[1] Northwestern Univ, Evanston, IL 60208 USA
[2] Peking Univ, Beijing, Peoples R China
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW) | 2019
Keywords
INTERPOLATION;
DOI
10.1109/ICCVW.2019.00532
CLC classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Temporal Video Frame Synthesis (TVFS) aims at synthesizing novel frames at timestamps different from existing frames, which has wide applications in video coding, editing and analysis. In this paper, we propose a high frame-rate TVFS framework which takes hybrid input data from a low-speed frame-based sensor and a high-speed event-based sensor. Compared to frame-based sensors, event-based sensors report brightness changes at very high speed, which may well provide useful spatio-temporal information for high frame-rate TVFS. Therefore, we first introduce a differentiable fusion model to approximate the dual-modal physical sensing process, unifying a variety of TVFS scenarios, e.g., interpolation, prediction and motion deblur. Our differentiable model enables iterative optimization of the latent video tensor via autodifferentiation, which propagates the gradients of a loss function defined on the measured data. Our differentiable model-based reconstruction does not involve training, yet is parallelizable and can be implemented on machine learning platforms (such as TensorFlow). Second, we develop a deep learning strategy to enhance the results from the first step, which we refer to as a residual "denoising" process. Our trained "denoiser" goes beyond Gaussian denoisers and shows properties such as contrast enhancement and motion awareness. We show that our framework is capable of handling challenging scenes including both fast motion and strong occlusions. Demo and code are released at: https://github.com/winswang/int-event-fusion/tree/win10.
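The abstract's first stage, training-free reconstruction of a latent video tensor by gradient descent through a differentiable sensing model, can be sketched in a toy form. This is not the authors' code: the sensing models (temporal mean as the blurred frame, frame differences as event measurements), the quadratic loss, and all names (`T`, `H`, `W`, `lam`, `lr`) are illustrative assumptions, and the analytic gradient stands in for the autodifferentiation a platform like TensorFlow would provide.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16               # frames, height, width (toy sizes)
V_true = rng.random((T, H, W))    # ground-truth latent video tensor

# Simplified dual-modal measurements:
blur = V_true.mean(axis=0)        # frame sensor: temporal average (motion blur)
events = np.diff(V_true, axis=0)  # event sensor: inter-frame brightness changes

lam = 1.0   # weight balancing the two data-fidelity terms
lr = 0.1    # gradient-descent step size
V = np.broadcast_to(blur, (T, H, W)).copy()   # initialize from the blurred frame

def loss_and_grad(V):
    """Quadratic loss on both measurements and its analytic gradient."""
    r_f = V.mean(axis=0) - blur            # frame residual
    r_e = np.diff(V, axis=0) - events      # event residual
    loss = (r_f ** 2).sum() + lam * (r_e ** 2).sum()
    # gradient of the frame term: each frame contributes 1/T to the mean
    g = np.broadcast_to(2.0 * r_f / T, V.shape).copy()
    # adjoint of np.diff: scatter +/- event residuals onto neighboring frames
    g[1:] += 2.0 * lam * r_e
    g[:-1] -= 2.0 * lam * r_e
    return loss, g

losses = []
for _ in range(1000):
    l, g = loss_and_grad(V)
    losses.append(l)
    V -= lr * g
```

In this toy setting the T - 1 difference equations plus the mean constraint determine each pixel's trajectory uniquely, so the iteration recovers `V_true`; the paper's actual fusion model is richer and handles interpolation, prediction, and deblurring under one formulation.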
Pages: 4320 - 4329
Page count: 10
Related papers
50 records in total
  • [11] Multi-Scale Warping for Video Frame Interpolation
    Choi, Whan
    Koh, Yeong Jun
    Kim, Chang-Su
    IEEE ACCESS, 2021, 9 : 150470 - 150479
  • [12] Robust Video Frame Interpolation With Exceptional Motion Map
    Park, Minho
    Kim, Hak Gu
    Lee, Sangmin
    Ro, Yong Man
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (02) : 754 - 764
  • [13] Advancing Learned Video Compression With In-Loop Frame Prediction
    Yang, Ren
    Timofte, Radu
    Van Gool, Luc
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (05) : 2410 - 2423
  • [14] Flow Guidance Deformable Compensation Network for Video Frame Interpolation
    Lei, Pengcheng
    Fang, Faming
    Zeng, Tieyong
    Zhang, Guixu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1801 - 1812
  • [15] A Motion Refinement Network With Local Compensation for Video Frame Interpolation
    Wang, Kaiqiao
    Liu, Peng
    IEEE ACCESS, 2023, 11 : 103092 - 103101
  • [16] A Temporally-Aware Interpolation Network for Video Frame Inpainting
    Szeto, Ryan
    Sun, Ximeng
    Lu, Kunyi
    Corso, Jason J.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (05) : 1053 - 1068
  • [17] Residual Learning of Video Frame Interpolation Using Convolutional LSTM
    Suzuki, Keito
    Ikehara, Masaaki
    IEEE ACCESS, 2020, 8 : 134185 - 134193
  • [18] Synchronization-sensitive frame estimation: Video quality enhancement
    Aly, SG
    Youssef, A
    MULTIMEDIA TOOLS AND APPLICATIONS, 2002, 17 (2-3) : 233 - 255
  • [20] Fine-Grained Motion Estimation for Video Frame Interpolation
    Yan, Bo
    Tan, Weimin
    Lin, Chuming
    Shen, Liquan
    IEEE TRANSACTIONS ON BROADCASTING, 2021, 67 (01) : 174 - 184