Event-driven Video Frame Synthesis

Cited by: 35
Authors
Wang, Zihao W. [1 ]
Jiang, Weixin [1 ]
He, Kuan [1 ]
Shi, Boxin [2 ]
Katsaggelos, Aggelos [1 ]
Cossairt, Oliver [1 ]
Affiliations
[1] Northwestern University, Evanston, IL 60208, USA
[2] Peking University, Beijing, China
Source
2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2019
Keywords
Interpolation
DOI
10.1109/ICCVW.2019.00532
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Temporal Video Frame Synthesis (TVFS) aims at synthesizing novel frames at timestamps different from those of existing frames, and has wide applications in video coding, editing, and analysis. In this paper, we propose a high frame-rate TVFS framework that takes hybrid input data from a low-speed frame-based sensor and a high-speed event-based sensor. Compared to frame-based sensors, event-based sensors report brightness changes at very high speed, which can provide useful spatio-temporal information for high frame-rate TVFS. We first introduce a differentiable fusion model that approximates the dual-modal physical sensing process and unifies a variety of TVFS scenarios, e.g., interpolation, prediction, and motion deblurring. The differentiable model enables iterative optimization of the latent video tensor via autodifferentiation, which propagates the gradients of a loss function defined on the measured data. This model-based reconstruction involves no training, yet is parallelizable and can be implemented on machine learning platforms (such as TensorFlow). Second, we develop a deep learning strategy to enhance the results of the first step, which we refer to as a residual "denoising" process. The trained "denoiser" goes beyond Gaussian denoising and exhibits properties such as contrast enhancement and motion awareness. We show that our framework is capable of handling challenging scenes including both fast motion and strong occlusions. Demo and code are released at: https://github.com/winswang/intevent-fusion/tree/win10.
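The abstract's first stage recovers a latent high frame-rate video tensor by gradient descent through a differentiable model of both sensors. Below is a minimal illustrative sketch of that idea in TensorFlow, the platform the abstract names. The simplified forward models (temporal averaging for the frame sensor; consecutive log-intensity differences for the event sensor), the loss, and all tensor names are assumptions made here for illustration, not the authors' released implementation; see the linked repository for the actual code.

import tensorflow as tf

T, H, W = 16, 64, 64  # latent video: T frames of H x W pixels

# Hypothetical measurements: one motion-blurred frame (temporal average of
# the latent video over the exposure) and a stack of per-interval event
# signals (signed log-intensity changes). Random placeholders stand in for
# real sensor data.
blurry_frame = tf.random.uniform((H, W))
event_stack = tf.random.normal((T - 1, H, W)) * 0.1

latent = tf.Variable(tf.random.uniform((T, H, W)))  # video tensor to recover

def frame_model(video):
    # Frame sensor: the exposure integrates (here, averages) the latent video.
    return tf.reduce_mean(video, axis=0)

def event_model(video, eps=1e-3):
    # Event sensor (simplified): differences of log intensity between
    # consecutive latent frames; clipping keeps the log well-defined.
    log_v = tf.math.log(tf.clip_by_value(video, eps, 1.0))
    return log_v[1:] - log_v[:-1]

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(200):
    with tf.GradientTape() as tape:
        # Data-fidelity loss on both modalities; autodifferentiation
        # propagates its gradients back to the latent video tensor.
        loss = (tf.reduce_mean(tf.square(frame_model(latent) - blurry_frame))
                + tf.reduce_mean(tf.square(event_model(latent) - event_stack)))
    grads = tape.gradient(loss, [latent])
    opt.apply_gradients(zip(grads, [latent]))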
Pages: 4320-4329
Page count: 10
Related Papers
50 records in total
  • [1] Nam, J.; Tewfik, A. H. Event-Driven Video Abstraction and Visualization. Multimedia Tools and Applications, 2002, 16(1-2): 55-77.
  • [2] Fu, X.; Cao, C.; Xu, S.; Zhang, F.; Wang, K.; Zha, Z.-J. Event-Driven Heterogeneous Network for Video Deraining. International Journal of Computer Vision, 2024, 132(12): 5841-5861.
  • [3] Susli, M.; Boussaid, F.; Shoushun, C.; Bermak, A. Efficient Event-Driven Frame Capture for CMOS Imagers. 2007 9th International Symposium on Signal Processing and Its Applications, 2007: 1330+.
  • [4] Georgakopoulos, D.; Baker, D.; Nodine, M.; Cichoki, A. Event-Driven Video Awareness Providing Physical Security. World Wide Web, 2007, 10(1): 85-109.
  • [5] Sun, S.; Gong, X. Event-Driven Weakly Supervised Video Anomaly Detection. Image and Vision Computing, 2024, 149.
  • [6] Doulamis, A. Event-Driven Video Adaptation: A Powerful Tool for Industrial Video Supervision. Multimedia Tools and Applications, 2014, 69(2): 339-358.
  • [7] Jiang, Z.; Xia, P.; Huang, K.; Stechele, W.; Chen, G.; Bing, Z.; Knoll, A. Mixed Frame-/Event-Driven Fast Pedestrian Detection. 2019 International Conference on Robotics and Automation (ICRA), 2019: 8332-8338.