Bringing Events into Video Deblurring with Non-consecutively Blurry Frames

Cited: 43
Authors
Shang, Wei [1 ]
Ren, Dongwei [1 ]
Zou, Dongqing [2 ,5 ]
Ren, Jimmy S. [2 ,5 ]
Luo, Ping [3 ]
Zuo, Wangmeng [1 ,4 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
[2] SenseTime Res, Hong Kong, Peoples R China
[3] Univ Hong Kong, Hong Kong, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
[5] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICCV48922.2021.00449
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, video deblurring has attracted considerable research attention, and several works suggest that events captured at a high temporal rate can benefit deblurring. Existing video deblurring methods assume consecutively blurry frames, neglecting the fact that sharp frames usually appear near blurry frames. In this paper, we develop a principled framework, D2Nets, for video deblurring that exploits non-consecutively blurry frames, and propose a flexible event fusion module (EFM) to bridge the gap between event-driven and video deblurring. In D2Nets, we propose to first detect the nearest sharp frames (NSFs) using a bidirectional LSTM detector, and then perform deblurring guided by the NSFs. Furthermore, the proposed EFM can be flexibly incorporated into D2Nets, where events can be leveraged to notably boost deblurring performance. EFM can also be easily incorporated into existing deblurring networks, allowing the event-driven deblurring task to benefit from state-of-the-art deblurring methods. On synthetic and real-world blurry datasets, our methods achieve better results than competing methods, and EFM not only benefits D2Nets but also significantly improves the competing deblurring networks.
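The event fusion module (EFM) described in the abstract consumes asynchronous events alongside video frames. The record does not specify how events are represented before fusion; a common choice in event-driven vision is a spatio-temporal voxel grid with bilinear weighting along the time axis. A minimal NumPy sketch under that assumption — the function name and the (timestamp, x, y, polarity) row format are illustrative, not the authors' API:

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a (num_bins, height, width) voxel grid.

    Illustrative sketch of a common event representation; the EFM's
    internals are not specified in this record.
    events: array of rows (timestamp, x, y, polarity), polarity in {+1, -1}.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    # Normalize timestamps to [0, num_bins - 1]; guard against zero duration.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for (_t, x, y, p), tn in zip(events, t_norm):
        b0 = int(np.floor(tn))       # lower temporal bin
        w1 = tn - b0                 # bilinear weight toward the upper bin
        pol = 1.0 if p > 0 else -1.0
        voxel[b0, int(y), int(x)] += pol * (1.0 - w1)
        if b0 + 1 < num_bins:
            voxel[b0 + 1, int(y), int(x)] += pol * w1
    return voxel
```

Because the two bilinear weights sum to one, the grid conserves the total signed polarity, which makes it a convenient dense input for a fusion module operating on frame-shaped tensors.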
Pages: 4511-4520 (10 pages)