Efficient Spatio-Temporal Feature Extraction Recurrent Neural Network for Video Deblurring

Authors
Pu Z. [1 ]
Ma W. [1 ]
Mi Q. [1 ]
Affiliations
[1] Faculty of Information Technology, Beijing University of Technology, Beijing
Keywords
attention mechanism; feature extraction; recurrent neural network; video deblurring;
DOI
10.3724/SP.J.1089.2023.19685
Abstract
Existing recurrent neural network-based video deblurring methods are limited in cross-frame feature aggregation and computational efficiency, so an efficient spatio-temporal feature extraction recurrent neural network is proposed. First, a residual dense module is combined with a channel attention mechanism to efficiently extract discriminative features from each frame of a given sequence. Then, a spatio-temporal feature enhancement and fusion module is proposed to select useful features from the highly redundant, interference-prone sequential features and integrate them into the features of the current frame. Finally, a reconstruction module converts the enhanced features of the current frame into the deblurred image. Quantitative and qualitative results on three public datasets, covering both synthetic and real blurred videos, show that the proposed network achieves excellent video deblurring performance at a lower computational cost. On the GOPRO dataset, the PSNR reaches 31.43 dB and the SSIM reaches 0.9201. © 2023 Institute of Computing Technology. All rights reserved.
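The pipeline the abstract describes (per-frame feature extraction gated by channel attention, then recurrent fusion of the previous frame's hidden features into the current frame) can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea only: the function names, the squeeze-and-excitation-style gate, the fixed random weights, and the simple blending step are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def channel_attention(feat, reduction=4):
    # feat: (C, H, W). Squeeze: global average pool per channel.
    c = feat.shape[0]
    s = feat.mean(axis=(1, 2))                      # (C,) channel descriptor
    # Hypothetical two-layer excitation with fixed random weights (illustrative only).
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    z = np.maximum(w1 @ s, 0)                       # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))             # sigmoid gate in (0, 1)
    return feat * a[:, None, None]                  # reweight channels

def fuse_with_hidden(cur, hidden, alpha=0.7):
    # Toy stand-in for the spatio-temporal enhancement-and-fusion step:
    # gate the previous hidden state before blending it with the current frame.
    return alpha * cur + (1 - alpha) * channel_attention(hidden)

# Recurrent pass over a short synthetic sequence of feature maps.
frames = [np.random.default_rng(i).standard_normal((8, 16, 16)) for i in range(3)]
hidden = np.zeros_like(frames[0])
for f in frames:
    hidden = fuse_with_hidden(channel_attention(f), hidden)
print(hidden.shape)  # (8, 16, 16)
```

In the paper's actual network, the excitation weights would be learned, and the fusion module selects features from neighboring frames rather than simply blending a single hidden state; the reconstruction module (omitted here) would map the fused features back to an image.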
Pages: 1720-1730 (10 pages)