Efficient Spatio-Temporal Feature Extraction Recurrent Neural Network for Video Deblurring

Cited: 0
Authors
Pu Z. [1 ]
Ma W. [1 ]
Mi Q. [1 ]
Affiliations
[1] Faculty of Information Technology, Beijing University of Technology, Beijing
Keywords
attention mechanism; feature extraction; recurrent neural network; video deblurring
DOI
10.3724/SP.J.1089.2023.19685
Abstract
Existing recurrent neural network-based video deblurring methods are limited in both cross-frame feature aggregation and computational efficiency, so an efficient spatio-temporal feature extraction recurrent neural network is proposed. First, a residual dense module is combined with a channel attention mechanism to efficiently extract discriminative features from each frame of a given sequence. Then, a spatio-temporal feature enhancement and fusion module is proposed to select useful information from the highly redundant and interference-prone sequential features and integrate it into the features of the current frame. Finally, a reconstruction module converts the enhanced features of the current frame into the deblurred image. Quantitative and qualitative results on three public datasets, covering both synthetic and real blurred videos, show that the proposed network achieves excellent video deblurring quality at a lower computational cost; on the GOPRO dataset it reaches a PSNR of 31.43 dB and an SSIM of 0.9201. © 2023 Institute of Computing Technology. All rights reserved.
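The abstract describes a per-frame feature extractor that pairs a residual dense module with channel attention. The PyTorch sketch below is only a rough illustration of that kind of design under stated assumptions, not the authors' implementation: the module names, dense-layer count, growth rate, and squeeze-and-excitation style attention with its reduction ratio are all assumptions.

```python
# Minimal sketch of a residual dense block with channel attention for per-frame
# feature extraction. All hyperparameters and module names are hypothetical.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global spatial pooling
            nn.Conv2d(channels, channels // reduction, 1),  # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel excitation
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                               # re-weight channels


class ResidualDenseCABlock(nn.Module):
    """Residual dense block followed by channel attention."""

    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        convs = []
        for i in range(layers):
            convs.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        self.convs = nn.ModuleList(convs)
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # 1x1 fusion
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))     # dense connections
        out = self.fuse(torch.cat(feats, dim=1))
        return x + self.ca(out)                             # attention + residual


if __name__ == "__main__":
    frame_feat = torch.randn(1, 64, 128, 128)               # dummy per-frame features
    block = ResidualDenseCABlock()
    print(block(frame_feat).shape)                          # torch.Size([1, 64, 128, 128])
```

In this sketch the attention re-weighting happens after the local feature fusion so that the residual path stays untouched; whether the paper applies attention at that exact point is not stated in the abstract.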
Pages: 1720-1730
Page count: 10