Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring

Cited by: 2
Authors
Zhang, Huicong [1]
Xie, Haozhe [2]
Yao, Hongxun [1]
Affiliations
[1] Harbin Inst Technol, Harbin, Peoples R China
[2] Nanyang Technol Univ, S Lab, Singapore, Singapore
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024 | 2024
DOI
10.1109/CVPR52733.2024.00258
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Video deblurring relies on leveraging information from other frames in the video sequence to restore the blurred regions in the current frame. Mainstream approaches employ bidirectional feature propagation, spatio-temporal transformers, or a combination of both to extract information from the video sequence. However, limitations in memory and computational resources constrain the temporal window length of the spatio-temporal transformer, preventing the extraction of longer-range temporal context from the video sequence. Additionally, bidirectional feature propagation is highly sensitive to inaccurate optical flow in blurry frames, leading to error accumulation during the propagation process. To address these issues, we propose BSSTNet, a Blur-aware Spatio-temporal Sparse Transformer Network. It introduces the blur map, which converts the originally dense attention into a sparse form, enabling more extensive utilization of information throughout the entire video sequence. Specifically, BSSTNet (1) uses a longer temporal window in the transformer, leveraging information from more distant frames to restore the blurry pixels in the current frame, and (2) introduces bidirectional feature propagation guided by blur maps, which reduces error accumulation caused by blurry frames. The experimental results demonstrate that the proposed BSSTNet outperforms state-of-the-art methods on the GoPro and DVD datasets.
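The abstract only sketches the core idea: a per-pixel blur map turns dense spatio-temporal attention into sparse attention over a long window. Below is a minimal NumPy sketch of that idea under stated assumptions — tokens are pre-flattened spatio-temporal features, the blur map is a per-token blurriness score, and the top-k sharpest tokens serve as the sparse key/value set. The function name, tensor shapes, and selection rule are illustrative assumptions, not BSSTNet's actual implementation.

```python
import numpy as np

def blur_sparse_attention(tokens, blur_map, top_k=4):
    """Hypothetical sketch of blur-map-guided sparse attention.

    tokens:   (N, C) flattened spatio-temporal features for the window
    blur_map: (N,)   per-token blurriness in [0, 1] (1 = most blurry)

    Instead of dense all-to-all attention over N tokens, every query
    attends only to the top_k sharpest tokens in the whole window, so
    blurry pixels borrow information from sharp, possibly distant frames.
    """
    # Keys/values: indices of the top_k sharpest (lowest-blur) tokens.
    sharp_idx = np.argsort(blur_map)[:top_k]
    k = v = tokens[sharp_idx]                            # (top_k, C)

    # Scaled dot-product attention, sparsified to top_k keys.
    scores = tokens @ k.T / np.sqrt(tokens.shape[1])     # (N, top_k)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ v                                   # (N, C)
```

Because the key/value set is fixed at top_k regardless of window length, the attention cost grows linearly with N rather than quadratically, which is what allows a longer temporal window under the same memory budget.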
Pages: 2673 - 2681
Page count: 9