STD-Net: Spatio-Temporal Decomposition Network for Video Demoireing With Sparse Transformers

Times Cited: 0
Authors
Niu, Yuzhen [1 ,2 ]
Xu, Rui [1 ,2 ]
Lin, Zhihua [3 ]
Liu, Wenxi [1 ,2 ]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fujian Key Lab Network Comp & Intelligent Informa, Fuzhou 350108, Peoples R China
[2] Minist Educ, Engn Res Ctr Bigdata Intelligence, Fuzhou 350108, Peoples R China
[3] Res Inst Alipay Informat Technol Co Ltd, Hangzhou 310000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image restoration; video demoireing; video restoration; spatio-temporal network; sparse transformer; QUALITY ASSESSMENT; IMAGE;
DOI
10.1109/TCSVT.2024.3386604
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
The problem of video demoireing is a new challenge in video restoration. Unlike image demoireing, which involves removing static and uniform patterns, video demoireing requires tackling dynamic and varied moire patterns while preserving video details, colors, and temporal consistency. Modeling moire patterns is particularly challenging for videos with camera or object motion, where separating moire from the original video content across frames is extremely difficult. Nonetheless, we observe that the spatial distribution of moire patterns on each frame is often sparse, and their long-range temporal correlation is not significant. To fully leverage this phenomenon, a sparsity-constrained spatial self-attention scheme is proposed to efficiently remove sparse moire patterns on each frame without being distracted by dynamic video content. The frame-wise spatial features are then correlated and aggregated via a local temporal cross-frame-attention module to produce temporally consistent, high-quality moire-free videos. These decoupled spatial and temporal transformers constitute the Spatio-Temporal Decomposition Network, dubbed STD-Net. For evaluation, we present a large-scale video demoireing benchmark featuring various real-life scenes, camera motions, and object motions. We demonstrate that our proposed model effectively and efficiently achieves superior performance on both video demoireing and single-image demoireing tasks. The proposed dataset is released at https://github.com/FZU-N/LVDM.
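One common way to realize the sparsity-constrained self-attention the abstract describes is to keep only the top-k attention scores per query and mask the rest before the softmax, so each location attends to a sparse set of positions. The sketch below is a hypothetical NumPy illustration of that general idea, not the paper's actual implementation; the function name `topk_sparse_attention` and all shapes/parameters are assumptions for illustration.

```python
import numpy as np

def topk_sparse_attention(q, k, v, topk):
    """Single-head self-attention that keeps only the top-k logits
    per query row (a generic sparsity constraint, assumed here as a
    stand-in for STD-Net's actual scheme)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                       # (n, n) attention logits
    # Per-row threshold: the k-th largest logit; everything below is masked out.
    thresh = np.sort(scores, axis=-1)[:, -topk][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    # Numerically stable softmax; masked entries become exactly zero weight.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                   # (n, d) attended output

rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, 6, 4))                 # 6 tokens, dim 4
out = topk_sparse_attention(q, k, v, topk=2)
print(out.shape)                                         # (6, 4)
```

With `topk` equal to the sequence length the mask keeps everything and the function reduces to dense softmax attention, which gives a simple sanity check for the masking logic.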
Pages: 8562-8575
Number of pages: 14