Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring

Cited by: 90
Authors
Kong, Lingshun [1 ]
Dong, Jiangxin [1 ]
Ge, Jianjun [2 ]
Li, Mingqiang [2 ]
Pan, Jinshan [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
[2] China Elect Technol Grp Corp, Informat Sci Acad, Beijing, Peoples R China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR | 2023
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52729.2023.00570
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We present an effective and efficient method that explores the properties of Transformers in the frequency domain for high-quality image deblurring. Our method is motivated by the convolution theorem: the correlation or convolution of two signals in the spatial domain is equivalent to an element-wise product of their Fourier transforms in the frequency domain. This inspires us to develop an efficient frequency domain-based self-attention solver (FSAS) that estimates the scaled dot-product attention with an element-wise product operation instead of the matrix multiplication in the spatial domain. In addition, we note that simply using the naive feed-forward network (FFN) in Transformers does not generate good deblurred results. To overcome this problem, we propose a simple yet effective discriminative frequency domain-based FFN (DFFN), which introduces a gated mechanism into the FFN, inspired by the Joint Photographic Experts Group (JPEG) compression algorithm, to discriminatively determine which low- and high-frequency information of the features should be preserved for latent clear image restoration. We formulate the proposed FSAS and DFFN into an asymmetrical encoder-decoder network, where the FSAS is used only in the decoder module for better image deblurring. Experimental results show that the proposed method performs favorably against state-of-the-art approaches.
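The core FSAS idea, replacing the spatial-domain query-key matrix multiplication with an element-wise product of Fourier transforms, can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the class name FrequencySelfAttention, the 1x1 projection layers, and the way the correlation map reweights the value features are assumptions made here for illustration, and the paper's multi-head/patch handling, normalization, and DFFN are omitted.

import torch
import torch.nn as nn


class FrequencySelfAttention(nn.Module):
    # Approximates the query-key correlation of self-attention with an
    # element-wise product in the frequency domain (convolution theorem),
    # avoiding the O(N^2) spatial-domain matrix multiplication.
    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # FFT over the spatial dimensions; correlation in the spatial domain
        # equals Q_hat * conj(K_hat) in the frequency domain.
        q_hat = torch.fft.rfft2(q, norm="ortho")
        k_hat = torch.fft.rfft2(k, norm="ortho")
        corr = torch.fft.irfft2(q_hat * torch.conj(k_hat),
                                s=x.shape[-2:], norm="ortho")
        # The correlation map reweights V element-wise (an illustrative
        # gating choice, not necessarily the paper's exact formulation).
        return self.project(corr * v) + x


features = torch.randn(1, 64, 128, 128)   # (batch, channels, height, width)
out = FrequencySelfAttention(64)(features)
print(out.shape)                           # torch.Size([1, 64, 128, 128])

Because the FFT-based correlation costs O(N log N) in the number of spatial positions rather than the O(N^2) of standard attention, such a block scales more gracefully to the high-resolution feature maps used in deblurring.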
Pages: 5886-5895
Page count: 10