Magic ELF: Image Deraining Meets Association Learning and Transformer

Cited by: 15
Authors
Jiang, Kui [1 ]
Wang, Zhongyuan [1 ]
Chen, Chen [2 ]
Wang, Zheng [1 ]
Cui, Laizhong [3 ]
Lin, Chia-Wen [4 ]
Affiliations
[1] Wuhan Univ, NERCMS, Wuhan, Peoples R China
[2] Univ Cent Florida, CRCV, Orlando, FL 32816 USA
[3] Shenzhen Univ, Shenzhen, Peoples R China
[4] Natl Tsing Hua Univ, Hsinchu, Taiwan
Funding
National Natural Science Foundation of China;
关键词
Image Deraining; Self-attention; Association Learning; QUALITY ASSESSMENT; RAIN REMOVAL; NETWORK;
DOI
10.1145/3503161.3547760
CLC number
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
Convolutional neural networks (CNNs) and Transformers have achieved great success in multimedia applications. However, little effort has been made to effectively and efficiently harmonize these two architectures for image deraining. This paper aims to unify them so as to exploit the learning merits of both. In particular, the local connectivity and translation equivariance of the CNN and the global aggregation ability of self-attention (SA) in the Transformer are fully exploited for specific local-context and global-structure representations, respectively. Based on the observation that the rain distribution reveals the location and degree of degradation, we introduce a degradation prior to aid background recovery and accordingly present an association refinement deraining scheme. A novel multi-input attention module (MAM) is proposed to associate rain perturbation removal with background recovery. Moreover, we equip our model with efficient depth-wise separable convolutions to learn the specific feature representations while trading off computational complexity. Extensive experiments show that our proposed method (dubbed ELF) outperforms the state-of-the-art approach (MPRNet) by 0.25 dB on average, while requiring only 11.7% of its computational cost and 42.1% of its parameters.
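The abstract credits depth-wise separable convolutions for much of ELF's efficiency. As a back-of-the-envelope illustration (not code from the paper; the channel counts below are arbitrary assumptions), the sketch compares parameter counts of a standard convolution against its depth-wise separable factorization, i.e. a per-channel k x k convolution followed by a 1 x 1 point-wise convolution:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def dws_conv_params(c_in, c_out, k):
    """Depth-wise separable convolution: one k x k filter per input
    channel (k*k*c_in), then a 1 x 1 point-wise conv (c_in*c_out)."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 convolution mapping 64 -> 64 channels
std = conv_params(64, 64, 3)      # 3*3*64*64 = 36864
dws = dws_conv_params(64, 64, 3)  # 9*64 + 64*64 = 4672
print(std, dws, round(dws / std, 3))  # roughly an 8x parameter reduction
```

The same factorization reduces multiply-accumulate operations by a comparable factor, which is the usual motivation for using it in lightweight restoration models.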
Pages: 10
Related papers
50 records in total
  • [1] Learning A Sparse Transformer Network for Effective Image Deraining
    Chen, Xiang
    Li, Hao
    Li, Mingqiang
    Pan, Jinshan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 5896 - 5905
  • [2] When dual contrastive learning meets disentangled features for unpaired image deraining
    Wang, Tianming
    Wang, Kaige
    Li, Qing
    MACHINE VISION AND APPLICATIONS, 2023, 34 (05)
  • [4] Alternating attention Transformer for single image deraining
    Yang, Dawei
    He, Xin
    Zhang, Ruiheng
    DIGITAL SIGNAL PROCESSING, 2023, 141
  • [5] Image Deraining Transformer with Sparsity and Frequency Guidance
    Song, Tianyu
    Li, Pengpeng
    Jin, Guiyue
    Jin, Jiyu
    Fan, Shumin
    Chen, Xiang
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1889 - 1894
  • [6] RainFormer: a pyramid transformer for single image deraining
    Hao Yang
    Dongming Zhou
    Jinde Cao
    Qian Zhao
    Miao Li
    The Journal of Supercomputing, 2023, 79 : 6115 - 6140
  • [8] A Single Image Deraining Algorithm Based on Swin Transformer
    Gao T.
    Wen Y.
    Chen T.
    Zhang J.
    Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, 2023, 57 (05): : 613 - 623
  • [9] DeTformer: A Novel Efficient Transformer Framework for Image Deraining
    Ragini, Thatikonda
    Prakash, Kodali
    Cheruku, Ramalingaswamy
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2024, 43 (02) : 1030 - 1052