EHNet: Efficient Hybrid Network with Dual Attention for Image Deblurring

Cited by: 1
Authors
Ho, Quoc-Thien [1 ]
Duong, Minh-Thien [2 ]
Lee, Seongsoo [3 ]
Hong, Min-Cheol [4 ]
Affiliations
[1] Soongsil Univ, Dept Informat & Telecommun Engn, Seoul 06978, South Korea
[2] Ho Chi Minh City Univ Technol & Educ, Dept Automat Control, Ho Chi Minh City 70000, Vietnam
[3] Soongsil Univ, Dept Intelligent Semicond, Seoul 06978, South Korea
[4] Soongsil Univ, Sch Elect Engn, Seoul 06978, South Korea
Keywords
convolutional neural networks; dual attention module; hybrid architecture; image deblurring; motion blur; Transformer
DOI
10.3390/s24206545
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
The motion of an object or of the camera platform blurs the acquired image, and this degradation is a major cause of poor-quality images from imaging sensors. Developing an efficient deep-learning-based image processing method to remove blur artifacts is therefore desirable. Deep learning has recently demonstrated significant efficacy in image deblurring, primarily through convolutional neural networks (CNNs) and Transformers. However, the limited receptive fields of CNNs restrict their ability to capture long-range structural dependencies. In contrast, Transformers excel at modeling these dependencies but are computationally expensive for high-resolution inputs and lack an appropriate inductive bias. To overcome these challenges, we propose an Efficient Hybrid Network (EHNet) that employs CNN encoders for local feature extraction and Transformer decoders with a dual-attention module that captures spatial and channel-wise dependencies. This synergy facilitates the acquisition of rich contextual information for high-quality image deblurring. Additionally, we introduce the Simple Feature-Embedding Module (SFEM), which replaces the pointwise and depthwise convolutions to generate simplified embedding features for the self-attention mechanism. This innovation substantially reduces computational complexity and memory usage while maintaining overall performance. Finally, comprehensive experiments show that our compact model yields promising quantitative and qualitative results for image deblurring on various benchmark datasets.
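The abstract describes the dual-attention module only at a high level (spatial plus channel-wise dependencies). The PyTorch sketch below illustrates one common way such a module can be built, under the assumption of a CBAM-style design; it is not taken from the EHNet implementation, and the class names, reduction ratio, kernel size, and residual connection are illustrative choices only.

```python
# Minimal sketch of a dual-attention block (channel + spatial attention).
# NOTE: assumed CBAM-style design for illustration; not the authors' EHNet code.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights channels using globally pooled context (assumed design)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(x)

class SpatialAttention(nn.Module):
    """Reweights spatial locations using pooled channel statistics (assumed design)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)              # B x 1 x H x W
        max_map = x.amax(dim=1, keepdim=True)              # B x 1 x H x W
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention, with a residual path."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.spatial_att(self.channel_att(x))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)                   # e.g., a decoder feature map
    print(DualAttention(64)(feats).shape)                  # torch.Size([1, 64, 128, 128])
```

In a hybrid encoder-decoder of the kind the abstract outlines, a block like this would typically be applied to decoder features before or after the self-attention layers; the exact placement in EHNet is not specified by the abstract.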
Pages: 23
Related papers (50 in total)
  • [31] Wide Receptive Field and Channel Attention Network for JPEG Compressed Image Deblurring
    Lee, Donghyeon
    Lee, Chulhee
    Kim, Taesung
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 304 - 313
  • [32] Dual residual attention network for image denoising
    Wu, Wencong
    Liu, Shijie
    Xia, Yuelong
    Zhang, Yungang
    PATTERN RECOGNITION, 2024, 149
  • [33] Hybrid-Domain Attention Dense Network for Efficient Image Super-Resolution
    He, Yanyi
    He, Jinhong
    Xue, Minglong
    Zhong, Senming
    Zhou, Mingliang
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2025,
  • [34] Iterative Dual CNNs for Image Deblurring
    Wang, Jinbin
    Wang, Ziqi
    Yang, Aiping
    MATHEMATICS, 2022, 10 (20)
  • [35] DUAL DEBLURRING LEVERAGED BY IMAGE MATCHING
    Wang, Fang
    Li, Tianxing
    Li, Yi
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 567 - 571
  • [36] Dual Image Deblurring Using Deep Image Prior
    Shin, Chang Jong
    Lee, Tae Bok
    Heo, Yong Seok
    ELECTRONICS, 2021, 10 (17)
  • [37] Blind Attention Geometric Restraint Neural Network for Single Image Dynamic/Defocus Deblurring
    Zhang, Jie
    Zhai, Wanming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 8404 - 8417
  • [38] A multi-path attention network for non-uniform blind image deblurring
    Qi, Qing
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (24) : 36909 - 36928
  • [39] Dual encoder network with efficient channel attention refinement module for image splicing forgery detection
    Tan, Xiangqiong
    Zhang, Hongyi
    Wang, Zuoshuai
    Tang, Jun
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (05)