Frequency-Separated Attention Network for Image Super-Resolution

Cited by: 1
Authors
Qu, Daokuan [1 ,2 ]
Li, Liulian [3 ]
Yao, Rui [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
[2] Shandong Polytech Coll, Sch Energy & Mat Engn, Jining 272067, Peoples R China
[3] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 10
Keywords
densely connected structure; frequency-separated; channel-wise and spatial attention; image super-resolution;
DOI
10.3390/app14104238
CLC Classification
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
The use of deep convolutional neural networks has significantly improved the performance of image super-resolution. However, employing deeper networks to strengthen the non-linear mapping from low-resolution (LR) to high-resolution (HR) images has inadvertently weakened information flow and disrupted long-term memory. Moreover, overly deep networks are difficult to train and thus fail to deliver the expressive power commensurate with their depth. High-frequency and low-frequency image features play different roles in super-resolution, yet CNN-based networks, which should attend more to high-frequency features, treat the two equally. This leads to redundant computation on low-frequency features and causes complex, detailed regions of the reconstructed image to appear as smooth as the background. To preserve long-term memory and focus on the restoration of image detail in a network with strong representational capability, we propose the Frequency-Separated Attention Network (FSANet), in which dense connections ensure full utilization of multi-level features. In the Feature Extraction Module (FEM), a Res ASPP Module expands the network's receptive field without increasing its depth. To distinguish high-frequency from low-frequency features within the network, we introduce the Feature-Separated Attention Block (FSAB). Furthermore, to improve the quality of the restored images using heuristic features, we incorporate attention mechanisms into the Low-Frequency Attention Block (LFAB) and the High-Frequency Attention Block (HFAB), which process low-frequency and high-frequency features, respectively. The proposed network outperforms current state-of-the-art methods on benchmark datasets.
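The abstract does not specify the internal form of the FSAB, but the core idea — splitting a feature map into low- and high-frequency components and gating each branch with attention — can be sketched as follows. The box-blur low-pass split, the `frequency_separate` helper, and the parameter-free sigmoid channel gate are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
import numpy as np

def box_blur(feat, k=3):
    """Cheap low-pass filter: k x k box mean over each channel, reflect-padded."""
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="reflect")
    out = np.empty_like(feat, dtype=float)
    for i in range(feat.shape[1]):
        for j in range(feat.shape[2]):
            out[:, i, j] = padded[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out

def frequency_separate(feat, k=3):
    """Split features (C, H, W) into a smooth low-frequency part and its residual.

    low + high reconstructs the input, so no information is discarded by the split.
    """
    low = box_blur(feat, k)
    high = feat - low
    return low, high

def channel_attention(feat):
    """Parameter-free stand-in for a channel attention gate:
    squeeze via global average pooling, excite via a sigmoid, rescale channels."""
    gate = 1.0 / (1.0 + np.exp(-feat.mean(axis=(1, 2), keepdims=True)))
    return feat * gate

# The high-frequency branch would receive the stronger attention treatment,
# since detail restoration is where super-resolution networks add value.
feat = np.random.rand(4, 8, 8)
low, high = frequency_separate(feat)
attended_high = channel_attention(high)
```

In a trained network, the blur-based split would typically be replaced by learned convolutions and the sigmoid gate by a learned squeeze-and-excitation-style layer; the sketch only shows why the decomposition is lossless and where the two attention branches diverge.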
Pages: 19