BAM: a balanced attention mechanism to optimize single image super-resolution

Cited by: 3
Authors
Wang, Fanyi [1 ]
Hu, Haotian [1 ]
Shen, Cheng [2 ]
Feng, Tianpeng [3 ]
Guo, Yandong [3 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310027, Peoples R China
[2] CALTECH, Pasadena, CA 91125 USA
[3] OPPO Res Inst, Shenzhen, Peoples R China
Keywords
Single image super-resolution; Texture aliasing; Inference acceleration; Lightweight attention mechanism;
DOI
10.1007/s11554-022-01235-x
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recovering texture information from aliasing regions has always been a major challenge for the single image super-resolution (SISR) task. These regions are often submerged in noise, so texture details must be restored while the noise is suppressed. To address this issue, we propose an efficient Balanced Attention Mechanism (BAM), which consists of an Avgpool Channel Attention Module (ACAM) and a Maxpool Spatial Attention Module (MSAM) arranged in parallel. ACAM is designed to suppress extreme noise in large-scale feature maps, while MSAM preserves high-frequency texture details. Thanks to the parallel structure, the two modules not only optimize themselves but also optimize each other during back propagation, striking a balance between noise reduction and high-frequency texture restoration; the parallel structure also speeds up inference. To verify the effectiveness and robustness of BAM, we applied it to 10 state-of-the-art SISR networks. The results demonstrate that BAM efficiently improves the networks' performance, and for networks that already contain an attention mechanism, substituting BAM further reduces the parameter count and increases inference speed. For the information multi-distillation network (IMDN), a representative lightweight SISR network with attention, the proposed IMDN-BAM exceeds IMDN in FPS by {8.1%, 8.7%, 8.8%} at the three SR magnifications x2, x3, x4, respectively, with a 200 x 200 input. For the densely residual Laplacian network (DRLN), a representative heavyweight SISR network with attention, the proposed DRLN-BAM is {11.0%, 8.8%, 10.1%} faster than DRLN at x2, x3, x4 with a 60 x 60 input. Moreover, we present realSR7, a dataset with rich texture-aliasing regions from real scenes. Experiments show that BAM achieves better super-resolution results in the aliasing areas.
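The abstract specifies BAM's topology (ACAM and MSAM running in parallel and rescaling the same feature map) but not its internal layers. The following is a minimal PyTorch sketch of that parallel design; the channel-reduction ratio, the 7x7 spatial convolution, and the multiplicative fusion of the two attention maps are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ACAM(nn.Module):
    """Avgpool Channel Attention Module: global average pooling followed by
    a bottleneck of 1x1 convs, producing one attention weight per channel.
    The reduction ratio is an assumption, not taken from the paper."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(x))  # (B, C, 1, 1) channel weights

class MSAM(nn.Module):
    """Maxpool Spatial Attention Module: channel-wise max pooling followed
    by a conv, producing one attention weight per spatial position.
    The 7x7 kernel size is an assumption."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        max_map, _ = x.max(dim=1, keepdim=True)  # (B, 1, H, W)
        return self.conv(max_map)                # spatial weights

class BAM(nn.Module):
    """Balanced Attention Mechanism: ACAM and MSAM receive the same input
    in parallel; multiplying both maps onto x is an assumed fusion."""
    def __init__(self, channels: int):
        super().__init__()
        self.acam = ACAM(channels)
        self.msam = MSAM()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.acam(x) * self.msam(x)

if __name__ == "__main__":
    feats = torch.randn(1, 64, 60, 60)  # e.g. a 60 x 60 feature map as in the abstract
    out = BAM(64)(feats)
    print(out.shape)  # torch.Size([1, 64, 60, 60])
```

In this sketch both branches see the same input and their gradients flow back through a shared product, which is one plausible reading of the abstract's claim that the parallel modules optimize each other during back propagation.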
Pages: 941-955
Number of pages: 15
Related Papers
50 records in total
  • [21] Deep and adaptive feature extraction attention network for single image super-resolution
    Lin, Jianpu
    Liao, Lizhao
    Lin, Shanling
    Lin, Zhixian
    Guo, Tailiang
    JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY, 2024, 32 (01) : 23 - 33
  • [22] Deep recurrent residual channel attention network for single image super-resolution
    Liu, Yepeng
    Yang, Dezhi
    Zhang, Fan
    Xie, Qingsong
    Zhang, Caiming
    VISUAL COMPUTER, 2024, 40 (05) : 3441 - 3456
  • [23] Residual Adaptive Dense Weight Attention Network for Single Image Super-Resolution
    Chen, Jiacheng
    Wang, Wanliang
    Xing, Fangsen
    Qian, Yutong
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [24] Multi-feature fusion attention network for single image super-resolution
    Chen, Jiacheng
    Wang, Wanliang
    Xing, Fangsen
    Tu, Hangyao
    IET IMAGE PROCESSING, 2023, 17 (05) : 1389 - 1402
  • [25] SINGLE IMAGE SUPER-RESOLUTION VIA GLOBAL-CONTEXT ATTENTION NETWORKS
    Bian, Pengcheng
    Zheng, Zhonglong
    Zhang, Dawei
    Chen, Liyuan
    Li, Minglu
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1794 - 1798
  • [27] Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution
    Huang, Yuanfei
    Li, Jie
    Gao, Xinbo
    Hu, Yanting
    Lu, Wen
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 2325 - 2339
  • [28] Single Image Super-Resolution with Sequential Multi-axis Blocked Attention
    Yang, Bincheng
    Wu, Gangshan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 136 - 148
  • [29] Attention augmented multi-scale network for single image super-resolution
    Xiong, Chengyi
    Shi, Xiaodi
    Gao, Zhirong
    Wang, Ge
    APPLIED INTELLIGENCE, 2021, 51 (02) : 935 - 951
  • [30] Pixel attention convolutional network for image super-resolution
    Wang, Xin
    Zhang, Shufen
    Lin, Yuanyuan
    Lyu, Yanxia
    Zhang, Jiale
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (11) : 8589 - 8599