BAM: a balanced attention mechanism to optimize single image super-resolution

Cited by: 3
Authors
Wang, Fanyi [1 ]
Hu, Haotian [1 ]
Shen, Cheng [2 ]
Feng, Tianpeng [3 ]
Guo, Yandong [3 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310027, Peoples R China
[2] CALTECH, Pasadena, CA 91125 USA
[3] OPPO Res Inst, Shenzhen, Peoples R China
Keywords
Single image super-resolution; Texture aliasing; Inference acceleration; Lightweight attention mechanism;
DOI
10.1007/s11554-022-01235-x
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recovering texture information from aliasing regions has always been a major challenge for the single image super-resolution (SISR) task. These regions are often submerged in noise, so texture details must be restored while noise is suppressed. To address this issue, we propose an efficient Balanced Attention Mechanism (BAM), which consists of an Avgpool Channel Attention Module (ACAM) and a Maxpool Spatial Attention Module (MSAM) in parallel. ACAM is designed to suppress extreme noise in large-scale feature maps, while MSAM preserves high-frequency texture details. Thanks to the parallel structure, the two modules not only optimize themselves but also optimize each other during back-propagation, striking a balance between noise reduction and high-frequency texture restoration; the parallel structure also makes inference faster. To verify the effectiveness and robustness of BAM, we applied it to 10 state-of-the-art SISR networks. The results demonstrate that BAM efficiently improves the networks' performance, and for networks that originally contain an attention mechanism, substituting BAM further reduces the parameter count and increases inference speed. For the information multi-distillation network (IMDN), a representative lightweight SISR network with attention, with a 200 x 200 input image the FPS of the proposed IMDN-BAM exceeds that of IMDN by {8.1%, 8.7%, 8.8%} at the three SR magnifications x2, x3, and x4, respectively. For the densely residual Laplacian network (DRLN), a representative heavyweight SISR network with attention, at a 60 x 60 input scale the proposed DRLN-BAM is {11.0%, 8.8%, 10.1%} faster than DRLN at x2, x3, and x4. Moreover, we present realSR7, a dataset with rich texture-aliasing regions from real scenes. Experiments show that BAM achieves better super-resolution results on aliasing areas.
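The abstract describes BAM only at a high level: avgpool-based channel attention (ACAM) and maxpool-based spatial attention (MSAM) applied in parallel to the same feature map. The following is a minimal NumPy sketch of that pattern; the paper's learned convolution layers are omitted, and the fusion of the two parallel branches (simple averaging here) is an assumption, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def avgpool_channel_attention(x):
    """ACAM sketch: global average pooling over spatial dims gives a
    per-channel descriptor; a sigmoid gate rescales each channel.
    (The paper's ACAM also contains learned layers, omitted here.)"""
    # x: (C, H, W)
    desc = x.mean(axis=(1, 2))            # (C,)
    gate = sigmoid(desc)                  # (C,)
    return x * gate[:, None, None]

def maxpool_spatial_attention(x):
    """MSAM sketch: max pooling over the channel dim gives a spatial
    map; a sigmoid gate rescales each spatial location."""
    desc = x.max(axis=0)                  # (H, W)
    gate = sigmoid(desc)                  # (H, W)
    return x * gate[None, :, :]

def bam(x):
    """Balanced attention sketch: ACAM and MSAM run in parallel on the
    same input; averaging the branches is an assumed fusion rule."""
    return 0.5 * (avgpool_channel_attention(x)
                  + maxpool_spatial_attention(x))

feat = np.random.randn(8, 16, 16)   # (channels, height, width)
out = bam(feat)
print(out.shape)                    # -> (8, 16, 16)
```

Because the two branches read the same input independently (rather than being chained), gradients flow to both during back-propagation, which is the property the abstract credits for the mutual optimization and the inference speedup.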
Pages: 941-955
Page count: 15
Related papers
50 records in total
  • [1] BAM: a balanced attention mechanism to optimize single image super-resolution
    Fanyi Wang
    Haotian Hu
    Cheng Shen
    Tianpeng Feng
    Yandong Guo
    Journal of Real-Time Image Processing, 2022, 19 : 941 - 955
  • [2] Upsampling Attention Network for Single Image Super-resolution
    Zheng, Zhijie
    Jiao, Yuhang
    Fang, Guangyou
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 4: VISAPP, 2021, : 399 - 406
  • [3] Global Learnable Attention for Single Image Super-Resolution
    Su, Jian-Nan
    Gan, Min
    Chen, Guang-Yong
    Yin, Jia-Li
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8453 - 8465
  • [4] SRGAT: Single Image Super-Resolution With Graph Attention Network
    Yan, Yanyang
    Ren, Wenqi
    Hu, Xiaobin
    Li, Kun
    Shen, Haifeng
    Cao, Xiaochun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4905 - 4918
  • [5] PYRAMID FUSION ATTENTION NETWORK FOR SINGLE IMAGE SUPER-RESOLUTION
    He, Hao
    Du, Zongcai
    Li, Wenfeng
    Tang, Jie
    Wu, Gangshan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2165 - 2169
  • [6] Improving Single-Image Super-Resolution with Dilated Attention
    Zhang, Xinyu
    Cheng, Boyuan
    Yang, Xiaosong
    Xiao, Zhidong
    Zhang, Jianjun
    You, Lihua
    ELECTRONICS, 2024, 13 (12)
  • [7] Window Attention with Multiple Patterns for Single Image Super-Resolution
    Xiao, Xianwei
    Zhong, Baojiang
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 731 - 738
  • [8] Single image super-resolution based on directional variance attention network
    Behjati, Parichehr
    Rodriguez, Pau
    Fernandez, Carles
    Hupont, Isabelle
    Mehri, Armin
    Gonzalez, Jordi
    PATTERN RECOGNITION, 2023, 133
  • [9] Pyramid Separable Channel Attention Network for Single Image Super-Resolution
    Ma, Congcong
    Mi, Jiaqi
    Gao, Wanlin
    Tao, Sha
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 80 (03): : 4687 - 4701
  • [10] SINGLE IMAGE SUPER-RESOLUTION VIA RESIDUAL NEURON ATTENTION NETWORKS
    Ai, Wenjie
    Tu, Xiaoguang
    Cheng, Shilei
    Xie, Mei
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1586 - 1590