Hierarchical accumulation network with grid attention for image super-resolution

Cited by: 10
Authors
Yang, Yue [1 ]
Qi, Yong [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian, Shaanxi, Peoples R China
Keywords
Image super-resolution; Grouping; Attention mechanism; Accumulation network;
DOI
10.1016/j.knosys.2021.107520
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep convolutional neural networks (CNNs) have recently shown promising results in single image super-resolution (SISR) due to their powerful representation ability. However, existing CNN-based SR methods mainly focus on deeper architecture design to obtain high-level semantic information, neglecting the intermediate-layer features that contain fine-grained texture information and thus limiting their capacity to produce precise high-resolution images. To tackle this issue, this paper proposes a hierarchical accumulation network (HAN) with grid attention. Specifically, a hierarchical feature accumulation (HFA) structure is proposed to accumulate the outputs of intermediate layers in a grouped manner, exploiting features at different semantic levels. Moreover, a multi-scale grid attention module (MGAM) is introduced to refine features of the same level. The MGAM employs pyramid sampling together with a self-attention mechanism to efficiently model non-local dependencies between pixel features and produce refined representations. In this way, features that jointly reflect spatial similarity and semantic level are produced for image SR. Experimental results on five benchmark datasets with different degradation models demonstrate the superiority of HAN in terms of both quantitative metrics and visual quality. (c) 2021 Elsevier B.V. All rights reserved.
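To make the two components described in the abstract more concrete, the following PyTorch sketch gives one plausible reading of them. It is reconstructed from the abstract alone, not from the authors' released implementation: the class names, pooling grid sizes, group and block counts, and residual-block design are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MGAM(nn.Module):
    # Multi-scale grid attention (sketch): self-attention whose keys and values
    # come from pyramid-pooled copies of the feature map, so the non-local
    # affinity matrix is HW x S (S = total pooled positions) instead of HW x HW.
    def __init__(self, channels, pool_sizes=(1, 3, 6, 8)):  # pool sizes are an assumption
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.proj = nn.Conv2d(channels, channels, 1)
        self.pool_sizes = pool_sizes

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)                      # B x HW x C'
        pooled = [F.adaptive_avg_pool2d(x, s) for s in self.pool_sizes]   # pyramid sampling
        k = torch.cat([self.key(p).flatten(2) for p in pooled], dim=2)    # B x C' x S
        v = torch.cat([self.value(p).flatten(2) for p in pooled], dim=2)  # B x C  x S
        attn = torch.softmax(q @ k, dim=-1)                               # B x HW x S
        ctx = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return x + self.proj(ctx)                                         # residual refinement

class HFA(nn.Module):
    # Hierarchical feature accumulation (sketch): every intermediate residual
    # block output within a group is kept, fused by a 1x1 conv, refined by the
    # attention module, and the per-group results are accumulated by summation.
    def __init__(self, channels=64, blocks_per_group=4, groups=3):        # counts are assumptions
        super().__init__()
        def block():
            return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(channels, channels, 3, padding=1))
        self.groups = nn.ModuleList(
            nn.ModuleList(block() for _ in range(blocks_per_group)) for _ in range(groups))
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels * blocks_per_group, channels, 1) for _ in range(groups))
        self.attn = nn.ModuleList(MGAM(channels) for _ in range(groups))

    def forward(self, x):
        accumulated = 0
        for blocks, fuse, attn in zip(self.groups, self.fuse, self.attn):
            outs = []
            for blk in blocks:
                x = x + blk(x)                       # residual block
                outs.append(x)                       # keep every intermediate output
            x = attn(fuse(torch.cat(outs, dim=1)))   # fuse the group, refine with attention
            accumulated = accumulated + x            # accumulate across semantic levels
        return accumulated

# Quick shape check on a dummy 64-channel feature map.
feat = torch.randn(1, 64, 48, 48)
print(HFA()(feat).shape)    # torch.Size([1, 64, 48, 48])

Note that with pooled keys and values the attention cost scales with the number of pooled grid positions rather than with the full image resolution, which matches the efficiency argument the abstract makes for pyramid sampling.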
Pages: 12
Related papers
50 records in total
  • [1] Single Image Super-Resolution Using Deep Hierarchical Attention Network
    Zhao, Fei
    Chen, Rui
    Li, Yuan
    PROCEEDINGS OF 2020 5TH INTERNATIONAL CONFERENCE ON MULTIMEDIA AND IMAGE PROCESSING (ICMIP 2020), 2020, : 80 - 85
  • [2] Attention mechanism feedback network for image super-resolution
    Chen, Xiao
    Jing, Ruyun
Sun, Chaowen
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (04)
  • [3] A sparse lightweight attention network for image super-resolution
    Zhang, Hongao
    Fang, Jinsheng
    Hu, Siyu
    Zeng, Kun
    VISUAL COMPUTER, 2024, 40 (02): 1261 - 1272
  • [4] Adaptive Attention Network for Image Super-resolution
    Chen Y.-M.
    Zhou D.-W.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (08): 1950 - 1960
  • [5] Information-Growth Attention Network for Image Super-Resolution
    Li, Zhuangzi
    Li, Ge
    Li, Thomas
    Liu, Shan
    Gao, Wei
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 544 - 552
  • [6] Channel attention and residual concatenation network for image super-resolution
    Cai T.-J.
    Peng X.-Y.
    Shi Y.-P.
    Huang J.
    Chinese Academy of Sciences, (29): 142 - 151
  • [7] Image super-resolution reconstruction based on dynamic attention network
    Zhao X.-Q.
    Wang Z.
    Song Z.-Y.
    Jiang H.-M.
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2023, 57 (08): 1487 - 1494
  • [8] CANS: Combined Attention Network for Single Image Super-Resolution
    Muhammad, Wazir
    Aramvith, Supavadee
    Onoye, Takao
    IEEE ACCESS, 2024, 12 : 167498 - 167517
  • [9] Residual shuffle attention network for image super-resolution
    Li, Xuanyi
    Shao, Zhuhong
    Li, Bicao
    Shang, Yuanyuan
    Wu, Jiasong
    Duan, Yuping
    MACHINE VISION AND APPLICATIONS, 2023, 34 (05)