Mixed Attention Densely Residual Network for Single Image Super-Resolution

Cited by: 3
Authors
Zhou, Jingjun [1 ,2 ]
Liu, Jing [3 ]
Li, Jingbing [1 ,2 ]
Huang, Mengxing [1 ,2 ]
Cheng, Jieren [4 ]
Chen, Yen-Wei [5 ]
Xu, Yingying [3 ,6 ]
Nawaz, Saqib Ali [1 ]
Affiliations
[1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570228, Hainan, Peoples R China
[2] Hainan Univ, State Key Lab Marine Resource Utilizat South Chin, Haikou 570228, Hainan, Peoples R China
[3] Zhejiang Lab, Res Ctr Healthcare Data Sci, Hangzhou 311121, Peoples R China
[4] Hainan Univ, Sch Comp Sci & Cyberspace Secur, Haikou 570228, Hainan, Peoples R China
[5] Ritsumeikan Univ, Grad Sch Informat Sci & Engn, Kyoto 5258577, Japan
[6] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 311100, Peoples R China
Funding
Natural Science Foundation of Hainan Province;
Keywords
Channel attention; Laplacian spatial attention; residual in dense; mixed attention; RETRIEVAL;
DOI
10.32604/csse.2021.016633
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent applications of convolutional neural networks (CNNs) in single image super-resolution (SISR) have achieved unprecedented performance. However, existing CNN-based SISR network designs mostly consider only channel or spatial information, and cannot make full use of both to further improve SISR performance. The present work addresses this problem by proposing a mixed attention densely residual network architecture that makes full and simultaneous use of both channel and spatial information. Specifically, we propose a residual-in-dense network structure composed of dense connections between multiple dense residual groups to form a very deep network. This structure allows each dense residual group to apply a local residual skip connection and enables the cascading of multiple residual blocks to reuse previous features. A mixed attention module is inserted into each dense residual group to fuse channel attention with Laplacian spatial attention effectively, thereby focusing more adaptively on valuable feature learning. The qualitative and quantitative results of extensive experiments demonstrate that the proposed method achieves performance comparable with other state-of-the-art methods.
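To illustrate the kind of mixed attention module the abstract describes, the following PyTorch sketch combines squeeze-and-excitation-style channel attention with a spatial attention map derived from a fixed 3x3 Laplacian kernel. The `MixedAttention` class name, layer sizes, reduction ratio, and the element-wise fusion rule are assumptions for illustration only and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedAttention(nn.Module):
    """Illustrative mixed attention block: channel attention fused with a
    Laplacian-based spatial attention map. The fusion rule (element-wise
    multiplication of both maps plus a residual path) is an assumption,
    not the authors' exact design."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention branch (squeeze-and-excitation style).
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fixed 3x3 Laplacian kernel for the spatial branch.
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("laplacian", lap.view(1, 1, 3, 3))
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, shape (B, C, 1, 1).
        ca = self.fc(self.avg_pool(x))
        # Spatial attention: Laplacian response of the channel-averaged map.
        mean_map = x.mean(dim=1, keepdim=True)
        edge = F.conv2d(mean_map, self.laplacian, padding=1)
        sa = self.spatial_conv(edge)  # shape (B, 1, H, W)
        # Fuse both attention maps and keep a residual path.
        return x * ca * sa + x
```

As a quick sanity check, `MixedAttention(64)(torch.randn(1, 64, 48, 48))` returns a tensor of the same shape as its input, so the block can be dropped into a residual group without changing feature dimensions.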
Pages: 133-146
Number of pages: 14
Related Papers
50 records in total
  • [1] Densely convolutional attention network for image super-resolution
    Bai, Furui
    Lu, Wen
    Huang, Yuanfei
    Zha, Lin
    Yang, Jiachen
    NEUROCOMPUTING, 2019, 368 : 25 - 33
  • [2] Efficient residual attention network for single image super-resolution
    Fangwei Hao
    Taiping Zhang
    Linchang Zhao
    Yuanyan Tang
    Applied Intelligence, 2022, 52 : 652 - 661
  • [3] Efficient residual attention network for single image super-resolution
    Hao, Fangwei
    Zhang, Taiping
    Zhao, Linchang
    Tang, Yuanyan
    APPLIED INTELLIGENCE, 2022, 52 (01) : 652 - 661
  • [4] Dual Reconstruction with Densely Connected Residual Network for Single Image Super-Resolution
    Hsu, Chih-Chung
    Lin, Chia-Hsiang
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 3643 - 3650
  • [5] Cascaded Residual Densely Connected Network for Image Super-Resolution
    Zou, Changjun
    Ye, Lintao
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (09): : 2882 - 2903
  • [6] Single-image super-resolution with multilevel residual attention network
    Qin, Ding
    Gu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (19): : 15615 - 15628
  • [7] Adaptive Residual Channel Attention Network for Single Image Super-Resolution
    Cao, Kerang
    Liu, Yuqing
    Duan, Lini
    Xie, Tian
    SCIENTIFIC PROGRAMMING, 2020, 2020
  • [8] RSAN: Residual Subtraction and Attention Network for Single Image Super-Resolution
    Wei, Shuo
    Sun, Xin
    Zhao, Haoran
    Dong, Junyu
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1890 - 1895
  • [9] Novel Channel Attention Residual Network for Single Image Super-Resolution
    Wenling Shi
    Huiqian Du
    Wenbo Mei
    JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2020, 29 (03) : 345 - 353
  • [10] Single-image super-resolution with multilevel residual attention network
    Ding Qin
    Xiaodong Gu
    Neural Computing and Applications, 2020, 32 : 15615 - 15628