Cross-Dimensional Attention Fusion Network for Simulated Single Image Super-Resolution

Cited: 0
Authors
He, Jingbo [1 ]
He, Xiaohai [1 ]
Xiong, Shuhua [1 ]
Chen, Honggang [1 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Superresolution; Image reconstruction; Feature extraction; Degradation; Visualization; Task analysis; Super-resolution; cross-dimensional attention fusion mechanism; simulated SISR; optional training strategy;
DOI
10.1109/TBC.2024.3408643
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Single image super-resolution (SISR) is the task of reconstructing high-resolution (HR) images from low-resolution (LR) images that were produced by some degradation process. Deep neural networks (DNNs) have greatly advanced the frontier of image super-resolution research and replaced traditional methods as the de facto standard approach. Attention mechanisms have enabled SR algorithms to achieve one performance breakthrough after another. However, limited research has been conducted on the interaction and integration of attention mechanisms across different dimensions. To tackle this issue, in this paper, we propose a cross-dimensional attention fusion network (CAFN) to effectively achieve cross-dimensional interaction with long-range dependencies. Specifically, the proposed approach employs a cross-dimensional aggregation module (CAM) to effectively capture contextual information by integrating both spatial and channel importance maps. The information fusion module (IFM) in the CAM serves as a bridge for parallel dual-attention information fusion. In addition, a novel memory-adaptive multi-stage (MAMS) training method is proposed. We perform warm-start retraining with the same settings as the previous stage, without increasing memory consumption. If memory is sufficient, we fine-tune the model with a larger patch size after the warm-start. The experimental results demonstrate the superior performance of our cross-dimensional attention fusion network and training strategy compared to state-of-the-art (SOTA) methods, as evidenced by both quantitative and qualitative metrics.
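The paper's exact CAM/IFM design is not reproduced in this record; as a rough illustration of the idea the abstract describes (computing a channel importance map and a spatial importance map in parallel and fusing them onto the features), here is a toy NumPy sketch. All function names and the specific pooling/fusion choices here are hypothetical simplifications, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_dimensional_fusion(feat):
    """Toy parallel dual-attention fusion over a (C, H, W) feature map.

    Channel branch: global average pooling gives one importance weight
    per channel. Spatial branch: the mean over channels gives one
    importance weight per location. The fused output reweights the
    input features by both maps via broadcasting.
    """
    C, H, W = feat.shape
    # Channel importance map: (C,) -> (C, 1, 1)
    channel_map = sigmoid(feat.mean(axis=(1, 2))).reshape(C, 1, 1)
    # Spatial importance map: (H, W) -> (1, H, W)
    spatial_map = sigmoid(feat.mean(axis=0)).reshape(1, H, W)
    # Fusion: broadcast-multiply both maps onto the features
    return feat * channel_map * spatial_map

feat = np.random.default_rng(0).standard_normal((4, 8, 8))
out = cross_dimensional_fusion(feat)
```

Because both maps pass through a sigmoid, each acts as a soft gate in (0, 1), so the fused features are attenuated copies of the input; a real attention module would learn these maps from data rather than derive them from raw means.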
Pages: 909-923
Page count: 15
Related Papers
50 in total
  • [41] Multi-attention augmented network for single image super-resolution
    Chen, Rui
    Zhang, Heng
    Liu, Jixin
    PATTERN RECOGNITION, 2022, 122
  • [42] Bilateral Upsampling Network for Single Image Super-Resolution With Arbitrary Scaling Factors
    Zhang, Menglei
    Ling, Qiang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4395 - 4408
  • [43] Dual-path attention network for single image super-resolution
    Huang, Zhiyong
    Li, Wenbin
    Li, Jinxin
    Zhou, Dengwen
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 169
  • [44] CVANet: Cascaded visual attention network for single image super-resolution
    Zhang, Weidong
    Zhao, Wenyi
    Li, Jia
    Zhuang, Peixian
    Sun, Haihan
    Xu, Yibo
    Li, Chongyi
    NEURAL NETWORKS, 2024, 170 : 622 - 634
  • [45] Rectified Binary Network for Single-Image Super-Resolution
    Xin, Jingwei
    Wang, Nannan
    Jiang, Xinrui
    Li, Jie
    Wang, Xiaoyu
    Gao, Xinbo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [46] Novel Channel Attention Residual Network for Single Image Super-Resolution
    Shi W.
    Du H.
    Mei W.
    Journal of Beijing Institute of Technology (English Edition), 2020, 29 (03): : 345 - 353
  • [47] Edge-Aware Attention Transformer for Image Super-Resolution
    Wang, Haoqian
    Xing, Zhongyang
    Xu, Zhongjie
    Cheng, Xiangai
    Li, Teng
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2905 - 2909
  • [48] FSFN: feature separation and fusion network for single image super-resolution
    Zhu, Kai
    Chen, Zhenxue
    Wu, Q. M. Jonathan
    Wang, Nannan
    Zhao, Jie
    Zhang, Gan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (21-23) : 31599 - 31618
  • [50] Multi-attention fusion transformer for single-image super-resolution
    Li, Guanxing
    Cui, Zhaotong
    Li, Meng
    Han, Yu
    Li, Tianping
    SCIENTIFIC REPORTS, 2024, 14 (01):