Spatial relaxation transformer for image super-resolution

Cited by: 1
Authors
Li, Yinghua [1 ]
Zhang, Ying [1 ]
Zeng, Hao [3 ]
He, Jinglu [1 ]
Guo, Jie [2 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Xian Key Lab Image Proc Technol & Applicat Publ Se, Changan West St, Xian 710121, Shaanxi, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, 2 Southern Tai Bai Rd, Xian 710071, Shaanxi, Peoples R China
[3] Chinese Acad Sci, Inst Software, Beijing, Peoples R China
Keywords
Super-resolution; Vision transformer; Feature aggregation; Image enhancement; Swin transformer
DOI
10.1016/j.jksuci.2024.102150
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Transformer-based approaches have demonstrated remarkable performance in image processing tasks due to their ability to model long-range dependencies. Current mainstream Transformer-based methods typically confine self-attention computation within local windows to reduce computational cost. However, this constraint can produce grid artifacts in the reconstructed images because of insufficient cross-window information exchange, particularly in image super-resolution. To address this issue, we propose the Multi-Scale Texture Complementation Block based on the Spatial Relaxation Transformer (MSRT), which leverages features at multiple scales and augments information exchange through cross-window attention computation. In addition, we introduce a loss function based on a texture smoothness transformation prior, which exploits the continuity of textures between patches to encourage more coherent texture in the reconstructed images. Specifically, we employ learnable compressive sensing to extract shallow features from images, preserving image information while reducing feature dimensionality and improving computational efficiency. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms previous state-of-the-art approaches in both qualitative and quantitative evaluations.
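The window-confined self-attention that the abstract argues against can be made concrete with a minimal sketch. The snippet below illustrates generic window-partitioned self-attention only; it is not the paper's MSRT or its spatial-relaxation scheme, and the function names, window size, and tensor shapes are illustrative assumptions.

```python
# Minimal sketch of window-partitioned self-attention (illustrative, not the paper's method).
import torch
import torch.nn.functional as F

def window_partition(x, win):
    # x: (B, H, W, C) -> (num_windows * B, win * win, C)
    B, H, W, C = x.shape
    x = x.view(B, H // win, win, W // win, win, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C)

def window_self_attention(x, win=8):
    # Attention is computed only among tokens inside each win x win window,
    # which keeps cost low but limits cross-window information exchange.
    tokens = window_partition(x, win)                 # (N, L, C)
    q = k = v = tokens
    scale = tokens.shape[-1] ** 0.5
    attn = F.softmax(q @ k.transpose(-2, -1) / scale, dim=-1)
    return attn @ v                                   # (N, L, C)

# Example: a 1 x 64 x 64 x 32 feature map split into 8 x 8 windows.
feat = torch.randn(1, 64, 64, 32)
out = window_self_attention(feat, win=8)
print(out.shape)  # torch.Size([64, 64, 32]): 64 windows of 64 tokens each
```

Relaxing this constraint, as the paper proposes, amounts to letting tokens exchange information across window boundaries rather than attending only within each local block.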
Pages: 10