Spatial relaxation transformer for image super-resolution

Cited by: 1
Authors
Li, Yinghua [1]
Zhang, Ying [1]
Zeng, Hao [3]
He, Jinglu [1]
Guo, Jie [2]
Affiliations
[1] Xian Univ Posts & Telecommun, Xian Key Lab Image Proc Technol & Applicat Publ Se, Changan West St, Xian 710121, Shaanxi, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, 2 Southern Tai Bai Rd, Xian 710071, Shaanxi, Peoples R China
[3] Chinese Acad Sci, Inst Software, Beijing, Peoples R China
Keywords
Super-resolution; Vision transformer; Feature aggregation; Image enhancement; Swin transformer
DOI
10.1016/j.jksuci.2024.102150
CLC number
TP [Automation and computer technology]
Discipline code
0812
Abstract
Transformer-based approaches have demonstrated remarkable performance in image processing tasks due to their ability to model long-range dependencies. Current mainstream Transformer-based methods typically confine self-attention computation within windows to reduce the computational burden. However, this constraint may lead to grid artifacts in the reconstructed images due to insufficient cross-window information exchange, particularly in image super-resolution tasks. To address this issue, we propose the Multi-Scale Texture Complementation Block based on the Spatial Relaxation Transformer (MSRT), which leverages features at multiple scales and augments information exchange through cross-window attention computation. In addition, we introduce a loss function based on a prior of texture-smoothness transformation, which exploits the continuity of textures between patches to constrain the reconstructed images to contain more coherent texture information. Specifically, we employ learnable compressive sensing to extract shallow features from images, preserving image content while reducing feature dimensions and improving computational efficiency. Extensive experiments conducted on multiple benchmark datasets demonstrate that our method outperforms previous state-of-the-art approaches in both qualitative and quantitative evaluations.
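The abstract only describes the texture-continuity prior in prose. As a minimal illustrative sketch (not the authors' formulation), the idea of penalizing texture discontinuities between patches can be written as an L1 penalty on pixel rows and columns that straddle patch boundaries of the reconstructed image; the function name, the PyTorch setting, the 8x8 patch grid, and the 0.1 loss weight below are all assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def texture_continuity_loss(sr: torch.Tensor, patch_size: int = 8) -> torch.Tensor:
        # sr: reconstructed image batch of shape (N, C, H, W).
        # Hypothetical reading of the texture-smoothness prior: pixels on either
        # side of a patch boundary should agree, discouraging grid artifacts.
        _, _, h, w = sr.shape
        loss = sr.new_zeros(())
        count = 0
        # Vertical boundaries between horizontally adjacent patches.
        for x in range(patch_size, w, patch_size):
            loss = loss + F.l1_loss(sr[:, :, :, x], sr[:, :, :, x - 1])
            count += 1
        # Horizontal boundaries between vertically adjacent patches.
        for y in range(patch_size, h, patch_size):
            loss = loss + F.l1_loss(sr[:, :, y, :], sr[:, :, y - 1, :])
            count += 1
        return loss / max(count, 1)

    # Usage sketch: combine with a standard reconstruction loss (weight is a guess).
    sr = torch.rand(1, 3, 64, 64, requires_grad=True)
    hr = torch.rand(1, 3, 64, 64)
    total = F.l1_loss(sr, hr) + 0.1 * texture_continuity_loss(sr)
    total.backward()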
Pages: 10
Related papers
50 records in total
• [31] Cai, Danlin; Tan, Wenwen; Chen, Feiyang; Lou, Xinchi; Xiahou, Jianbin; Zhu, Daxin; Huang, Detian. TCSR: Lightweight Transformer and CNN Interaction Network for Image Super-Resolution. IEEE ACCESS, 2024, 12: 174782-174795.
• [32] Zhang, Xiang; Zhang, Yulun; Yu, Fisher. HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution. COMPUTER VISION - ECCV 2024, PT XL, 2025, 15098: 483-500.
• [33] Liang, Shubo; Song, Kechen; Zhao, Wenli; Li, Song; Yan, Yunhui. DASR: Dual-Attention Transformer for infrared image super-resolution. INFRARED PHYSICS & TECHNOLOGY, 2023, 133.
• [34] Luo, Xiaotong; Ai, Zekun; Liang, Qiuyuan; Xie, Yuan; Shi, Zhongchao; Fan, Jianping; Qu, Yanyun. EdgeFormer: Edge-Aware Efficient Transformer for Image Super-Resolution. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73.
• [35] Yu, Dinghao; Han, Zhirong; Zhang, Bin; Zhang, Meihong; Liu, Hong; Chen, Yingchun. A fused super-resolution network and a vision transformer for airfoil ice accretion image prediction. AEROSPACE SCIENCE AND TECHNOLOGY, 2024, 144.
• [36] Zheng Genrang. Research on Super-resolution of Image. 2011 AASRI CONFERENCE ON INFORMATION TECHNOLOGY AND ECONOMIC DEVELOPMENT (AASRI-ITED 2011), VOL 1, 2011: 119-122.
• [37] Zheng Genrang. Research on Super-resolution of Image. 2011 INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND NEURAL COMPUTING (FSNC 2011), VOL IV, 2011: 119-122.
• [38] Lu, Y; Inamura, M. Super-resolution image pyramid. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2003, E86D (08): 1436-1446.
• [39] van Ouwerkerk, J. D. Image super-resolution survey. IMAGE AND VISION COMPUTING, 2006, 24 (10): 1039-1052.
• [40] Ngoc-Long Nguyen. A Brief Analysis of the SwinIR Image Super-Resolution. IMAGE PROCESSING ON LINE, 2022, 12: 582-589.