Uformer-ICS: A U-Shaped Transformer for Image Compressive Sensing Service

Cited by: 0
Authors
Zhang, Kuiyuan [1 ]
Hua, Zhongyun [1 ]
Li, Yuanman [2 ,3 ]
Zhang, Yushu [4 ]
Zhou, Yicong [5 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
[2] Guangdong Prov Key Lab Novel Secur Intelligence Te, Shenzhen 518055, Guangdong, Peoples R China
[3] Shenzhen Univ, Coll Elect & Informat Engn, Guangdong Key Lab Intelligent Informat Proc, Shenzhen 518060, Peoples R China
[4] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[5] Univ Macau, Dept Comp & Informat Sci, Macau 999078, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image reconstruction; Transformers; Task analysis; Computer architecture; Image coding; Iterative methods; Extraterrestrial measurements; Compressive sensing service; compressive sampling; image reconstruction; adaptive sampling; deep learning; RECONSTRUCTION; ALGORITHMS;
DOI
10.1109/TSC.2023.3334446
Chinese Library Classification
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Many service computing applications require real-time dataset collection from multiple devices, necessitating efficient sampling techniques to reduce bandwidth and storage pressure. Compressive sensing (CS) has found wide-ranging applications in image acquisition and reconstruction. Recently, numerous deep-learning methods have been introduced for CS tasks. However, accurately reconstructing images from their measurements remains a significant challenge, especially at low sampling rates. In this article, we propose Uformer-ICS, a novel U-shaped transformer for image CS that introduces the inherent characteristics of CS into the transformer architecture. To exploit the uneven sparsity distribution of image blocks, we design an adaptive sampling architecture that allocates measurement resources according to the estimated sparsity of each block, allowing the compressed results to retain maximum information from the original image. Additionally, we introduce a multi-channel projection (MCP) module inspired by traditional CS optimization methods. By integrating the MCP module into transformer blocks, we construct projection-based transformer blocks, and we form a symmetrical reconstruction model from these blocks and residual convolutional blocks. The reconstruction model can therefore simultaneously exploit the local features and long-range dependencies of an image, together with the prior projection knowledge of CS theory. Experimental results demonstrate significantly better reconstruction performance than state-of-the-art deep-learning-based CS methods.
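To make the two ideas sketched in the abstract concrete, the following minimal Python/NumPy sketch illustrates (a) a block-adaptive measurement allocation that assigns more measurements to blocks estimated to be less sparse, and (b) the classical measurement-consistency projection step x <- x + step * Phi^T (y - Phi x) used by traditional CS optimization, which the paper's MCP module is described as being inspired by. The sparsity proxy, function names, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_sparsity_proxy(block):
    """Rough proxy for how non-sparse a block is (assumption: total variation
    stands in for the block's sparsity estimate used in the paper)."""
    return np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()

def allocate_measurements(blocks, total_measurements):
    """Distribute a fixed measurement budget across blocks in proportion to
    their estimated information content, so informative blocks get more
    rows of the sampling matrix Phi."""
    scores = np.array([block_sparsity_proxy(b) for b in blocks])
    weights = scores / (scores.sum() + 1e-12)
    return np.maximum(1, np.round(weights * total_measurements)).astype(int)

def cs_projection_step(x, phi, y, step=0.5):
    """One measurement-consistency (Landweber/gradient) step:
    x <- x + step * Phi^T (y - Phi x)."""
    return x + step * phi.T @ (y - phi @ x)

# Toy usage on random data
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((32, 32)) for _ in range(4)]
alloc = allocate_measurements(blocks, total_measurements=256)

b = blocks[0].reshape(-1)                              # flatten one block
phi = rng.standard_normal((alloc[0], b.size)) / np.sqrt(b.size)
y = phi @ b                                            # compressive measurements
x = np.zeros_like(b)
for _ in range(50):
    x = cs_projection_step(x, phi, y)
print("measurement residual:", np.linalg.norm(y - phi @ x))
```

In the paper this projection prior is embedded inside learned transformer blocks rather than iterated as above; the sketch only shows the underlying CS operations the architecture builds on.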
Pages: 2974-2988
Number of pages: 15
Related papers
39 records in total
  • [1] From Patch to Pixel: A Transformer-Based Hierarchical Framework for Compressive Image Sensing
    Gan, Hongping
    Shen, Minghe
    Hua, Yi
    Ma, Chunyan
    Zhang, Tao
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2023, 9 : 133 - 146
  • [2] RockFormer: A U-Shaped Transformer Network for Martian Rock Segmentation
    Liu, Haiqiang
    Yao, Meibao
    Xiao, Xueming
    Xiong, Yonggang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [3] MBUTransNet: multi-branch U-shaped network fusion transformer architecture for medical image segmentation
    Qiao, JunBo
    Wang, Xing
    Chen, Ji
    Liu, MingTao
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2023, 18 (10) : 1895 - 1902
  • [4] A U-Shaped Convolution-Aided Transformer with Double Attention for Hyperspectral Image Classification
    Qin, Ruiru
    Wang, Chuanzhi
    Wu, Yongmei
    Du, Huafei
    Lv, Mingyun
    REMOTE SENSING, 2024, 16 (02)
  • [5] U2-Former: Nested U-Shaped Transformer for Image Restoration via Multi-View Contrastive Learning
    Feng, Xin
    Ji, Haobo
    Pei, Wenjie
    Li, Jinxing
    Lu, Guangming
    Zhang, David
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 168 - 181
  • [6] GUFORMER: a gradient-aware U-shaped transformer neural network for real image denoising
    Bai, Xuefei
    Wan, Yongsong
    Wang, Weiming
    Zhou, Bin
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [7] Permutation invariant self-attention infused U-shaped transformer for medical image segmentation
    Patil, Sanjeet S.
    Ramteke, Manojkumar
    Rathore, Anurag S.
    NEUROCOMPUTING, 2025, 625
  • [8] U-Shaped Transformer With Frequency-Band Aware Attention for Speech Enhancement
    Li, Yi
    Sun, Yang
    Wang, Wenwu
    Naqvi, Syed Mohsen
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 1511 - 1521
  • [9] A deep supervised transformer U-shaped full-resolution residual network for the segmentation of breast ultrasound image
    Zhou, Jiale
    Hou, Zuoxun
    Lu, Hongyan
    Wang, Wenhan
    Zhao, Wanchen
    Wang, Zenan
    Zheng, Dezhi
    Wang, Shuai
    Tang, Wenzhong
    Qu, Xiaolei
    MEDICAL PHYSICS, 2023, 50 (12) : 7513 - 7524