ResU-Former: Advancing Remote Sensing Image Segmentation with Swin Residual Transformer for Precise Global-Local Feature Recognition and Visual-Semantic Space Learning

Cited by: 3
Authors
Li, Hanlu [1]
Li, Lei [2]
Zhao, Liangyu [1]
Liu, Fuxiang [1]
Affiliations
[1] Beijing Inst Technol, Minist Educ, Key Lab Dynam & Control Flight Vehicle, Beijing 100081, Peoples R China
[2] Aerosp Tianmu Chongqing Satellite Sci & Technol Co, Chongqing 400000, Peoples R China
Keywords
semantic segmentation; transformer; balance between visual and semantic space; enhancement of both global and local aspects
DOI
10.3390/electronics13020436
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In the field of remote sensing image segmentation, achieving high accuracy and efficiency in diverse and complex environments remains a challenge. There is also a notable imbalance between the low-level features and the high-level semantic information embedded in remote sensing images, and gains in both global and local recognition are limited by multi-scale remote sensing scenery and imbalanced class distributions. These challenges are further compounded by inaccurate local localization during segmentation and the neglect of small-scale features. To balance the visual and semantic spaces, increase both global and local recognition accuracy, and improve the flexibility of input-scale features while supplementing global contextual information, we propose a U-shaped hierarchical structure called ResU-Former. Its incorporation of the Swin Residual Transformer block enables efficient segmentation of objects of varying sizes against complex backgrounds, a common scenario in remote sensing datasets. With this specially designed block as its fundamental unit, ResU-Former fully exploits and refines the available information, optimizing semantic segmentation in complex remote sensing scenarios. Experimental results on benchmark datasets such as Vaihingen, where ResU-Former achieves an Overall Accuracy of 81.5%, demonstrate its potential to improve segmentation tasks across various remote sensing applications.
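
The record contains no code; as a rough, hypothetical sketch of what the Swin Residual Transformer block described in the abstract might look like, the following PyTorch snippet wraps a standard Swin-style block (window attention followed by an MLP, each behind a pre-norm residual) in an additional identity shortcut. The class names, window size, and the exact placement of the outer residual are assumptions, and shifted windows and relative position bias are omitted for brevity.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention over non-overlapping windows
    (shifted windows and relative position bias omitted for brevity)."""
    def __init__(self, dim, window_size, num_heads):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H and W divisible by window_size
        B, H, W, C = x.shape
        ws = self.window_size
        # Partition the feature map into (B * num_windows, ws*ws, C) sequences.
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
        x, _ = self.attn(x, x, x)
        # Reverse the window partition back to (B, H, W, C).
        x = x.view(B, H // ws, W // ws, ws, ws, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return x

class SwinResidualBlock(nn.Module):
    """Hypothetical Swin Residual Transformer block: a Swin-style block
    (window attention + MLP, each with a pre-norm residual) wrapped in
    an extra identity shortcut, as the block's name suggests."""
    def __init__(self, dim, window_size=8, num_heads=4, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = WindowAttention(dim, window_size, num_heads)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):      # x: (B, H, W, C)
        shortcut = x           # outer residual around the whole block
        x = x + self.attn(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x + shortcut

# Example: one block on a 64x64 feature map with 96 channels.
block = SwinResidualBlock(dim=96)
feats = torch.randn(2, 64, 64, 96)
print(block(feats).shape)  # torch.Size([2, 64, 64, 96])
```

In a U-shaped encoder-decoder of the kind the abstract describes, blocks like this would be stacked at each resolution level, with downsampling between encoder stages and skip connections into the decoder.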
Pages: 21
相关论文
共 39 条
  • [1] Attention Augmented Convolutional Networks
    Bello, Irwan
    Zoph, Barret
    Vaswani, Ashish
    Shlens, Jonathon
    Le, Quoc V.
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3285 - 3294
  • [2] Very High-Resolution Remote Sensing: Challenges and Opportunities
    Benediktsson, Jon Atli
    Chanussot, Jocelyn
    Moon, Wooil M.
    [J]. PROCEEDINGS OF THE IEEE, 2012, 100 (06) : 1907 - 1910
  • [3] The Lovasz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks
    Berman, Maxim
    Triki, Amal Rannen
    Blaschko, Matthew B.
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4413 - 4421
  • [4] Accurate medium-range global weather forecasting with 3D neural networks
    Bi, Kaifeng
    Xie, Lingxi
    Zhang, Hengheng
    Chen, Xin
    Gu, Xiaotao
    Tian, Qi
    [J]. NATURE, 2023, 619 (7970) : 533 - +
  • [5] Cao Hu, 2023, Computer Vision - ECCV 2022 Workshops: Proceedings. Lecture Notes in Computer Science (13803), P205, DOI 10.1007/978-3-031-25066-8_9
  • [6] Chaurasia A, 2017, 2017 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP)
  • [7] Chen LC, 2017, Arxiv, DOI arXiv:1706.05587
  • [8] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Chen, Liang-Chieh
    Zhu, Yukun
    Papandreou, George
    Schroff, Florian
    Adam, Hartwig
    [J]. COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 833 - 851
  • [9] DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
    Chen, Liang-Chieh
    Papandreou, George
    Kokkinos, Iasonas
    Murphy, Kevin
    Yuille, Alan L.
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (04) : 834 - 848
  • [10] Dosovitskiy A, 2021, Arxiv, DOI [arXiv:2010.11929, 10.48550/arXiv.2010.11929, DOI 10.48550/ARXIV.2010.11929]