Semantic Segmentation of Aerial Imagery Using U-Net with Self-Attention and Separable Convolutions

Cited by: 1
Authors
Khan, Bakht Alam [1 ]
Jung, Jin-Woo [1 ]
Affiliations
[1] Dongguk Univ, Dept Comp Sci & Engn, Seoul 04620, South Korea
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 9
Keywords
semantic segmentation; U-Net; self-attention; separable convolutions; aerial imagery; remote sensing; RESOLUTION; SATELLITE; NETWORK;
DOI
10.3390/app14093712
Chinese Library Classification
O6 [Chemistry]
Subject Classification Code
0703
Abstract
This research addresses the task of improving accuracy in the semantic segmentation of aerial imagery, which is essential for applications such as urban planning and environmental monitoring. The study uses the Intersection over Union (IoU) score as its evaluation metric and employs the Patchify library, with a patch size of 256, to tile the images and augment the dataset, which is then split into training and testing sets. The core of the investigation is a novel architecture that combines a U-Net framework with self-attention mechanisms and separable convolutions. The self-attention mechanisms improve the model's grasp of image context, while the separable convolutions speed up training and contribute to overall efficiency. The proposed model delivers a substantial accuracy improvement over the previous state-of-the-art Dense Plus U-Net, reaching 91% accuracy versus the former's 86%. Visual results, including original image patches, their ground-truth masks, and the predicted masks, demonstrate the model's segmentation quality, marking a significant advance in aerial image analysis and underscoring the value of these architectural elements for accurate and efficient segmentation.
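As a minimal sketch of the data preparation and evaluation steps mentioned in the abstract, the snippet below uses the Patchify library to tile an aerial image into non-overlapping 256 x 256 patches and computes a mean IoU over integer-coded masks. The function names, the synthetic input, and the NumPy-based IoU helper are illustrative assumptions, not the authors' code.

    import numpy as np
    from patchify import patchify  # pip install patchify

    def tile(image: np.ndarray, patch: int = 256, channels: int = 3) -> np.ndarray:
        """Cut an (H, W, C) image into non-overlapping patch x patch tiles."""
        # step == patch size, so neighbouring tiles do not overlap
        grid = patchify(image, (patch, patch, channels), step=patch)
        return grid.reshape(-1, patch, patch, channels)

    def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
        """Mean Intersection over Union for integer-coded segmentation masks."""
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union:
                ious.append(inter / union)
        return float(np.mean(ious))

    # Illustrative usage on a synthetic 1024 x 1024 RGB image.
    image = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
    patches = tile(image)  # shape: (16, 256, 256, 3)

The abstract does not state exactly where the self-attention and separable convolutions sit inside the U-Net, so the following Keras fragment is only a sketch, assuming depthwise-separable convolutions in each stage and multi-head self-attention applied to the flattened bottleneck feature map; layer placement, head counts, and filter sizes are assumptions rather than the published architecture.

    import tensorflow as tf
    from tensorflow.keras import layers

    def separable_block(x, filters: int):
        # U-Net double-conv stage with depthwise-separable convolutions
        # in place of standard Conv2D to cut computation.
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def self_attention(x, num_heads: int = 4):
        # Flatten the spatial grid, let every location attend to every
        # other location, then restore the grid shape.
        h, w, c = x.shape[1], x.shape[2], x.shape[3]
        seq = layers.Reshape((h * w, c))(x)
        attended = layers.MultiHeadAttention(num_heads=num_heads,
                                             key_dim=c // num_heads)(seq, seq)
        return layers.Reshape((h, w, c))(attended)

    # Encoder path down to an attention-augmented bottleneck for 256 x 256 patches;
    # the decoder with skip connections would mirror the encoder.
    inputs = tf.keras.Input(shape=(256, 256, 3))
    e1 = separable_block(inputs, 32)
    e2 = separable_block(layers.MaxPooling2D()(e1), 64)
    e3 = separable_block(layers.MaxPooling2D()(e2), 128)
    bottleneck = self_attention(separable_block(layers.MaxPooling2D()(e3), 256))
    encoder = tf.keras.Model(inputs, bottleneck)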
Pages: 15