A MODIFIED U-NET FOR OIL SPILL SEMANTIC SEGMENTATION IN SAR IMAGES

Cited: 0
Authors
Chang, Lena [1 ,2 ]
Chen, Yi-Ting [3 ]
Chang, Yang-Lang [4 ]
Affiliations
[1] Natl Taiwan Ocean Univ, Dept Commun Nav & Control Engn, Keelung, Taiwan
[2] Natl Taiwan Ocean Univ, Intelligent Maritime Res IMRC, Keelung, Taiwan
[3] Natl Taiwan Ocean Univ, Dept Elect Engn, Keelung, Taiwan
[4] Natl Taipei Univ Technol, Dept Elect Engn, Taipei, Taiwan
Source
IGARSS 2024 - 2024 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, IGARSS 2024 | 2024
Keywords
SAR; oil spills; look-alikes; segmentation;
DOI
10.1109/IGARSS53475.2024.10642291
Chinese Library Classification
P9 [Physical Geography];
Discipline codes
0705 ; 070501 ;
Abstract
Oil spills are considered one of the major threats to the marine and coastal environment. Synthetic aperture radar (SAR) sensors are frequently employed for oil spill monitoring due to their ability to operate effectively under various weather and illumination conditions. Oil slicks exhibit distinctively low radar backscatter intensity, appearing as dark regions in SAR images. This characteristic enables the monitoring and automatic detection of oil spills in SAR imagery. U-Net is one of the most commonly employed semantic segmentation models, known for achieving strong segmentation performance even with limited training data. In this study, a modified lightweight U-Net model was introduced to enhance the performance of maritime multi-class segmentation in SAR images. First, a lightweight MobileNetV3 model served as the backbone for the U-Net encoder to perform feature extraction. Second, the convolutional block attention module (CBAM) was employed to enhance the network's capability in extracting multiscale features and to speed up computation. The experimental results showed that the proposed method achieves a mean Intersection-over-Union (mIoU) of 77.07%. Compared with the original U-Net model, the proposed architecture improves the mIoU by about 4.88%.
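The CBAM attention named in the abstract applies channel attention (a shared MLP over average- and max-pooled channel descriptors) followed by spatial attention (a convolution over channel-wise average and max maps). The sketch below is a minimal NumPy illustration of that two-stage structure; the shapes (C=8, reduction ratio 4, 7x7 spatial kernel) and the randomly initialized weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). Shared two-layer MLP (w1, w2) applied to both the
    # average-pooled and max-pooled channel descriptors, summed, then sigmoid.
    avg = x.mean(axis=(1, 2))                      # (C,)
    mx = x.max(axis=(1, 2))                        # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # ReLU between layers
    scale = sigmoid(mlp(avg) + mlp(mx))            # per-channel weights in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, k):
    # x: (C, H, W); k: (7, 7, 2) kernel convolved over the stacked
    # channel-wise average and max maps ("same" padding, naive loop).
    avg_map = x.mean(axis=0)
    max_map = x.max(axis=0)
    stacked = np.stack([avg_map, max_map], axis=-1)        # (H, W, 2)
    padded = np.pad(stacked, ((3, 3), (3, 3), (0, 0)))
    h, w = avg_map.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 7, j:j + 7, :] * k)
    return x * sigmoid(out)[None, :, :]                    # per-pixel weights

def cbam(x, w1, w2, k):
    # CBAM: refine features channel-wise first, then spatially.
    return spatial_attention(channel_attention(x, w1, w2), k)

# Toy feature map with hypothetical random weights (reduction ratio 4).
rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
k = rng.standard_normal((7, 7, 2)) * 0.1
y = cbam(x, w1, w2, k)
print(y.shape)  # -> (8, 16, 16): output keeps the input feature-map shape
```

Because both attention maps pass through a sigmoid, CBAM only rescales features (each output magnitude is at most the input magnitude), which is what lets it be dropped into an encoder without changing tensor shapes.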
Pages: 2945 - 2948
Page count: 4
Related papers
50 records
  • [1] Group Equivariant U-Net for the Semantic Segmentation of SAR Images
    Turkmenli, Ilter
    Aptoula, Erchan
    Kayabol, Koray
    2022 30TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU, 2022,
  • [2] Breast tumor segmentation in ultrasound images: comparing U-Net and U-Net++
    de Oliveira, Carlos Eduardo Gonçalves
    Vieira, Sílvio Leão
    Paranaiba, Caio Felipe Brito
    Itikawa, Emerson Nobuyuki
    RESEARCH ON BIOMEDICAL ENGINEERING, 2025, 41 (01)
  • [3] Improved Brain Tumor Segmentation in MR Images with a Modified U-Net
    Alquran, Hiam
    Alslatie, Mohammed
    Rababah, Ali
    Mustafa, Wan Azani
    APPLIED SCIENCES-BASEL, 2024, 14 (15):
  • [5] Full-Scale Aggregated MobileUNet: An Improved U-Net Architecture for SAR Oil Spill Detection
    Chen, Yi-Ting
    Chang, Lena
    Wang, Jung-Hua
    SENSORS, 2024, 24 (12)
  • [6] Multifeature Semantic Complementation Network for Marine Oil Spill Localization and Segmentation Based on SAR Images
    Fan, Jianchao
    Zhang, Shuai
    Wang, Xinzhe
    Xing, Jun
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 3771 - 3783
  • [8] Attention-augmented U-Net (AA-U-Net) for semantic segmentation
    Rajamani, Kumar T.
    Rani, Priya
    Siebert, Hanna
    ElagiriRamalingam, Rajkumar
    Heinrich, Mattias P.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 981 - 989
  • [9] Modified U-Net for cytological medical image segmentation
    Benazzouz, Mourtada
    Benomar, Mohammed Lamine
    Moualek, Youcef
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2022, 32 (05) : 1761 - 1773
  • [10] Modified U-NET Architecture for Segmentation of Skin Lesion
    Anand, Vatsala
    Gupta, Sheifali
    Koundal, Deepika
    Nayak, Soumya Ranjan
    Barsocchi, Paolo
    Bhoi, Akash Kumar
    SENSORS, 2022, 22 (03)