Comparison of CNNs and Vision Transformers-Based Hybrid Models Using Gradient Profile Loss for Classification of Oil Spills in SAR Images

Cited by: 17
Authors
Basit, Abdul [1 ]
Siddique, Muhammad Adnan [1 ]
Bhatti, Muhammad Khurram [1 ]
Sarfraz, Muhammad Saquib [2 ]
Affiliations
[1] Informat Technol Univ Punjab ITU, Remote Sensing & Spatial Analyt Lab, Lahore 54000, Pakistan
[2] Karlsruhe Inst Technol KIT, Inst Anthropomat & Robot, D-76131 Karlsruhe, Germany
Keywords
oil spills; synthetic aperture radar (SAR); deep convolutional neural networks (DCNNs); vision transformers (ViTs); deep learning; semantic segmentation; marine pollution; remote sensing; neural network; segmentation
DOI
10.3390/rs14092085
CLC Classification
X [Environmental Science, Safety Science]
Subject Classification
08; 0830
Abstract
Oil spillage over a sea or ocean surface is a threat to marine and coastal ecosystems. Spaceborne synthetic aperture radar (SAR) data have been used effectively for the detection of oil spills due to their operational capability in all-day, all-weather conditions. The problem is often modeled as a semantic segmentation task: the images need to be segmented into multiple regions of interest such as sea surface, oil spill, lookalikes, ships, and land. Training a classifier for this task is particularly challenging since there is an inherent class imbalance. In this work, we train a convolutional neural network (CNN) with multiple feature extractors for pixel-wise classification and introduce a new loss function, namely, the "gradient profile" (GP) loss, which is a constituent of the more generic spatial profile loss proposed for image translation problems. For training, testing, and performance evaluation, we use a publicly available dataset with selected oil spill events verified by the European Maritime Safety Agency (EMSA). The results show that the proposed CNN, trained with a combination of GP, Jaccard, and focal loss functions, can detect oil spills with an intersection over union (IoU) value of 63.95%. The IoU values for the sea surface, lookalike, ship, and land classes are 96.00%, 60.87%, 74.61%, and 96.80%, respectively. The mean intersection over union (mIoU) value across all classes is 78.45%, a 13% improvement over the state of the art for this dataset. Moreover, we provide extensive ablations on different CNN and vision transformer (ViT)-based hybrid models to demonstrate the effectiveness of adding GP loss as an additional loss function for training. The results show that GP loss significantly improves the mIoU and F1 scores for CNNs as well as ViT-based hybrid models. GP loss turns out to be a promising loss function in the context of deep learning with SAR images.
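As a reading aid, the GP loss described in the abstract can be sketched as follows. This is a minimal, dependency-free illustration assuming the row/column cosine-similarity formulation of the spatial profile loss applied to image gradients; the function names (`gp_loss`, `gradients`, `cosine`) and the exact gradient operator (forward differences) are this sketch's assumptions, not the paper's implementation.

```python
# Hedged sketch of a "gradient profile" (GP) loss on 2-D maps, assuming
# row/column cosine similarity over image gradients (spatial-profile style).
import math

def cosine(u, v):
    """Cosine similarity; two zero vectors count as identical (1.0)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0.0 and nv == 0.0:
        return 1.0
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)

def gradients(img):
    """Forward-difference gradients along x and y for a 2-D list of floats."""
    h, w = len(img), len(img[0])
    gx = [[img[i][j + 1] - img[i][j] for j in range(w - 1)] for i in range(h)]
    gy = [[img[i + 1][j] - img[i][j] for j in range(w)] for i in range(h - 1)]
    return gx, gy

def gp_loss(pred, target):
    """1 - mean cosine similarity over matching rows and columns of gradients."""
    sims = []
    for gp, gt in zip(gradients(pred), gradients(target)):
        # Row profiles of the gradient maps.
        sims += [cosine(rp, rt) for rp, rt in zip(gp, gt)]
        # Column profiles (transpose and compare column by column).
        sims += [cosine(cp, ct) for cp, ct in zip(zip(*gp), zip(*gt))]
    return 1.0 - sum(sims) / len(sims)

# Identical maps give zero loss; orthogonal edge structure increases it.
a = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]  # vertical edge
b = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]  # horizontal edge
print(gp_loss(a, a))  # 0.0
print(gp_loss(a, b) > 0.5)  # True
```

In the paper's training setup this term is reported to be combined with Jaccard and focal losses; here it would simply be one more weighted summand in the total loss.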
Pages: 18