Rotated ship target detection algorithm in SAR images based on global feature fusion

Cited by: 0
Authors
Xue, Fengtao [1 ]
Sun, Tianyu [2 ]
Yang, Yimin [2 ]
Yang, Jian [2 ]
Affiliations
[1] Beijing Institute of Remote Sensing Equipment, Beijing
[2] Department of Electronic Engineering, Tsinghua University, Beijing
Source
Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics | 2024, Vol. 46, No. 12
Keywords
feature fusion; neural network; rotated target detection; ship detection; synthetic aperture radar (SAR);
DOI
10.12305/j.issn.1001-506X.2024.12.13
Abstract
Conventional models are not effective at detecting inshore rotated ship targets in synthetic aperture radar (SAR) images. To solve this problem, a method for detecting rotated ship targets in SAR images based on global feature fusion is proposed. Firstly, a global attention feature pyramid network is used to fuse features of different levels, which shortens the transmission path from the bottom features to the top features. Secondly, positional embedding is added in the image-block merging stage to reduce the loss of location information caused by down-sampling. Finally, a rotated feature alignment network is used to generate high-quality anchor points and rotated aligned features for classification and coordinate regression. The proposed method achieves an average detection precision of 0.8948 on the rotated ship detection dataset in SAR images (RSDD-SAR) when the rotated intersection over union (IoU) threshold is 0.5, and shows good detection performance for both inshore and offshore ships. © 2024 Chinese Institute of Electronics. All rights reserved.
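The image-block merging step with positional embedding described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the Swin-style 2×2 merging order, the tensor shapes, and the function name are assumptions, and the positional embedding would normally be a learned parameter rather than a fixed array.

```python
import numpy as np

def patch_merge_with_pos(feat, pos_embed):
    """Downsample a feature map by merging each 2x2 block of patches
    (channel concatenation, as in Swin-style patch merging) and add a
    positional embedding to re-inject location cues lost in down-sampling.

    feat:      (H, W, C) feature map, H and W even
    pos_embed: (H//2, W//2, 4*C) positional embedding
    returns:   (H//2, W//2, 4*C) merged features with position added
    """
    # Gather the four corners of every 2x2 block and stack along channels
    merged = np.concatenate(
        [feat[0::2, 0::2],   # top-left patches
         feat[1::2, 0::2],   # bottom-left patches
         feat[0::2, 1::2],   # top-right patches
         feat[1::2, 1::2]],  # bottom-right patches
        axis=-1)             # shape (H/2, W/2, 4C)
    return merged + pos_embed
```

In the paper's pipeline this addition happens inside the fusion backbone, so later stages still carry explicit position information for the rotated-anchor regression.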
Pages: 4044-4053
Page count: 9
Related Papers
30 entries in total
  • [21] ZHANG Y P, LU D D, QIU X L, et al., Scattering-point-guided RPN for oriented ship detection in SAR images, Remote Sensing, 15, 5, pp. 1411-1432, (2023)
  • [22] HAN J M, DING J, LI J, et al., Align deep features for oriented object detection, IEEE Trans. on Geoscience and Remote Sensing, 60, (2021)
  • [23] HE K M, ZHANG X Y, REN S Q, et al., Deep residual learning for image recognition, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, (2016)
  • [24] LIU Z, LIN Y T, CAO Y, et al., Swin Transformer: hierarchical vision transformer using shifted windows, Proc. of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022, (2021)
  • [25] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al., An image is worth 16×16 words: transformers for image recognition at scale
  • [26] LIN T Y, DOLLAR P, GIRSHICK R, et al., Feature pyramid networks for object detection, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117-2125, (2017)
  • [27] YANG S, PEI Z Q, ZHOU F, et al., Rotated faster R-CNN for oriented object detection in aerial images, Proc. of the 3rd International Conference on Robot Systems and Applications, pp. 35-39, (2020)
  • [28] LIN T Y, GOYAL P, GIRSHICK R, et al., Focal loss for dense object detection, Proc. of the IEEE International Conference on Computer Vision, pp. 2980-2988, (2017)
  • [29] XIE X X, CHENG G, WANG J B, et al., Oriented R-CNN for object detection, Proc. of the IEEE/CVF International Conference on Computer Vision, pp. 3520-3529, (2021)
  • [30] YANG X, YAN J C, FENG Z M, et al., R3Det: refined single-stage detector with feature refinement for rotating object