DS-YOLOv8-Based Object Detection Method for Remote Sensing Images

Cited by: 66
Authors
Shen, Lingyun [1]
Lang, Baihe [2]
Song, Zhengxun [2,3]
Affiliations
[1] Taiyuan Inst Technol, Dept Elect Engn, Taiyuan 030008, Peoples R China
[2] Changchun Univ Sci & Technol, Sch Elect & Informat Engn, Changchun 130022, Peoples R China
[3] Changchun Univ Sci & Technol, Overseas Expertise Intro Project Discipline Innova, Changchun 130022, Peoples R China
Keywords
Object detection; deformable convolution; shuffle attention; self-calibration; Wise-IoU; network
DOI
10.1109/ACCESS.2023.3330844
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
The improved YOLOv8 model (DCN_C2f + SC_SA + YOLOv8, hereinafter referred to as DS-YOLOv8) is proposed to address object detection challenges in complex remote sensing image tasks. It targets two limitations of the YOLO backbone: the restricted receptive field caused by fixed convolutional kernels, and the inadequate multi-scale feature learning that arises when the spatial and channel attention fusion mechanism cannot adapt to the feature distribution of the input data. DS-YOLOv8 introduces the Deformable Convolution C2f (DCN_C2f) module in the backbone network to enable adaptive adjustment of the network's receptive field. In addition, a lightweight Self-Calibrating Shuffle Attention (SC_SA) module is designed for the spatial and channel attention mechanisms; it adaptively encodes contextual information, prevents the loss of feature detail caused by repeated convolutions, and improves the representation of multi-scale, occluded, and small objects. The model further adopts the dynamic non-monotonic focusing mechanism of Wise-IoU as its bounding-box position regression loss to enhance localization. Experiments on the public RSOD, NWPU VHR-10, DIOR, and VEDAI datasets yield average mAP@0.5 values of 97.7%, 92.9%, 89.7%, and 78.9%, and average mAP@0.5:0.95 values of 74.0%, 64.3%, 70.7%, and 51.1%, respectively, while the model maintains real-time inference. Compared with the YOLOv8 series, DS-YOLOv8 achieves significant performance improvements and outperforms other mainstream models in detection accuracy.
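The abstract describes the DCN_C2f module only at a high level. The following is a minimal sketch, not the authors' implementation, of the underlying idea: a C2f-style residual bottleneck whose 3x3 convolution is replaced by a deformable convolution, so the sampling grid (and hence the receptive field) adapts to the input. It assumes PyTorch with torchvision's DeformConv2d; the class name DeformableBottleneck and its internal layer layout are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBottleneck(nn.Module):
    """Residual bottleneck with an offset-predicting deformable 3x3 convolution (illustrative)."""

    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # 2 offsets (dx, dy) for each of the 3x3 kernel positions, predicted per pixel.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.dconv = DeformConv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.reduce(x))
        offsets = self.offset(y)                   # adaptive sampling locations
        y = self.act(self.bn(self.dconv(y, offsets)))
        return x + y                               # residual connection, as in C2f bottlenecks


# Example: a 64-channel 80x80 feature map passes through with its shape preserved.
feat = torch.randn(1, 64, 80, 80)
out = DeformableBottleneck(64)(feat)               # -> torch.Size([1, 64, 80, 80])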
Pages: 125122-125137
Number of pages: 16