DS-YOLOv8-Based Object Detection Method for Remote Sensing Images

Cited by: 66
Authors
Shen, Lingyun [1 ]
Lang, Baihe [2 ]
Song, Zhengxun [2 ,3 ]
Affiliations
[1] Taiyuan Inst Technol, Dept Elect Engn, Taiyuan 030008, Peoples R China
[2] Changchun Univ Sci & Technol, Sch Elect & Informat Engn, Changchun 130022, Peoples R China
[3] Changchun Univ Sci & Technol, Overseas Expertise Intro Project Discipline Innova, Changchun 130022, Peoples R China
Keywords
Object detection; deformable convolution; shuffle attention; self-calibration; Wise-IoU
DOI
10.1109/ACCESS.2023.3330844
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
An improved YOLOv8 model (DCN_C2f+SC_SA+YOLOv8, hereinafter DS-YOLOv8) is proposed to address object detection challenges in complex remote sensing imagery. It targets two limitations of the baseline: the restricted receptive field caused by the fixed convolutional kernels of the YOLO backbone, and inadequate multi-scale feature learning because the fused spatial and channel attention cannot adapt to the feature distribution of the input data. DS-YOLOv8 introduces a Deformable Convolution C2f (DCN_C2f) module in the backbone so that the network's receptive field adjusts adaptively. In addition, a lightweight Self-Calibrating Shuffle Attention (SC_SA) module is designed for the spatial and channel attention mechanisms; it adaptively encodes contextual information, prevents the loss of feature detail caused by repeated convolutions, and improves the representation of multi-scale, occluded, and small objects. The model further adopts Wise-IoU, with its dynamic non-monotonic focusing mechanism, as the bounding-box regression loss. Experiments on the public RSOD, NWPU VHR-10, DIOR, and VEDAI datasets show that DS-YOLOv8 achieves average mAP@0.5 values of 97.7%, 92.9%, 89.7%, and 78.9%, and average mAP@0.5:0.95 values of 74.0%, 64.3%, 70.7%, and 51.1%, respectively, while maintaining real-time inference. Compared with the YOLOv8 series, DS-YOLOv8 delivers significant accuracy improvements and outperforms other mainstream detectors.
Pages: 125122-125137
Number of pages: 16
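To make the receptive-field adaptation described in the abstract concrete, below is a minimal PyTorch sketch of a modulated deformable convolution block of the kind the DCN_C2f module builds on, using torchvision.ops.DeformConv2d. The block name DeformableConvBlock, the channel sizes, and how such a block would be wired into YOLOv8's C2f module are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a modulated deformable 3x3
# convolution of the kind DCN_C2f builds on, via torchvision's DeformConv2d.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableConvBlock(nn.Module):
    """Predicts per-position sampling offsets and modulation masks from the
    input, so the effective receptive field adapts to object shape instead of
    staying a fixed square grid."""

    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        self.kk = k * k
        # 2 offset values (dx, dy) plus 1 modulation scalar per kernel position
        self.offset_mask = nn.Conv2d(c_in, 3 * self.kk, k, padding=k // 2)
        self.dcn = DeformConv2d(c_in, c_out, k, padding=k // 2)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        om = self.offset_mask(x)
        offset = om[:, : 2 * self.kk]            # per-position sampling offsets
        mask = om[:, 2 * self.kk :].sigmoid()    # modulation weights in [0, 1]
        return self.act(self.bn(self.dcn(x, offset, mask)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 40, 40)               # a backbone feature map
    y = DeformableConvBlock(64, 128)(x)
    print(y.shape)                                # torch.Size([1, 128, 40, 40])
```

In the paper's design, blocks of this kind replace fixed-kernel convolutions inside the C2f bottlenecks of the backbone; the offsets let sampling locations follow object geometry, which is what enables the adaptive receptive field the abstract refers to.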