LAG: Layered Objects to Generate Better Anchors for Object Detection in Aerial Images

Cited by: 6
Authors
Wan, Xueqiang [1 ]
Yu, Jiong [1 ,2 ]
Tan, Haotian [2 ]
Wang, Junjie [1 ]
Affiliations
[1] Xinjiang Univ, Sch Software, Urumqi 830091, Peoples R China
[2] Xinjiang Univ, Coll Informat Sci & Engn, Urumqi 830046, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
anchor generation algorithm; object detection; YOLO; aerial images;
DOI
10.3390/s22103891
CLC Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
You Only Look Once (YOLO) series detectors are well suited to object detection in aerial images because of their excellent real-time performance. Their accuracy, however, depends heavily on the anchors generated by clustering the training set, and the effectiveness of general anchor generation algorithms is limited by the unusual data distribution of aerial image datasets. Because the anchors for every layer are generated uniformly and are affected by the overall data distribution, divergence in the number of objects at different sizes can cause anchors to overfit some objects or be assigned to suboptimal layers. Motivated by experiments under different anchor settings, we propose the Layered Anchor Generation (LAG) algorithm. In LAG, objects are first partitioned into layers by their diagonal lengths, and the anchors of each layer are then generated by analyzing the diagonals and aspect ratios of the objects in the corresponding layer; in this way, the anchors of each layer better match that layer's detection range. Experimental results show that the algorithm generalizes well: it significantly improves the performance of You Only Look Once version 3 (YOLOv3), You Only Look Once version 5 (YOLOv5), You Only Learn One Representation (YOLOR), and Cascade Regions with CNN features (Cascade R-CNN) on the Vision Meets Drone (VisDrone) dataset and the object DetectIon in Optical Remote sensing images (DIOR) dataset, and these improvements are cost-free.
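The layering idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the diagonal thresholds, the number of anchors per layer, and the use of plain k-means on (width, height) within each layer are all assumptions for demonstration purposes.

```python
import numpy as np

def layered_anchor_generation(boxes, diag_thresholds, anchors_per_layer=3, iters=50):
    """Illustrative sketch of per-layer anchor generation.

    boxes: (N, 2) array of object (width, height) from a training set.
    diag_thresholds: ascending diagonal cut-offs splitting objects into
        layers (hypothetical values; the paper derives its own layering).
    Returns a list with one (k, 2) anchor array per layer.
    """
    boxes = np.asarray(boxes, dtype=float)
    # Layer each object by its diagonal length, as LAG does.
    diagonals = np.hypot(boxes[:, 0], boxes[:, 1])
    layer_ids = np.digitize(diagonals, diag_thresholds)

    anchors = []
    for layer in range(len(diag_thresholds) + 1):
        layer_boxes = boxes[layer_ids == layer]
        if len(layer_boxes) == 0:
            anchors.append(np.empty((0, 2)))
            continue
        k = min(anchors_per_layer, len(layer_boxes))
        # Cluster (w, h) only within this layer, so the resulting anchors
        # reflect just the objects this detection layer is responsible for.
        rng = np.random.default_rng(0)
        centers = layer_boxes[rng.choice(len(layer_boxes), k, replace=False)].copy()
        for _ in range(iters):
            dists = np.linalg.norm(layer_boxes[:, None] - centers[None], axis=2)
            assign = dists.argmin(axis=1)
            for j in range(k):
                members = layer_boxes[assign == j]
                if len(members):
                    centers[j] = members.mean(axis=0)
        anchors.append(np.sort(centers, axis=0))
    return anchors
```

The key contrast with standard anchor generation is the order of operations: objects are split into layers first and clustered second, rather than clustering the whole training set and distributing the resulting anchors across layers afterwards.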
Pages: 18