Corner-Point and Foreground-Area IoU Loss: Better Localization of Small Objects in Bounding Box Regression

Cited by: 6
Authors
Cai, Delong [1,2]
Zhang, Zhaoyun [1 ]
Zhang, Zhi [1 ]
Affiliations
[1] DongGuan Univ Technol, Sch Elect Engn & Intelligentizat, Dongguan 523000, Peoples R China
[2] DongGuan Univ Technol, Sch Comp Sci & Technol, Dongguan 523000, Peoples R China
Keywords
object detection; loss function; small object; bounding box regression;
DOI
10.3390/s23104961
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Bounding box regression is a crucial step in object detection, directly affecting the localization performance of the detected objects. In small object detection especially, an excellent bounding box regression loss can significantly alleviate the problem of missed small objects. However, the broad Intersection over Union (IoU) losses, also known as Broad IoU (BIoU) losses, used in bounding box regression suffer from two major problems: (i) BIoU losses cannot provide more effective fitting information as the predicted box approaches the target box, resulting in slow convergence and inaccurate regression results; (ii) most localization loss functions do not fully utilize the spatial information of the target, namely the target's foreground area, during the fitting process. To overcome these issues, this paper investigates the potential of bounding box regression losses and proposes the Corner-point and Foreground-area IoU (CFIoU) loss function. First, we use the normalized corner-point distance between the two boxes instead of the normalized center-point distance used in BIoU losses, which effectively suppresses the degradation of BIoU losses to the plain IoU loss when the two boxes are close. Second, we add adaptive target information to the loss function to provide richer target information for optimizing the bounding box regression process, especially for small object detection. Finally, we conducted simulation experiments on bounding box regression to validate our hypothesis. We also quantitatively compared the current mainstream BIoU losses and the proposed CFIoU loss on the public small-object datasets VisDrone2019 and SODA-D using the latest anchor-based YOLOv5 and anchor-free YOLOv8 object detection algorithms. The experimental results demonstrate that YOLOv5s (+3.12% Recall, +2.73% mAP@0.5, and +1.91% mAP@0.5:0.95) and YOLOv8s (+1.72% Recall and +0.60% mAP@0.5), both incorporating the CFIoU loss, achieved the highest performance improvement on the VisDrone2019 test set. Similarly, YOLOv5s (+6% Recall, +13.08% mAP@0.5, and +14.29% mAP@0.5:0.95) and YOLOv8s (+3.36% Recall, +3.66% mAP@0.5, and +4.05% mAP@0.5:0.95), both incorporating the CFIoU loss, also achieved the highest performance improvement on the SODA-D test set. These results indicate the effectiveness and superiority of the CFIoU loss in small object detection. Additionally, we conducted comparative experiments by combining the CFIoU loss and the BIoU losses with the SSD algorithm, which is not proficient in small object detection. The experimental results show that the SSD algorithm incorporating the CFIoU loss achieved the largest improvement in the AP (+5.59%) and AP75 (+5.37%) metrics, indicating that the CFIoU loss can also improve the performance of algorithms that are not proficient in small object detection.
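The abstract describes the two components of the CFIoU loss (a normalized corner-point distance penalty replacing the usual center-point distance, and an adaptive foreground-area term) but gives no formulas. The PyTorch sketch below is only a speculative reconstruction of that idea: the function name cfiou_loss_sketch, the use of the enclosing-box diagonal as the normalizer, and the particular area-based weight are assumptions modeled on how related BIoU losses such as DIoU are usually written, not the authors' actual formulation.

```python
# Minimal sketch of a CFIoU-style loss, reconstructed from the abstract only.
# The corner-point penalty and the foreground-area weight below are assumed
# forms, not the paper's exact definitions.
import torch

def cfiou_loss_sketch(pred, target, eps=1e-7):
    """pred, target: (N, 4) boxes in (x1, y1, x2, y2) format."""
    # Standard IoU term.
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box; its squared diagonal normalizes the distances,
    # as in DIoU/CIoU-style penalties.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    diag2 = cw ** 2 + ch ** 2 + eps

    # Corner-point penalty: squared distances between the matching top-left
    # and bottom-right corners, normalized by the enclosing-box diagonal.
    # Unlike a center-point penalty, this stays informative when the two
    # centers (nearly) coincide but the corners still differ.
    d_tl = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d_br = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    corner_penalty = (d_tl + d_br) / (2.0 * diag2)

    # Foreground-area term (hypothetical form): scale the penalty up for
    # small targets so their gradients are not dominated by large objects.
    area_weight = 1.0 / (torch.sqrt(area_t.clamp(min=0)) + 1.0)

    return 1.0 - iou + corner_penalty * (1.0 + area_weight)


# Usage: boxes in (x1, y1, x2, y2) pixel coordinates.
pred = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
target = torch.tensor([[12.0, 12.0, 48.0, 48.0]])
print(cfiou_loss_sketch(pred, target))
```

The small-target weighting shown here is one plausible way to realize the "adaptive target information" the abstract mentions; the paper's actual foreground-area term may take a different form.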
Pages: 17