Camellia oleifera trunks detection and identification based on improved YOLOv7

Cited: 0
Authors
Wang, Haorui [1 ]
Liu, Yang [1 ,2 ]
Luo, Hong [1 ]
Luo, Yuanyin [1 ]
Zhang, Yuyan [1 ]
Long, Fei [1 ]
Li, Lijun [1 ]
Affiliations
[1] Cent South Univ Forestry & Technol, Engn Res Ctr Forestry Equipment Hunan Prov, Changsha, Peoples R China
[2] Hunan Automot Engn Vocat Univ, Zhuzhou, Peoples R China
Keywords
attention mechanism; Camellia oleifera trunks; DSConv; EIoU; YOLOv7;
DOI
10.1002/cpe.8265
CLC number
TP31 [Computer Software];
Subject classification codes
081202 ; 0835 ;
Abstract
Camellia oleifera typically thrives in unstructured environments, making the identification of its trunks crucial for advancing agricultural robots towards modernization and sustainability. Traditional target detection algorithms, however, fall short in accurately identifying Camellia oleifera trunks, especially in scenarios characterized by small targets and poor lighting. This article introduces an enhanced trunk detection algorithm for Camellia oleifera based on an improved YOLOv7 model. This model incorporates dynamic snake convolution instead of standard convolutions to bolster its feature extraction capabilities. It integrates more contextual information, thus enhancing the model's generalization ability across various scenes. Additionally, coordinate attention is introduced to refine the model's spatial feature representation, amplifying the network's focus on essential target region features, which in turn boosts detection accuracy and robustness. This feature selectively strengthens response levels across different channels, prioritizing key attributes for classification and localization. Moreover, the original coordinate loss function of YOLOv7 is replaced with EIoU loss, further enhancing the model's robustness and convergence speed. Experimental results demonstrate a recall rate of 96%, a mean average precision (mAP) of 87.9%, an F1 score of 0.87, and a detection speed of 18 milliseconds per frame. When compared with other models like Faster-RCNN, YOLOv3, ScaledYOLOv4, YOLOv5, and the original YOLOv7, our improved model shows mAP increases of 8.1%, 7.0%, 7.5%, and 6.6% respectively. Occupying only 70.8 MB, our model requires 9.8 MB less memory than the original YOLOv7. This model not only achieves high accuracy and detection efficiency but is also easily deployable on mobile devices, providing a robust foundation for future intelligent harvesting technologies.
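The abstract's loss-function change can be illustrated with a minimal sketch of the EIoU loss from reference [23]; this is not the authors' implementation, and the `eiou_loss` name and corner-format box convention are assumptions. EIoU extends 1 - IoU with three normalized penalties: squared centre distance over the enclosing box's squared diagonal, plus squared width and height differences over the enclosing box's squared width and height.

```python
def eiou_loss(box_p, box_g):
    """Efficient IoU (EIoU) loss between two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (an assumed
    convention for this sketch). Returns a scalar; 0.0 for identical boxes.
    """
    # Intersection area
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Widths, heights, union, IoU
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    union = wp * hp + wg * hg - inter
    iou = inter / union

    # Smallest enclosing box and its squared diagonal
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    c2 = cw ** 2 + ch ** 2

    # Squared distance between box centres
    dx = (box_p[0] + box_p[2]) / 2 - (box_g[0] + box_g[2]) / 2
    dy = (box_p[1] + box_p[3]) / 2 - (box_g[1] + box_g[3]) / 2
    rho2 = dx ** 2 + dy ** 2

    # EIoU = (1 - IoU) + centre penalty + width penalty + height penalty
    return (1 - iou) + rho2 / c2 + (wp - wg) ** 2 / cw ** 2 + (hp - hg) ** 2 / ch ** 2
```

Because the width and height gaps are penalized directly (rather than through an aspect-ratio term as in CIoU), the gradients toward the target box shape are simpler, which is consistent with the faster convergence the abstract reports.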
Pages: 14
Related papers
25 records in total
  • [21] Wanqi W., 2024, MEAS SCI TECHNOL, V35
  • [22] Wei X., 2022, BIORESOURCE TECHNOL, V365
  • [23] Focal and efficient IOU loss for accurate bounding box regression
    Zhang, Yi-Fan
    Ren, Weiqiang
    Zhang, Zhang
    Jia, Zhen
    Wang, Liang
    Tan, Tieniu
    [J]. NEUROCOMPUTING, 2022, 506: 146 - 157
  • [24] Object Detection Based on an Improved YOLOv7 Model for Unmanned Aerial-Vehicle Patrol Tasks in Controlled Areas
    Zhao, Dewei
    Shao, Faming
    Yang, Li
    Luo, Xiannan
    Liu, Qiang
    Zhang, Heng
    Zhang, Zihan
    [J]. ELECTRONICS, 2023, 12 (23)
  • [25] Zheng Y., GFDSSD GATED FUSION