Lightweight detection networks for tea bud on complex agricultural environment via improved YOLO v4

Cited: 62
Authors
Li, Jie [1 ]
Li, Jiehao [1 ,3 ]
Zhao, Xin [1 ,2 ]
Su, Xiaohang [3 ]
Wu, Weibin [1 ,2 ]
Affiliations
[1] South China Agr Univ, Coll Engn, Key Lab Key Technol Agr Machine & Equipment, Minist Educ, Guangzhou 510642, Peoples R China
[2] Guangdong Engn Technol Res Ctr Creat Hilly Orchard, Guangzhou 510642, Peoples R China
[3] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Target detection; Neural networks; Lightweight network; Tea bud; Loss function; RECOGNITION;
DOI
10.1016/j.compag.2023.107955
Chinese Library Classification
S [Agricultural Sciences];
Discipline Classification Code
09 ;
Abstract
Rapid and accurate detection of tender tea buds in the natural tea garden environment is the basis for intelligent tea picking. However, complex models impose high hardware computing-power requirements and limit the deployment of tea bud recognition models on tea-picking robots. Therefore, this paper investigates a high-precision, lightweight target detection model based on an improved you-only-look-once version 4 (YOLOv4). The lightweight network GhostNet replaces the backbone of YOLOv4, and depthwise separable convolutions substitute for standard convolutions, significantly reducing the model's computational load and complexity. Additionally, the convolutional block attention module (CBAM) is embedded into the path aggregation network (PANet), enhancing the model's feature extraction capability. To solve the detection and distinction problems caused by overlapping one-bud-one-leaf and one-bud-two-leaf targets, this paper replaces the CIoU loss function in YOLOv4 with the SIoU loss function. The SIoU loss considers the vector angle between the ground-truth box and the prediction box and redefines the penalty terms, improving the model's training speed and detection accuracy. The experimental results show that the detection accuracy of the proposed approach is 85.15% for one-bud-one-leaf and one-bud-two-leaf targets, with 6.594 GFLOPs (giga floating-point operations) and 11.353 M parameters. Relative to the original YOLOv4, the proposed algorithm improves mean accuracy by 1.08%, reduces computational complexity by 89.11%, and reduces the number of parameters by 82.36%. Compared with YOLOv4, the Tea-YOLO algorithm demonstrates significantly better real-world detection performance across different viewing angles and natural environments.
The algorithm proposed in this paper can detect one-bud-one-leaf and one-bud-two-leaf targets quickly and accurately, reducing the cost and difficulty of deploying the vision module of a tea-picking robot.
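The abstract's parameter reduction from depthwise separable convolution can be illustrated with a minimal back-of-the-envelope sketch (not the authors' code; the layer sizes below are hypothetical): a standard k x k convolution with C_in input and C_out output channels has k*k*C_in*C_out weights, while a depthwise separable convolution uses k*k*C_in (depthwise) plus C_in*C_out (pointwise) weights.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise (one k x k filter per input channel) + pointwise (1 x 1) weights."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer mapping 256 -> 256 channels
std = conv_params(3, 256, 256)          # 589,824 weights
sep = dw_separable_params(3, 256, 256)  # 67,840 weights
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For this layer the separable variant keeps only about 11.5% of the weights, which is the kind of saving that lets the paper's model cut parameters by 82.36% overall when combined with the GhostNet backbone.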
Pages: 13
Related Papers (50 total)
  • [21] Lightweight Underwater Object Detection Based on YOLO v4 and Multi-Scale Attentional Feature Fusion
    Zhang, Minghua
    Xu, Shubo
    Song, Wei
    He, Qi
    Wei, Quanmiao
    REMOTE SENSING, 2021, 13 (22)
  • [22] Detection Method of Clods and Stones from Impurified Potatoes Based on Improved YOLO v4 Algorithm
    Wang X.
    Li Y.
    Yang Z.
    Zhang M.
    Wang R.
    Cui L.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2021, 52: 241-247, 262
  • [23] Ship detection of coast defense radar in real marine environment based on fast YOLO V4
    Yan, He
    Chen, Chao
    Sun, Xiaohang
    Li, Yibing
    Geng, Zhe
    Zhang, Jindong
    Zhu, Daiyin
    JOURNAL OF APPLIED REMOTE SENSING, 2022, 16 (02)
  • [24] Detection Method of Double Side Breakage of Population Cotton Seed Based on Improved YOLO v4
    Wang, Qiaohua
    Gu, Wei
    Cai, Peizhong
    Zhang, Hongzhou
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2022, 53 (01): 389-397
  • [25] Lightweight tea bud detection method based on improved YOLOv5
    Zhang, Kun
    Yuan, Bohan
    Cui, Jingying
    Liu, Yuyang
    Zhao, Long
    Zhao, Hua
    Chen, Shuangchen
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [26] Detecting defects in fused deposition modeling based on improved YOLO v4
    Xu, Luyang
    Zhang, Xiaoxun
    Ma, Fang
    Chang, Gaoyuan
    Zhang, Cheng
    Li, Jiaming
    Wang, Shuxian
    Huang, Yuanyou
    MATERIALS RESEARCH EXPRESS, 2023, 10 (09)
  • [27] Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
    Kumari, Niharika
    Ruf, Verena
    Mukhametov, Sergey
    Schmidt, Albrecht
    Kuhn, Jochen
    Kuechemann, Stefan
    SENSORS, 2021, 21 (22)
  • [28] Corn Seed Appearance Quality Estimation Based on Improved YOLO v4
    Fan X.
    Wang L.
    Liu J.
    Zhou Y.
    Zhang J.
    Suo X.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2022, 53 (07): 226-233
  • [29] Urechis unicinctus Burrows Recognition Method Based on Improved YOLO v4
    Feng J.
    Liang X.
    Zeng L.
    Song X.
    Zhou X.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2023, 54 (02): 265-274
  • [30] An improved algorithm for small object detection based on YOLO v4 and multi-scale contextual information
    Ji, Shu-Jun
    Ling, Qing-Hua
    Han, Fei
    COMPUTERS & ELECTRICAL ENGINEERING, 2023, 105