Sugar Beet Damage Detection during Harvesting Using Different Convolutional Neural Network Models

Times Cited: 16
Authors
Nasirahmadi, Abozar [1 ]
Wilczek, Ulrike [1 ]
Hensel, Oliver [1 ]
Affiliations
[1] Univ Kassel, Dept Agr & Biosyst Engn, D-37213 Witzenhausen, Germany
Source
AGRICULTURE-BASEL | 2021, Vol. 11, Iss. 11
Keywords
convolutional neural network; damage; deep learning; harvester; sugar beet; ALGORITHM;
DOI
10.3390/agriculture11111111
Chinese Library Classification
S3 [Agronomy]
Discipline Classification Code
0901
Abstract
Mechanical damage to sugar beet during harvesting affects the quality of the final products and the sugar yield. Mechanical damage is currently assessed by harvester operators on randomly selected beets and, owing to the complexity of the harvester machines, depends on the subjective opinion and experience of the operator. Thus, the main aim of this study was to determine whether a digital two-dimensional imaging system coupled with convolutional neural network (CNN) techniques could be used to detect visible mechanical damage to sugar beet during harvesting on a harvester machine. In this research, several CNN-based detector models were developed, including You Only Look Once (YOLO) v4, the region-based fully convolutional network (R-FCN), and faster regions with convolutional neural network features (Faster R-CNN). Sugar beet images captured during harvesting on a harvester under different farming conditions were used for training and validation of the proposed models. The experimental results showed that the YOLO v4 CSPDarknet53 method detected sugar beet damage with better performance (recall, precision, and F1-score of about 92%, 94%, and 93%, respectively) and at higher speed (around 29 frames per second) than the other developed CNNs. By means of a CNN-based vision system, it was possible to automatically detect sugar beet damage within the sugar beet harvester machine.
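The abstract summarizes detector quality using recall, precision, and F1-score. As a quick illustration (not taken from the paper), the short Python sketch below shows how these metrics are computed from true-positive, false-positive, and false-negative counts; the counts used are hypothetical, chosen only to show that a precision of about 94% and a recall of about 92% indeed give an F1-score of roughly 93%, consistent with the values reported above.

def precision_recall_f1(tp, fp, fn):
    """Compute detection precision, recall and F1-score from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only (not the paper's data):
# 920 correctly detected damage regions, 59 false alarms, 80 missed damages.
p, r, f1 = precision_recall_f1(tp=920, fp=59, fn=80)
print(f"precision={p:.2%}, recall={r:.2%}, F1={f1:.2%}")

# Cross-check against the abstract: F1 = 2 * 0.94 * 0.92 / (0.94 + 0.92) ≈ 0.93,
# i.e. about 93%, matching the reported F1-score.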
Pages: 13