Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images

Cited by: 92
Authors
Gallo, Ignazio [1 ]
Rehman, Anwar Ur [1 ]
Dehkordi, Ramin Heidarian [2 ]
Landro, Nicola [1 ]
La Grassa, Riccardo [3 ]
Boschetti, Mirco [2 ]
Affiliations
[1] Univ Insubria, Dept Theoret & Appl Sci, I-20100 Varese, Italy
[2] CNR, Inst Electromagnet Sensing Environm, I-20133 Milan, Italy
[3] Italian Natl Inst Astrophys, I-00100 Rome, Italy
Keywords
Deep Learning; Convolutional Neural Network; UAV imagery; object detection; YOLOv7; convolutional networks; classification
DOI
10.3390/rs15020539
Chinese Library Classification
X [Environmental Science, Safety Science]
Subject Classification Codes
08; 0830
Abstract
Weeds are a major threat to agriculture, and spreading agrochemicals to preserve crop productivity is a common practice with a potentially negative environmental impact. Methods that can support intelligent application are needed; weed identification and mapping are therefore critical steps in site-specific weed management. Unmanned aerial vehicle (UAV) data streams are considered the best option for weed detection due to the high resolution and flexibility of data acquisition and the spatially explicit nature of the imagery. However, given unstructured crop conditions and the high biological variability of weeds, building accurate weed recognition and detection models remains a difficult challenge. Two critical barriers to tackling this challenge are (1) the lack of case-specific, large, and comprehensive weed UAV image datasets for the crop of interest, and (2) the difficulty of identifying the most appropriate computer vision (CV) weed detection models for assessing the operationality of detection approaches under real-case conditions. Deep Learning (DL) algorithms, appropriately trained to deal with the real-case complexity of UAV data in agriculture, can provide valid alternatives to standard CV approaches for accurate weed recognition. In this framework, this paper first introduces a new weed and crop dataset named Chicory Plant (CP) and then tests state-of-the-art DL algorithms for object detection. A total of 12,113 bounding box annotations were generated to identify weed targets (Mercurialis annua) from more than 3000 RGB images of chicory plantations, collected using a UAV system at various stages of crop and weed growth. Deep weed object detection was conducted by testing the most recent You Only Look Once version 7 (YOLOv7) on both the CP dataset and a publicly available dataset (Lincoln beet (LB)), for which a previous version of YOLO was used to map weeds and crops. The YOLOv7 results on the CP dataset were encouraging, outperforming the other YOLO variants and achieving 56.6%, 62.1%, and 61.3% for mAP@0.5, recall, and precision, respectively. Furthermore, the YOLOv7 model applied to the LB dataset surpassed the previously published results, raising the mAP@0.5 scores from 51% to 61% (total), 67.5% to 74.1% (weeds), and 34.6% to 48% (sugar beets). This study illustrates the potential of the YOLOv7 model for weed detection but also underscores the fundamental need for large-scale annotated weed datasets to develop and evaluate models under real-case field conditions.
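The abstract reports detection quality as mAP@0.5, precision, and recall, all of which hinge on counting a predicted bounding box as correct only when it overlaps an annotated weed box with IoU of at least 0.5. The plain-Python sketch below (hypothetical box format and function names, not code from the paper) illustrates that matching step for a single image and class; full mAP@0.5 would additionally average precision over confidence thresholds and classes.

# Minimal sketch: precision/recall for box detections at IoU >= 0.5,
# the matching criterion behind the mAP@0.5 figures reported above.
# Boxes are assumed to be (x1, y1, x2, y2) in pixel coordinates;
# predictions are (box, confidence) pairs for one image and one class.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall_at_50(predictions, ground_truth, iou_thr=0.5):
    """Greedy one-to-one matching of predictions (sorted by confidence)
    against ground-truth boxes; each ground-truth box matches at most once."""
    preds = sorted(predictions, key=lambda p: p[1], reverse=True)
    matched = [False] * len(ground_truth)
    tp = 0
    for box, _conf in preds:
        best, best_iou = -1, iou_thr
        for i, gt in enumerate(ground_truth):
            if not matched[i]:
                overlap = iou(box, gt)
                if overlap >= best_iou:
                    best, best_iou = i, overlap
        if best >= 0:
            matched[best] = True
            tp += 1
    fp = len(preds) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall

# Example: two weed detections scored against two annotated boxes.
gt = [(10, 10, 50, 50), (60, 60, 90, 90)]
dets = [((12, 11, 52, 49), 0.9), ((0, 0, 20, 20), 0.4)]
print(precision_recall_at_50(dets, gt))  # -> (0.5, 0.5)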
Pages: 17