Synthetic data augmentation by diffusion probabilistic models to enhance weed recognition

Cited by: 16
Authors
Chen, Dong [1 ]
Qi, Xinda [1 ]
Zheng, Yu [1 ]
Lu, Yuzhen [2 ]
Huang, Yanbo [3 ]
Li, Zhaojian [4 ]
Affiliations
[1] Michigan State Univ, Dept Elect & Comp Engn, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Biosyst & Agr Engn, E Lansing, MI 48824 USA
[3] USDA ARS, Genet & Sustainable Agr Res Unit, Starkville, MS 39762 USA
[4] Michigan State Univ, Dept Mech Engn, E Lansing, MI 48824 USA
Keywords
Computer vision; Data augmentation; Deep learning; Generative modeling; Precision weed management; Site-specific weed control; Generative adversarial networks
DOI
10.1016/j.compag.2023.108517
Chinese Library Classification (CLC)
S [Agricultural Sciences]
Discipline code
09
Abstract
Weed management plays an important role in protecting crop yield and quality. Conventional weed control methods rely largely on intensive, blanket herbicide application, which incurs significant management costs and poses hazards to the environment and human health. Machine vision-based automated weeding has gained increasing attention for sustainable weed management through weed recognition and site-specific treatments. However, reliably recognizing weeds under variable field conditions remains challenging, in part due to the difficulty of curating large-scale, expert-labeled weed image datasets for supervised training of weed recognition algorithms. Data augmentation methods, including traditional geometric/color transformations and more advanced generative adversarial networks (GANs), can supplement data collection and labeling efforts by algorithmically expanding the scale of datasets. Recently, diffusion models have emerged in the field of image synthesis, providing a new means for augmenting image datasets to power machine vision systems. This study presents a novel investigation of the efficacy of diffusion models for generating weed images to enhance weed identification. Experiments on two large, public multi-class weed datasets showed that diffusion models yielded the best trade-off between sample fidelity and diversity and obtained the highest Fréchet Inception Distance compared to GANs (BigGAN, StyleGAN2, StyleGAN3). For instance, on a ten-class weed dataset (CottonWeedID10), including synthetic weed images improved accuracy, precision, and recall by 1.17% (97.30% to 98.47%), 1.21% (97.92% to 99.13%), and 2.30% (96.06% to 98.27%), respectively, in weed classification by four deep learning models (including VGG16, Inception-v3, and ResNet50). Models trained using only 10% of real images, with the remainder being synthetic data, achieved testing accuracy exceeding 94%.
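As a rough illustration of the training setup summarized in the abstract, the sketch below mixes a small subset of real weed images with diffusion-generated ones and trains a ResNet50 classifier. This is a minimal sketch, not the authors' code: the folder names data/real and data/synthetic, the class-per-subdirectory layout, the 10% subsampling, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# train a weed classifier on a mix of real and diffusion-generated images.
# Assumes hypothetical folders data/real/ and data/synthetic/, each with one
# subfolder per weed class (identical class names so labels align).
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, Subset
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

real = datasets.ImageFolder("data/real", transform=tf)        # expert-labeled images
synth = datasets.ImageFolder("data/synthetic", transform=tf)  # diffusion-model samples

# Keep only a fraction of the real images (e.g., 10%, mirroring the low-data setting).
keep = torch.randperm(len(real))[: max(1, len(real) // 10)]
train_set = ConcatDataset([Subset(real, keep.tolist()), synth])
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=2)

# ResNet50 with a new output head sized to the number of weed classes.
model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, len(real.classes))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short demo run
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Evaluation should be run on held-out real images only, so synthetic samples influence training but not the reported test accuracy.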
Pages: 12