Synthesizing Training Data for Intelligent Weed Control Systems Using Generative AI

Cited by: 2
Authors
Modak, Sourav [1 ]
Stein, Anthony [1 ]
Affiliation
[1] Univ Hohenheim, Dept Artificial Intelligence Agr Engn & Computat, Garbenstr 9, D-70599 Stuttgart, Germany
Source
ARCHITECTURE OF COMPUTING SYSTEMS, ARCS 2024 | 2024 / Vol. 14842
Keywords
Data augmentation; Generative AI; Foundation models; Intelligent weed control system; Weed detection;
DOI
10.1007/978-3-031-66146-4_8
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Subject Classification Code
0812 ;
Abstract
Deep Learning already plays a pivotal role in technical systems performing various crop protection tasks, including weed detection, disease diagnosis, and pest monitoring. However, the efficacy of such data-driven models heavily relies on large, high-quality datasets, which are often scarce and costly to acquire in agricultural contexts. To address this overarching challenge of data scarcity, augmentation techniques have emerged as a popular strategy for expanding the amount and variation of training data. Traditional data augmentation methods, however, often fall short of reliably replicating real-world conditions and also lack diversity in the augmented images, hindering robust model training. In this paper, we introduce a novel methodology for synthetic image generation designed specifically for object detection tasks in the agricultural context of weed control. We propose a pipeline architecture for synthetic image generation that combines a foundation model, the Segment Anything Model (SAM), which allows zero-shot transfer to new domains, with the recent generative AI-based Stable Diffusion Model. Our methodology aims to produce synthetic training images that accurately capture characteristic weed and background features while replicating, with high fidelity, the authentic style and variability inherent in real-world images. With a view to integrating our approach into intelligent technical systems, such a pipeline paves the way for continual self-improvement of the perception modules when placed in a self-reflection loop. First experiments on real weed image data from a current research project reveal our method's capability to reconstruct the innate features of real-world weed-infested scenes from an outdoor experimental setting.
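The record gives no implementation details beyond the abstract, but the described pipeline — weed instances segmented zero-shot with SAM, then recombined with generatively synthesized imagery — can be sketched schematically. The sketch below is a minimal, hypothetical illustration in plain NumPy: the hand-made binary mask stands in for a SAM segmentation, the random array for a Stable Diffusion background sample, and all function names are our own, not the authors'.

```python
import numpy as np

def composite_synthetic_image(foreground, mask, background):
    """Paste mask-selected foreground pixels (e.g. SAM-segmented weeds)
    onto a generated background (e.g. a Stable Diffusion sample)."""
    assert foreground.shape == background.shape
    out = background.copy()
    out[mask] = foreground[mask]   # boolean indexing copies only weed pixels
    return out

def mask_to_bbox(mask):
    """Derive an object-detection label (x_min, y_min, x_max, y_max)
    from a binary instance mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy stand-ins for a real field image, a SAM mask, and a diffusion sample.
rng = np.random.default_rng(0)
real = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:30] = True          # pretend SAM segmented a weed here
generated_bg = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)

synthetic = composite_synthetic_image(real, mask, generated_bg)
bbox = mask_to_bbox(mask)          # bounding-box label for the detector
```

Because the mask travels with the composite, each synthetic image comes with a free bounding-box annotation — the property that makes such pipelines attractive for object-detection training sets.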
Pages: 112-126
Page count: 15