Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields

Cited by: 129
Authors
Gao, Junfeng [1,2,3]
French, Andrew P. [3,4]
Pound, Michael P. [3 ]
He, Yong [5 ]
Pridmore, Tony P. [3 ]
Pieters, Jan G. [2 ]
Affiliations
[1] Univ Lincoln, Lincoln Inst Agrifood Technol, Riseholme Pk, Lincoln LN2 2LG, England
[2] Univ Ghent, Dept Biosyst Engn, Coupure Links 653, B-9000 Ghent, Belgium
[3] Univ Nottingham, Sch Comp Sci, Jubilee Campus, Wollaton Rd, Nottingham NG8 1BB, England
[4] Univ Nottingham, Sch Biosci, Sutton Bonington Campus, Nr Loughborough LE12 5RD, England
[5] Zhejiang Univ, Coll Biosyst Engn & Food Sci, Yuhangtang Rd 866, Hangzhou 310058, Zhejiang, Peoples R China
Keywords
Precision farming; Deep learning; Weed detection; Synthetic images; Transfer learning; WEED-CONTROL; CLASSIFICATION; AGRICULTURE; FEATURES; TIME; CROP
DOI
10.1186/s13007-020-00570-z
CLC number
Q5 [Biochemistry]
Subject classification codes
071010; 081704
Abstract
Background: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in plant appearance, illumination changes, foliage occlusion, and different growth stages under field conditions. Current approaches to weed and crop recognition, segmentation, and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling, and these may fail to generalize across different fields and environments.

Results: Here, we present a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images and combined them with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images: training on the combined synthetic and field images improved the mean average precision (mAP) from 0.751 (field images alone) to 0.829. We also compared the developed model with the YOLOv3 and Tiny YOLO models, and it achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) for C. sepium and sugar beet were 0.761 and 0.897 respectively, with an inference time of 6.48 ms per 800 × 1200 image on an NVIDIA Titan X GPU.

Conclusion: Owing to its high inference speed, the developed model has the potential to be deployed on an embedded mobile platform such as the Jetson TX for online weed detection and management. We recommend using synthetic and empirical field images together during training to improve model performance.
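The abstract's anchor-box step (k-means clustering over the training set) follows a well-known YOLO recipe. As a rough illustration only, here is a minimal NumPy sketch that clusters ground-truth (width, height) pairs under a 1 − IoU distance; the function names, the choice of k = 6 (tiny YOLOv3's usual anchor count across its two detection scales), and the implementation details are assumptions for illustration, not the authors' code.

```python
import numpy as np

def iou_wh(boxes, clusters):
    # IoU between (w, h) pairs, treating all boxes as sharing a common
    # top-left corner, as in the standard YOLO anchor-clustering recipe.
    inter_w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = inter_w * inter_h
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (clusters[:, 0] * clusters[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k=6, iters=300, seed=0):
    # k-means over ground-truth (w, h) pairs with 1 - IoU as the distance;
    # k = 6 matches tiny YOLOv3's three anchors at each of two scales.
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        nearest = np.argmax(iou_wh(boxes, clusters), axis=1)  # max IoU = min distance
        updated = np.array([boxes[nearest == i].mean(axis=0)
                            if np.any(nearest == i) else clusters[i]
                            for i in range(k)])
        if np.allclose(updated, clusters):
            break
        clusters = updated
    return clusters[np.argsort(clusters.prod(axis=1))]  # sorted by box area

# Example: wh is an (N, 2) array of annotated box widths/heights in pixels.
# anchors = kmeans_anchors(wh, k=6)
```

The resulting six (width, height) pairs would then be used as the anchor sizes in the tiny YOLOv3 configuration.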
Pages: 12