Learning Semantic Graphics Using Convolutional Encoder-Decoder Network for Autonomous Weeding in Paddy

Times Cited: 54
Authors
Adhikari, Shyam Prasad [1]
Yang, Heechan [2]
Kim, Hyongsuk [1,2]
Affiliations
[1] Chonbuk Natl Univ, Div Elect Engn, IRRC, Jeonju, South Korea
[2] Chonbuk Natl Univ, Div Elect & Informat Engn, Jeonju, South Korea
Funding
National Research Foundation, Singapore
Keywords
semantic graphics; convolutional neural network; autonomous weeding; crop line extraction; encoder-decoder network; CROP ROWS; IMAGE-ANALYSIS; MAIZE FIELDS; IDENTIFICATION; CLASSIFICATION; SYSTEM; ROBOT;
DOI
10.3389/fpls.2019.01404
CLC Classification Code
Q94 [Botany]
Discipline Classification Code
071001
Abstract
Weeds in agricultural farms are aggressive growers that compete with crops for nutrients and other resources and reduce yield. The increasing use of chemicals to control them has unintended consequences for human health and the environment. In this work, a novel neural network training method is proposed that combines semantic graphics for data annotation with an advanced encoder-decoder network for (a) automatic crop line detection and (b) weed (wild millet) detection in paddy fields. The detected crop lines serve as guiding lines for an autonomous weeding robot performing inter-row weeding, whereas weed detection enables autonomous intra-row weeding. The proposed data annotation method, semantic graphics, is intuitive, and the desired targets can be annotated easily with minimal labor. In addition, the proposed "extended skip network" is an improved deep convolutional encoder-decoder neural network for efficient learning of semantic graphics. Quantitative evaluations of the proposed method demonstrated increments of 6.29% and 6.14% in mean intersection over union (mIoU) over the baseline network on the tasks of paddy line detection and wild millet detection, respectively. The proposed method also yields a 3.56% increment in mIoU and a significantly higher recall than a popular bounding box-based object detection approach on the task of wild millet detection.
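The abstract reports results as mean intersection over union (mIoU), the standard metric for semantic segmentation. As a reference, below is a minimal sketch of how mIoU is typically computed from predicted and ground-truth label maps; the function name and the toy 2x2 label maps are illustrative, not taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both prediction and target are skipped
    so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: background (0) vs. a crop-line class (1)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # mean of 1/2 and 2/3 = 7/12
```

Per-class IoU penalizes both false positives (through the union) and false negatives (through the intersection), which is why the abstract's comparison against a bounding box-based detector also reports recall separately.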
Pages: 12