Learning Semantic Graphics Using Convolutional Encoder-Decoder Network for Autonomous Weeding in Paddy

Cited by: 54
Authors
Adhikari, Shyam Prasad [1 ]
Yang, Heechan [2 ]
Kim, Hyongsuk [1 ,2 ]
Affiliations
[1] Chonbuk Natl Univ, Div Elect Engn, IRRC, Jeonju, South Korea
[2] Chonbuk Natl Univ, Div Elect & Informat Engn, Jeonju, South Korea
Funding
National Research Foundation, Singapore;
Keywords
semantic graphics; convolutional neural network; autonomous weeding; crop line extraction; encoder-decoder network; CROP ROWS; IMAGE-ANALYSIS; MAIZE FIELDS; IDENTIFICATION; CLASSIFICATION; SYSTEM; ROBOT;
DOI
10.3389/fpls.2019.01404
CLC Classification
Q94 [Botany];
Subject Classification Code
071001;
Abstract
Weeds in agricultural farms are aggressive growers that compete with crops for nutrients and other resources and reduce production. The increasing use of chemicals to control them has unintended consequences for human health and the environment. In this work, a novel neural network training method is proposed that combines semantic graphics for data annotation with an advanced encoder-decoder network for (a) automatic crop line detection and (b) weed (wild millet) detection in paddy fields. The detected crop lines act as guiding lines for an autonomous weeding robot performing inter-row weeding, whereas weed detection enables autonomous intra-row weeding. The proposed data annotation method, semantic graphics, is intuitive, and the desired targets can be annotated easily with minimal labor. The proposed "extended skip network" is an improved deep convolutional encoder-decoder neural network for efficient learning of semantic graphics. Quantitative evaluations demonstrated improvements of 6.29% and 6.14% in mean intersection over union (mIoU) over the baseline network on the tasks of paddy line detection and wild millet detection, respectively. The proposed method also yields a 3.56% improvement in mIoU and a significantly higher recall than a popular bounding-box-based object detection approach on wild millet detection.
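The reported gains are stated in mean intersection over union (mIoU), the standard segmentation metric. As an illustrative sketch only (this is not the authors' code; the function name and the toy label maps are assumptions), mIoU averages the per-class ratio of intersection to union between predicted and ground-truth label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps.
    Illustrative sketch, not the paper's implementation."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes (hypothetical data).
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, target, 2)
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mIoU is their average, roughly 0.583.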
Pages: 12