Deep Learning: Individual Maize Segmentation From Terrestrial Lidar Data Using Faster R-CNN and Regional Growth Algorithms

Times Cited: 111
Authors
Jin, Shichao [1 ,2 ]
Su, Yanjun [1 ]
Gao, Shang [1 ,2 ]
Wu, Fangfang [1 ,2 ]
Hu, Tianyu [1 ]
Liu, Jin [1 ]
Li, Wankai [3 ]
Wang, Dingchang [4 ]
Chen, Shaojiang [4 ]
Jiang, Yuanxi [1 ,5 ]
Pang, Shuxin [1 ]
Guo, Qinghua [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Bot, State Key Lab Vegetat & Environm Change, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Sun Yat Sen Univ, Sch Geog & Planning, Guangdong Prov Key Lab Urbanizat & Geosimulat, Guangzhou, Guangdong, Peoples R China
[4] China Agr Univ, Natl Maize Improvement Ctr China, Beijing, Peoples R China
[5] Beijing City Univ, Urban Construct Sch, Beijing, Peoples R China
Funding
US National Science Foundation; National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
deep learning; detection; classification; segmentation; phenotype; Lidar (light detection and ranging); TREE CROWNS; FOOD SECURITY; PHENOTYPING TECHNOLOGY; STEM VOLUME; DENSITY; SYSTEM;
DOI
10.3389/fpls.2018.00866
Chinese Library Classification
Q94 [Botany];
Discipline Code
071001;
Abstract
The rapid development of light detection and ranging (Lidar) provides a promising way to obtain three-dimensional (3D) phenotypic traits, owing to its ability to record accurate 3D point clouds. Recently, Lidar has been widely used, along with other sensors, to obtain phenotype data in greenhouses and in the field. Individual maize segmentation is the prerequisite for high-throughput phenotype extraction at the individual-plant or leaf level, and it remains a major challenge. Deep learning, a state-of-the-art machine learning approach, has shown high performance in object detection, classification, and segmentation. In this study, we proposed a method that combines deep learning and regional growth algorithms to segment individual maize plants from terrestrial Lidar data. The scanned 3D points of the training site were sliced row by row with a fixed 3D window. Points within each window were compressed into deep images, which were used to train a Faster R-CNN (region-based convolutional neural network) model to detect maize stems. Three sites with different planting densities were used to test the method. Each test site was likewise sliced into 3D windows, from which testing deep images were generated. Stems detected in the testing images were mapped back to 3D points, which served as seed points for the regional growth algorithm to grow each maize plant from bottom to top. The results showed that the method combining deep learning and regional growth algorithms is promising for individual maize segmentation: the values of r, p, and F for the three testing sites with different planting densities were all over 0.9. Moreover, the height of correctly segmented maize plants was highly correlated with manually measured height (R² > 0.9). This work demonstrates the potential of deep learning to solve the individual maize segmentation problem from Lidar data.
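The abstract describes two key steps: compressing the points inside each 3D slicing window into a 2D "deep image" for the Faster R-CNN detector, and growing individual plants outward from the detected stem seed points. The sketch below is not the authors' implementation; it is a minimal illustration of both ideas, with all function names, the density-rasterization choice, and the `radius` parameter being assumptions for illustration only.

```python
import math

def window_to_image(points, grid=32):
    """Project the (x, y, z) points of one slicing window onto the x-z plane
    and rasterize point counts into a grid -- one simple way to build the 2D
    'deep image' fed to a detector (illustrative, not the paper's encoding)."""
    xs = [p[0] for p in points]
    zs = [p[2] for p in points]
    x0, x1 = min(xs), max(xs)
    z0, z1 = min(zs), max(zs)
    img = [[0] * grid for _ in range(grid)]
    for x, _, z in points:
        c = min(grid - 1, int((x - x0) / (x1 - x0 + 1e-9) * grid))
        r = min(grid - 1, int((z1 - z) / (z1 - z0 + 1e-9) * grid))  # z up = row 0
        img[r][c] += 1
    return img

def region_grow(points, seeds, radius=0.3):
    """Breadth-first region growing: each detected stem seed starts a cluster,
    and unlabeled points within `radius` of an already-labeled point join that
    point's cluster. Returns one label per input point (-1 = unassigned)."""
    def nearest(q):
        return min(range(len(points)),
                   key=lambda i: math.dist(points[i], q))

    labels = [-1] * len(points)
    frontier = []
    for lab, seed in enumerate(seeds):      # seed each cluster at a stem
        i = nearest(seed)
        labels[i] = lab
        frontier.append(i)
    while frontier:                          # grow clusters outward
        i = frontier.pop()
        for j in range(len(points)):
            if labels[j] == -1 and math.dist(points[i], points[j]) <= radius:
                labels[j] = labels[i]
                frontier.append(j)
    return labels
```

With two synthetic vertical "stems" of points 2 m apart and a 0.3 m growth radius, each seed recovers its own stem; in practice the radius would be tuned to point spacing and planting density, and a spatial index (e.g., a k-d tree) would replace the brute-force neighbor scan.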
Pages: 10