Deep Orange: Mask R-CNN based Orange Detection and Segmentation

Cited by: 93
Authors
Ganesh, P. [1 ]
Volle, K. [2 ]
Burks, T. F. [3 ]
Mehta, S. S. [1 ]
Affiliations
[1] Univ Florida, Dept Mech & Aerospace Engn, Shalimar, FL 32579 USA
[2] Univ Florida, Natl Res Council, Shalimar, FL 32579 USA
[3] Univ Florida, Dept Agr & Biol Engn, Gainesville, FL 32611 USA
Source
IFAC-PAPERSONLINE | 2019, Vol. 52, No. 30
Keywords
Deep learning; Convolutional neural networks; Multi-modal instance segmentation; Classification; Recognition
DOI
10.1016/j.ifacol.2019.12.499
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
The objective of this work is to detect individual fruits and obtain a pixel-wise mask for each detected fruit in an image. To this end, we present a deep learning approach, named Deep Orange, for the detection and pixel-wise segmentation of fruits based on the state-of-the-art instance segmentation framework Mask R-CNN. The presented approach uses multi-modal input data comprising RGB and HSV images of the scene. The developed framework is evaluated using images obtained from an orange grove in Citra, Florida under natural lighting conditions. The performance of the algorithm is compared using RGB and RGB+HSV images. Our preliminary findings indicate that including the HSV data improves precision from 0.8947 (RGB alone) to 0.9753. The overall F-1 score obtained using RGB+HSV is close to 0.89. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
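The abstract describes a multi-modal input built from RGB and HSV views of the same scene. The paper does not specify in this record how the two modalities are combined; a common and plausible construction is to convert each RGB frame to HSV and stack the two along the channel axis, yielding a 6-channel input (the network's first convolutional layer would then need to accept 6 channels). The sketch below illustrates that channel-stacking assumption only; the function name `rgb_to_rgbhsv` is hypothetical, not from the paper.

```python
import colorsys

import numpy as np


def rgb_to_rgbhsv(rgb):
    """Stack an RGB image with its HSV conversion into a 6-channel array.

    `rgb` is an (H, W, 3) float array with values in [0, 1]. Returns an
    (H, W, 6) array whose first three channels are the original RGB and
    whose last three are H, S, V (each also in [0, 1], per colorsys).
    This is one plausible way to form the RGB+HSV multi-modal input
    mentioned in the abstract, not necessarily the authors' method.
    """
    height, width, _ = rgb.shape
    hsv = np.empty_like(rgb)
    for i in range(height):
        for j in range(width):
            # colorsys works on scalar (r, g, b) triples in [0, 1]
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    return np.concatenate([rgb, hsv], axis=-1)
```

For example, a 4x4 RGB image produces a `(4, 4, 6)` tensor that can be fed to a first-layer convolution configured for 6 input channels.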
Pages: 70-75
Number of pages: 6