Semantic Segmentation of Vineyard Images Using Convolutional Neural Networks

Cited by: 10
Authors
Kalampokas, Theofanis [1 ]
Tziridis, Konstantinos [1 ]
Nikolaou, Alexandros [1 ]
Vrochidou, Eleni [1 ]
Papakostas, George A. [1 ]
Pachidis, Theodore [1 ]
Kaburlasos, Vassilis G. [1 ]
Affiliations
[1] International Hellenic University, Human-Machines Interaction Laboratory (HUMAIN-Lab), Agios Loukas, 65404 Kavala, Greece
Source
PROCEEDINGS OF THE 21ST ENGINEERING APPLICATIONS OF NEURAL NETWORKS CONFERENCE, EANN 2020 | 2020, Vol. 2
Keywords
Semantic segmentation; Convolutional neural network; Computer vision; Deep learning; Precision agriculture; Harvesting robot;
DOI
10.1007/978-3-030-48791-1_22
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper studies the segmentation demands of vineyard images using Convolutional Neural Networks (CNNs). To this end, eleven CNN models able to provide semantically segmented images are examined as part of the sensing subsystem of an autonomous agricultural robot. The task is challenging due to the similar color of grapes, leaves, and the image background. Moreover, the lack of controlled lighting conditions results in varying color representations of grapes and leaves. The studied CNN architectures combine three different feature learning sub-networks with five meta-architectures for segmentation purposes. Evaluation on three different datasets of vineyard images of grape clusters and leaves yielded segmentation results, measured by the mean pixel intersection over union (IU) performance index, of up to 87.89% for grape clusters and 83.45% for leaves, achieved by the ResNet50_FRRN and MobileNetV2_PSPNet models, respectively. Comparative results reveal the efficacy of CNNs in separating grape clusters and leaves from the image background. Thus, the proposed models can be used in-field for real-time localization of grapes and leaves, towards the automation of harvesting, green harvesting, and defoliation activities by an autonomous robot.
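The results above are reported in terms of the mean pixel intersection over union (IU) index. The snippet below is a minimal sketch, not the authors' implementation, of how that index is commonly computed from integer label maps via a confusion matrix; the function name, array shapes, and the two-class example (background vs. grape cluster) are illustrative assumptions.

import numpy as np

def mean_pixel_iu(y_true, y_pred, num_classes):
    """Mean per-class pixel IoU for integer label maps of equal shape."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    # Confusion matrix: rows index ground-truth classes, columns index predictions.
    cm = np.bincount(num_classes * y_true + y_pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)   # avoid division by zero for absent classes
    return iou.mean()

# Illustrative two-class example: 0 = background, 1 = grape cluster.
gt   = np.array([[0, 1, 1],
                 [0, 1, 0]])
pred = np.array([[0, 1, 0],
                 [0, 1, 1]])
print(mean_pixel_iu(gt, pred, num_classes=2))   # 0.5

Under these assumptions each class contributes equally to the average, matching the "mean pixel IU" wording of the abstract; the paper's exact evaluation protocol may differ.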
Pages: 292-303
Page count: 12