An Efficient Approach to Fruit Classification and Grading using Deep Convolutional Neural Network

Cited: 0
Authors
Pande, Aditi [1 ]
Munot, Mousami [1 ]
Sreeemathy, R. [1 ]
Bakare, R. V. [1]
Affiliations
[1] Univ Pune, Dept Elect & Telecommun, Pune Inst Comp Technol, Pune, Maharashtra, India
Source
2019 IEEE 5TH INTERNATIONAL CONFERENCE FOR CONVERGENCE IN TECHNOLOGY (I2CT) | 2019
Keywords
Fruit Classification; Fruit Grading; Background Removal; Convolutional Neural Network; Inception V3; socket programming;
DOI
10.1109/i2ct45611.2019.9033957
Chinese Library Classification
T [Industrial Technology];
Subject Classification Code
08;
Abstract
In India, the agricultural industry has seen a boom in recent years, demanding increased automation. An important aspect of this agro-automation is the grading and classification of agricultural produce. These labor-intensive tasks can be automated using Computer Vision and Machine Learning. This paper focuses on developing a standalone system capable of classifying four types of fruit (apple, orange, pear, and lemon) and takes the apple as a test case for grading. Apples are graded into four grades, with Grade 1 being the best quality and Grade 4 consisting of spoilt fruit. Input is given in the form of a fruit image. The methodology comprises dataset formation, preprocessing, software and hardware implementation, and classification. Preprocessing consists of background removal and segmentation to extract the fruit area. A deep Convolutional Neural Network was chosen for the real-time implementation of the system and applied to the Fruits-360 dataset. For this purpose, the Inception V3 model is trained using a transfer learning approach, enabling it to distinguish fruit images. Experimental results show a Top-5 accuracy of 90% and a Top-1 accuracy of 85% on the dataset used, addressing the accuracy limitations of previous attempts.
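The transfer-learning setup the abstract describes (an Inception V3 convolutional base with a new classification head) can be sketched roughly as follows in Keras. The head architecture, optimizer, and `NUM_CLASSES` value are illustrative assumptions, not details taken from the paper:

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # apple, orange, pear, lemon (per the abstract)

# Inception V3 base without its ImageNet classification head.
# weights=None avoids a download here; the paper's transfer-learning
# setup would instead start from weights="imagenet".
base = InceptionV3(weights=None, include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional base

# New head trained on the fruit images (sizes are assumptions).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With `weights="imagenet"` and the base frozen, only the small head is trained on the fruit dataset, which is the usual meaning of the transfer-learning approach the abstract refers to.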
Pages: 7