Crop disease classification with transfer learning and residual networks

Cited by: 0
Authors
Wang D. [1 ]
Wang J. [1 ]
Affiliations
[1] School of Computer and Information, Anhui Polytechnic University, Wuhu
Source
Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering | 2021, Vol. 37, No. 4
Keywords
Crop; Deep learning; Disease; Residual networks; Transfer learning
DOI
10.11975/j.issn.1002-6819.2021.04.024
Abstract
Crop diseases pose a major threat to food production and agricultural ecosystems worldwide. Automatic detection and classification of crop diseases are urgently needed to enable targeted and timely disease control in modern agriculture. The complexity of crop diseases makes feature extraction increasingly difficult for traditional machine learning, where manual feature selection requires extensive experimentation and experience. Alternatively, a convolutional neural network (CNN) can automatically extract relevant features from input images, imitating the human visual process, and these learned features are key to the success of deep learning. However, a CNN model that identifies only a single species struggles to cope with the complex planting structures found in actual production. In this study, a crop disease classification method was proposed using an improved deep residual network (SE-ResNeXt-101) combined with transfer learning (TL-SE-ResNeXt-101) for detection without specifying the crop species. The crop disease dataset from the AI Challenger 2018 competition was used, containing 27 types of diseases on 10 plant species: apple, cherry, corn, grape, citrus, peach, pepper, potato, strawberry, and tomato. In the reconstructed dataset, a "crop-disease" labeling scheme was used to obtain 33 crop-disease categories with a total of 35 332 leaf images of different sizes. The training, validation, and test sets were split at a ratio of 8:1:1. During preprocessing, images of different sizes were uniformly resized to 224×224×3, and the pixel values were mean-subtracted and normalized to reduce computation, prevent gradient explosion during training, and aid convergence. Data augmentation techniques were used to increase sample diversity, including color enhancement, random rotation, random cropping, and random horizontal flipping. Color enhancement adjusted the brightness, contrast, saturation, and hue. Random rotation rotated the image by an angle between -15° and 15°. Random cropping took an arbitrary portion of the image at a scale between 0.1 and 1 and then resized it to 224×224. Random horizontal flipping mirrored the image. Each augmentation was applied with a probability of 50%. The proposed model was compared with the VGG-16, GoogLeNet, ResNet-50, and DenseNet-121 models under the same experimental conditions. The experimental results show that transfer learning significantly improved the convergence speed and classification performance of the model, enabling a better model to be trained in a shorter time. Data augmentation reduced the model's dependence on particular attributes, mitigated overfitting, and improved performance and generalization. The average accuracy and weighted F1 score of the TL-SE-ResNeXt-101 model reached 98% and 97.99%, respectively, outperforming the other models. The TL-SE-ResNeXt-101 model also performed well on image samples from real agricultural production: its average accuracy on the PlantDoc test set reached 47.37%, indicating strong robustness and making it a promising technology for crop disease identification. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
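For illustration, the following is a minimal PyTorch/torchvision sketch of the preprocessing, augmentation, and transfer-learning setup summarized in the abstract. It is a sketch under stated assumptions, not the authors' implementation: the ColorJitter ranges and the ImageNet normalization statistics are assumed values, and torchvision's ResNeXt-101 (32x8d) is used as a stand-in backbone because SE-ResNeXt-101 is not bundled with torchvision.

# Sketch of the preprocessing/augmentation pipeline and transfer-learning head
# replacement described in the abstract (assumed hyperparameters noted inline).
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 33  # "crop-disease" categories in the reconstructed dataset

# Training pipeline: resize to 224x224, then apply each augmentation with 50% probability.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomApply([transforms.ColorJitter(0.2, 0.2, 0.2, 0.05)], p=0.5),  # brightness/contrast/saturation/hue (assumed ranges)
    transforms.RandomApply([transforms.RandomRotation(15)], p=0.5),                # rotate between -15° and 15°
    transforms.RandomApply([transforms.RandomResizedCrop(224, scale=(0.1, 1.0))], p=0.5),  # crop at scale 0.1-1, resize back to 224
    transforms.RandomHorizontalFlip(p=0.5),                                        # horizontal mirror flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),   # de-mean and normalize (ImageNet statistics assumed)
])

# Validation/test pipeline: deterministic resize and normalization only.
eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_model() -> nn.Module:
    """Transfer learning: load ImageNet-pretrained weights and replace the classifier head."""
    backbone = models.resnext101_32x8d(weights="DEFAULT")           # stand-in for SE-ResNeXt-101
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)   # new 33-way output layer
    return backbone

The same pattern applies if a true SE-ResNeXt-101 implementation (e.g. from a third-party model zoo) is substituted for the torchvision backbone: only the pretrained-weight loading and the final fully connected layer change.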
Pages: 199-207
Page count: 8