Classification of perch ingesting condition using lightweight neural network MobileNetV3-Small

Cited by: 0
Authors
Zhu M. [1 ,3 ]
Zhang Z. [1 ,2 ]
Huang H. [1 ,2 ]
Chen Y. [1 ]
Liu Y. [1 ]
Dong T. [1 ]
Affiliations
[1] College of Engineering, Huazhong Agricultural University, Wuhan
[2] Key Laboratory of Agricultural Equipment in Mid-lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan
[3] Engineering Research Center of Green Development for Conventional Aquatic Biological Industry in the Yangtze River Economic Belt, Ministry of Education, Wuhan
Source
Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering | 2021, Vol. 37, No. 19
Keywords
Aquaculture; Deep learning; Image recognition; Machine vision; Neural network; Perch;
DOI
10.11975/j.issn.1002-6819.2021.19.019
Abstract
Intelligent feeding has been widely used to determine the amount of feed from a smart prediction of the hunger degree of fish, thereby effectively reducing feed waste in the modern aquaculture industry, especially in outdoor intensive fish breeding environments. However, the redundant data collected by mobile monitoring devices imposes a heavy computational load on most control systems, and accurate classification of the hunger degree of fish remains an unsolved problem. Taking captive perch as the tested object, this work designed an image capture system for perch feeding based on the lightweight neural network MobileNetV3-Small. The system consisted of two culture ponds, a camera, and a video recorder. In the test, 4 202 perches were randomly fed with adequate or inadequate feed, and the camera recorded the water surface every day. After two weeks of monitoring, 10 000 images of the perch ingesting condition were collected during the period of 80-110 s after each round of feeding, of which 50% belonged to the "hungry" condition and the rest to the "non-hungry" condition. These initial images were divided into training, validation, and test sets at a ratio of 6:2:2. Four image processing operations were applied to the training set, namely random flipping, random cropping, Gaussian noise addition, and color jittering, expanding the training set from 6 000 to 12 000 images; this augmentation enriched the image features and training samples and improved the generalization of the model. Next, the lightweight neural network MobileNetV3-Small was selected to classify the ingesting condition of the perches. The model was trained, tested, and established on the TensorFlow 2 platform, with the images of the training set as the input and the ingesting condition as the output. Finally, a two-week feeding contrast test was carried out in the outdoor culture environment to verify the accuracy of the model. The 4 202 perches were divided into two groups, 2 096 in the test group and 2 106 in the control group, where the amount of feed was determined according to the classification of the model and conventional experience, respectively. The total mass and number of fish in the two groups were recorded at the beginning and end of the test, as well as the total amount of consumed feed. The MobileNetV3-Small model achieved a combined accuracy of 99.60% on the test set, with an F1 score of 99.60%. Compared with the ResNet-18, ShuffleNetV2, and MobileNetV3-Large deep learning models, the MobileNetV3-Small model required the fewest floating point operations (582 M) and attained the highest average classification rate of 39.21 frames/s. Its combined accuracy was 12.74, 23.85, 3.60, and 2.78 percentage points higher than those of the traditional machine learning models KNN, SVM, GBDT, and Stacking, respectively. Furthermore, the test group achieved a lower feed conversion ratio of 1.42 and a higher weight gain ratio of 5.56% compared with the control group, indicating that the MobileNetV3-Small model performed better classification of the ingesting condition in a real outdoor culture environment. Consequently, classification of the ingesting condition can support efficient decision-making on the amount of fish feed, which is conducive to fish growth.
The findings can provide a reference for efficient and intelligent feeding in intensive culture environments. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
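To make the described pipeline concrete, the sketch below shows a binary "hungry" vs. "non-hungry" image classifier built on MobileNetV3-Small in TensorFlow 2 / Keras, with augmentation layers that approximate the four operations named in the abstract (random flipping, random cropping, Gaussian noise, and color jittering). This is a minimal illustration, not the authors' released code: the directory layout, image sizes, noise magnitude, and training hyperparameters are assumptions.

```python
# A minimal, self-contained sketch (not the authors' released code) of a
# binary "hungry" / "non-hungry" classifier built on MobileNetV3-Small with
# TensorFlow 2 / Keras. Paths, image sizes, and hyperparameters are assumed.
import tensorflow as tf

LOAD_SIZE = (256, 256)   # images loaded slightly larger so random crops vary
CROP_SIZE = (224, 224)   # assumed network input resolution
BATCH = 32

# Assumed directory layout: data/train/{hungry,non_hungry}, data/val/{...}
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=LOAD_SIZE, batch_size=BATCH, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=LOAD_SIZE, batch_size=BATCH, label_mode="binary")

# Approximations of the four augmentations named in the abstract; these
# layers only perturb the images during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),  # random flipping
    tf.keras.layers.RandomCrop(*CROP_SIZE),                 # random cropping
    tf.keras.layers.GaussianNoise(5.0),                     # Gaussian noise (0-255 pixel scale)
    tf.keras.layers.RandomContrast(0.2),                    # stand-in for color jittering
])

# Recent Keras versions rescale raw 0-255 inputs inside MobileNetV3 itself.
backbone = tf.keras.applications.MobileNetV3Small(
    input_shape=CROP_SIZE + (3,), include_top=False, weights="imagenet")

inputs = tf.keras.Input(shape=LOAD_SIZE + (3,))
x = augment(inputs)
x = backbone(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # hungry / non-hungry
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Placing the augmentation layers inside the model keeps them active only during training, so the same saved model can be used directly for inference on deployment hardware.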
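The feeding contrast test is summarized by two indicators, feed conversion ratio and weight gain ratio. The helper below uses the standard aquaculture definitions of these quantities; the function names and the example numbers are hypothetical and are not taken from the paper's measurements.

```python
def feed_conversion_ratio(feed_consumed_kg, initial_mass_kg, final_mass_kg):
    """Feed conversion ratio (FCR): total feed consumed divided by wet-mass gain."""
    return feed_consumed_kg / (final_mass_kg - initial_mass_kg)


def weight_gain_ratio(initial_mass_kg, final_mass_kg):
    """Weight gain ratio: mass gain relative to initial mass, as a percentage."""
    return (final_mass_kg - initial_mass_kg) / initial_mass_kg * 100.0


# Hypothetical illustration only (not the paper's data):
fcr = feed_conversion_ratio(feed_consumed_kg=50.0,
                            initial_mass_kg=100.0, final_mass_kg=135.0)
wgr = weight_gain_ratio(initial_mass_kg=100.0, final_mass_kg=135.0)
print(f"FCR = {fcr:.2f}, weight gain ratio = {wgr:.1f}%")  # FCR ≈ 1.43, 35.0%
```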
Pages: 165-172
Number of pages: 7