Posture change recognition of lactating sow by using 2D-3D convolution feature fusion

Authors
Xue Y. [1]
Li S. [1]
Zheng C. [2]
Gan H. [1]
Li C. [1]
Liu H. [1]
Affiliations
[1] College of Electronic Engineering, South China Agricultural University, Guangzhou
[2] College of Mathematics and Informatics, South China Agricultural University, Guangzhou
Source
Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering | 2021, Vol. 37, No. 9
Keywords
Action recognition; Convolution; Neural networks; Posture change; Spatiotemporal feature fusion; Temporal localization
DOI
10.11975/j.issn.1002-6819.2021.09.026
Abstract
Posture changes of lactating sows directly affect the preweaning survival rate of piglets. Automated recognition of sow posture changes enables early warning that can improve piglet survival, and the frequency, type, and duration of posture changes can be used to select sows of high maternal quality as breeding pigs. However, accurately recognizing posture-change actions is difficult because posture changes vary in type and differ in range and duration of movement. In this study, a convolutional network with 2D-3D convolution feature fusion (2D+3D-CNet) was proposed to recognize posture-change actions of sows in depth images. Experimental data were collected from a commercial pig farm in Foshan City, Guangdong Province, South China. A Kinect 2.0 camera was fixed directly above the pen to record the daily activities of sows from a top view at 5 frames per second, capturing RGB-D video with a depth image resolution of 512×424 pixels. Median filtering and histogram equalization were used to preprocess the dataset, and the resulting video clips were fed into 2D+3D-CNet for training and testing. 2D+3D-CNet comprises spatiotemporal feature extraction, spatial feature extraction, feature fusion, action recognition, and posture classification, fully integrating video-level action recognition with frame-level posture classification. First, 16-frame video clips were fed into the network, where 3D ResNeXt-50 and Darknet-53 extracted the spatiotemporal and spatial features of sow movement, respectively. A Squeeze-and-Excitation (SE) module was added to the residual blocks of 3D ResNeXt-50, yielding 3D SE-ResNeXt-50, to boost the representational power of the network. After feature fusion, the action recognition branch produced the sow bounding box and the probability of a posture change. The bounding box was then mapped onto the feature map of the 13th convolutional layer of Darknet-53 to extract the sow's regional feature maps, which were fed into the posture classification branch to obtain the probabilities of the four postures. Considering the spatiotemporal motion and the inter-frame posture variation during a posture change, an action score was designed to indicate the likelihood of a posture change, and a threshold on this score determined the start and end times of a posture-change action. Once the start and end times were determined, the specific posture change was classified by combining the sow's posture one second before the start time with its posture one second after the end time. This design can recognize a specific posture-change action directly, without collecting and annotating a large dataset. The 2D+3D-CNet model was trained with the PyTorch deep learning framework on an NVIDIA RTX 2080Ti GPU, and the algorithm was developed on Ubuntu 16.04. On the test set, the classification accuracies for lateral lying, standing, sitting, and ventral lying were 100%, 98.69%, 98.24%, and 98.19%, respectively; the total recognition accuracy of posture-change actions was 97.95%, the total recall rate was 91.67%, and the inference speed was 14.39 frames/s. Compared with YOWO and MOC-D, the accuracy increased by 5.06 and 5.53 percentage points, and the recall rate by 3.65 and 5.90 percentage points, respectively. Although the 2D+3D-CNet model is larger than FRCNN-HMM, it has advantages in accuracy, recall, and test speed. The presented method removes the need for hand-crafted features and achieves real-time inference with more accurate action localization. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
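The preprocessing described in the abstract (median filtering followed by histogram equalization of the Kinect 2.0 depth frames) could look like the following minimal sketch. The function name, the 8-bit rescaling step, and the 5×5 kernel are illustrative assumptions, not details given in the paper.

```python
import cv2
import numpy as np

def preprocess_depth_frame(depth_raw: np.ndarray) -> np.ndarray:
    """Denoise and enhance one 512x424 Kinect 2.0 depth frame."""
    # Rescale raw 16-bit depth values into 8-bit range, since
    # cv2.equalizeHist requires uint8 input (assumed step).
    depth_8u = cv2.normalize(depth_raw, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    # Median filtering suppresses the salt-and-pepper noise typical
    # of time-of-flight depth sensors (kernel size is an assumption).
    denoised = cv2.medianBlur(depth_8u, 5)
    # Histogram equalization stretches contrast so the sow stands
    # out from the pen floor.
    return cv2.equalizeHist(denoised)
```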
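The SE module added to the residual blocks of 3D ResNeXt-50 presumably follows the standard Squeeze-and-Excitation design extended to 3D clip features; a minimal PyTorch sketch under that assumption is below. The reduction ratio and the placement noted in the closing comment are assumptions.

```python
import torch
import torch.nn as nn

class SE3D(nn.Module):
    """Squeeze-and-Excitation gate for 5D clip tensors (N, C, T, H, W)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)   # squeeze: global spatiotemporal context
        self.fc = nn.Sequential(              # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * w                          # recalibrate channel responses

# In a residual block the gate would sit on the residual branch before
# the skip connection, e.g. out = relu(se(branch(x)) + shortcut(x)).
```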
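Mapping the predicted sow bounding box onto the 13th-convolutional-layer feature map of Darknet-53 amounts to cropping region features at that layer's spatial stride; one way to express this with torchvision's roi_align is sketched below. The stride of 8, the 7×7 output size, and the use of RoIAlign (rather than whatever cropping the authors used) are assumptions.

```python
import torch
from torchvision.ops import roi_align

def crop_region_features(feat: torch.Tensor, box_xyxy: torch.Tensor,
                         stride: int = 8) -> torch.Tensor:
    """feat: (1, C, H, W) feature map; box_xyxy: (1, 4) box in image pixels."""
    # Prepend the batch index expected by roi_align: (batch_idx, x1, y1, x2, y2).
    rois = torch.cat([torch.zeros(1, 1), box_xyxy], dim=1)
    # spatial_scale converts image coordinates to feature-map coordinates.
    return roi_align(feat, rois, output_size=(7, 7),
                     spatial_scale=1.0 / stride)
```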
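The temporal-localization logic, thresholding a per-frame action score to find the start and end of a posture change and then labeling the change from the postures one second before and after, could be sketched as follows. The threshold value and the per-frame score and posture sequences are assumed inputs; the abstract does not state the threshold used.

```python
from typing import List, Tuple

FPS = 5        # frame rate from the abstract
THRESH = 0.5   # assumed action-score threshold

def localize_changes(scores: List[float]) -> List[Tuple[int, int]]:
    """Return (start, end) frame indices of runs where the score exceeds THRESH."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= THRESH and start is None:
            start = i                        # a posture change begins
        elif s < THRESH and start is not None:
            segments.append((start, i - 1))  # the change ends
            start = None
    if start is not None:
        segments.append((start, len(scores) - 1))
    return segments

def label_change(postures: List[str], start: int, end: int) -> str:
    """Name the change from the postures 1 s before and 1 s after the segment."""
    before = postures[max(start - FPS, 0)]
    after = postures[min(end + FPS, len(postures) - 1)]
    return f"{before} -> {after}"            # e.g. "standing -> lateral lying"
```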
Pages: 230-237
Page count: 7