Counting piglet suckling events using deep learning-based action density estimation

Cited by: 4
|
Authors
Gan, Haiming [1 ,2 ,3 ,4 ]
Guo, Jingfeng [1 ]
Liu, Kai [3 ]
Deng, Xinru [1 ]
Zhou, Hui [1 ]
Luo, Dehuan [1 ]
Chen, Shiyun [1 ]
Norton, Tomas [2 ]
Xue, Yueju [1 ,4 ,5 ]
Affiliations
[1] South China Agr Univ, Coll Elect Engn, Guangzhou 510642, Guangdong, Peoples R China
[2] Katholieke Univ Leuven KU LEUVEN, Fac Biosci Engn, Kasteelpk Arenberg 30, B-3001 Leuven, Belgium
[3] City Univ Hong Kong, Jockey Club Coll Vet Med & Life Sci, Dept Infect Dis & Publ Hlth, Hong Kong, Peoples R China
[4] Guangdong Lab Lingnan Modern Agr, Guangzhou 510642, Guangdong, Peoples R China
[5] South China Agr Univ, Coll Elect Engn, Wushan 483, Guangzhou, Guangdong, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Piglet suckling behaviour; Action density; Two-stream network; Precision livestock farming; AUTOMATED DETECTION; BEHAVIOR; SOWS; RECOGNITION; PIGS;
DOI
10.1016/j.compag.2023.107877
Chinese Library Classification
S [Agricultural Sciences];
Discipline Code
09;
Abstract
Analysis of piglet suckling behaviour is important for evaluating piglet nutrient intake, health, welfare, and affinity with the sow. In this study, an action density estimation network (ADEN) was proposed for counting piglet suckling events, followed by automated analysis of suckling behaviour. ADEN is a two-stream network composed primarily of 1) a stream that processes video frames at a higher frame rate (the faster stream) and 2) a stream that processes video frames at a lower frame rate (the slower stream). Each stream consists of a ResNet-50 with five convolutional stages. A multi-stage attention connection (MSAC), composed of four Spatial-Temporal-Channel (STC) multi-attention structures, is proposed to bridge the faster and slower streams and capture discriminative features. The attention features output by each faster-stream stage are laterally fused into the corresponding slower-stream stage by concatenation. The features from the last convolutional stage of each stream are then fused by concatenation and decoded by three convolutional layers, the last of which outputs a heatmap of the action density of piglet suckling behaviour. Finally, the number of suckling events is predicted by integrating all pixel values in the heatmap. Experiments and comparative tests were conducted to validate the effectiveness of the proposed ADEN on training and test datasets from 14 pig pens. A total of 507 video clips (126,750 images; 7 h) from pens 1-9 were selected for training. A further 143 video clips (35,750 images; 2 h) from pens 10-13 served as the short-term test dataset. One untrimmed video (162,000 images; 9 h) from pen 14 was used for the final evaluation of ADEN's action density estimation performance.
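The lateral fusion between the two streams can be sketched with plain array operations. This is a minimal illustration under invented assumptions, not the authors' implementation: the frame counts, channel counts, and spatial size below are made up for the example, and real SlowFast-style lateral connections typically use a strided temporal convolution rather than a plain reshape.

```python
import numpy as np

# Hypothetical feature maps after one convolutional stage
# (frames, channels, height, width); all shapes are illustrative.
slow = np.random.rand(4, 64, 28, 28)   # slower stream: few frames, many channels
fast = np.random.rand(16, 8, 28, 28)   # faster stream: many frames, few channels

# Lateral connection: regroup the faster stream's extra frames into the
# channel axis so its temporal length matches the slower stream, then
# fuse the two streams by concatenating along the channel axis.
ratio = fast.shape[0] // slow.shape[0]                       # 4 fast frames per slow frame
fast_packed = fast.reshape(slow.shape[0], ratio * fast.shape[1], 28, 28)
fused = np.concatenate([slow, fast_packed], axis=1)

print(fused.shape)  # (4, 96, 28, 28): 64 slow + 4*8 packed fast channels
```

After the same fusion at the final stage, a small convolutional decoder (three layers in the paper) would map `fused` to the density heatmap.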
ADEN was compared with seven approaches and outperformed all of them, achieving r = 0.938, RMSE = 1.080, and MAE = 0.967 on the short video clips, and r = 0.982, MAE = 0.161, and RMSE = 0.563 on the untrimmed long video. These results demonstrate the feasibility of predicting the number of suckling events by action density estimation.
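The counting mechanism the abstract describes, integrating a predicted density heatmap to obtain an event count, can be illustrated with a small sketch. Since the trained network is not available here, the heatmap below is a synthetic stand-in built from unit-mass Gaussian kernels; `event_centres` and `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_density_map(shape, centres, sigma=4.0):
    """Build a density map: one unit-mass Gaussian per event centre.

    Because each kernel is normalised to sum to 1, integrating the whole
    map recovers the number of events, which is the same principle ADEN
    relies on when it sums the pixels of its predicted heatmap.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=np.float64)
    for cy, cx in centres:
        kernel = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        density += kernel / kernel.sum()  # normalise each kernel to unit mass
    return density

def count_events(heatmap):
    """Predict the event count by integrating all pixel values."""
    return float(heatmap.sum())

# Three hypothetical suckling events on a 64x64 heatmap.
event_centres = [(16, 16), (32, 40), (50, 20)]
heatmap = gaussian_density_map((64, 64), event_centres)
print(round(count_events(heatmap)))  # → 3
```

In the real system the heatmap comes from the decoder rather than from known event centres, but the final step, summing the map to a scalar count, is the same.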
Pages: 12