Counting piglet suckling events using deep learning-based action density estimation

Cited: 4
Authors
Gan, Haiming [1 ,2 ,3 ,4 ]
Guo, Jingfeng [1 ]
Liu, Kai [3 ]
Deng, Xinru [1 ]
Zhou, Hui [1 ]
Luo, Dehuan [1 ]
Chen, Shiyun [1 ]
Norton, Tomas [2 ]
Xue, Yueju [1 ,4 ,5 ]
Affiliations
[1] South China Agr Univ, Coll Elect Engn, Guangzhou 510642, Guangdong, Peoples R China
[2] Katholieke Univ Leuven KU LEUVEN, Fac Biosci Engn, Kasteelpk Arenberg 30, B-3001 Leuven, Belgium
[3] City Univ Hong Kong, Jockey Club Coll Vet Med & Life Sci, Dept Infect Dis & Publ Hlth, Hong Kong, Peoples R China
[4] Guangdong Lab Lingnan Modern Agr, Guangzhou 510642, Guangdong, Peoples R China
[5] South China Agr Univ, Coll Elect Engn, Wushan 483, Guangzhou, Guangdong, Peoples R China
Funding
China Postdoctoral Science Foundation
Keywords
Piglet suckling behaviour; Action density; Two-stream network; Precision livestock farming; AUTOMATED DETECTION; BEHAVIOR; SOWS; RECOGNITION; PIGS
DOI
10.1016/j.compag.2023.107877
Chinese Library Classification
S [Agricultural Sciences]
Discipline Code
09
Abstract
Analysis of piglet suckling behaviour is important for evaluating piglet nutrient ingestion, health, welfare, and affinity with the sow. In this study, an action density estimation network (ADEN) was proposed for counting piglet suckling events, followed by automated analysis of suckling behaviour. ADEN is a two-stream network primarily composed of 1) a network stream that processes video frames at a higher frame rate (the faster stream) and 2) a network stream that processes video frames at a lower frame rate (the slower stream). Each stream consists of a ResNet-50 with five convolutional stages. A multi-stage attention connection (MSAC), composed of four Spatial-Temporal-Channel (STC) multi-attention structures, is proposed to bridge the faster and slower streams and capture discriminative features. The output attention features from each faster-stream stage are laterally fused into the corresponding slower-stream stage by concatenation. The features from the last convolutional stages of the two streams are then fused by concatenation and decoded by three convolutional layers. The last convolutional layer outputs a heatmap of the action density of piglet suckling behaviour, and the number of suckling events is predicted by integrating all pixel values in the heatmap. Experimental and comparative tests were conducted to validate the effectiveness of the proposed ADEN with training and test datasets from 14 pig pens: 507 video clips (126,750 frames, 7 h) from pens 1-9 were selected for training, 143 video clips (35,750 frames, 2 h) from pens 10-13 served as the short-term test set, and one untrimmed video (162,000 frames, 9 h) from pen 14 was used to evaluate the final action density estimation performance of ADEN.
ADEN was compared with seven approaches and outperformed them, achieving r = 0.938, RMSE = 1.080, and MAE = 0.967 on the short video clips, and r = 0.982, RMSE = 0.563, and MAE = 0.161 on the untrimmed long video. These results demonstrate that predicting the number of suckling events by action density estimation is feasible.
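The abstract's pipeline — two input streams sampled at different frame rates, a predicted density heatmap, and a count obtained by integrating its pixels — can be illustrated with a small numpy sketch. Everything here is a hypothetical toy: the clip length, the speed ratio `ALPHA`, and the heatmap values are invented for illustration, and the real ADEN uses ResNet-50 streams with learned STC attention rather than this placeholder arithmetic.

```python
import numpy as np

# --- Two-stream input sampling (assumed values) ---
# A hypothetical clip: 64 frames of 224x224 RGB.
clip = np.zeros((64, 224, 224, 3), dtype=np.float32)
ALPHA = 8  # assumed frame-rate ratio between the faster and slower streams

fast_input = clip              # the faster stream sees every frame
slow_input = clip[::ALPHA]     # the slower stream sees every ALPHA-th frame

# --- Counting by integrating a density heatmap ---
# Toy output heatmap in which each suckling event contributes unit mass
# spread over a 4x4 patch.
heatmap = np.zeros((56, 56), dtype=np.float32)
heatmap[10:14, 10:14] = 1.0 / 16  # event 1
heatmap[30:34, 40:44] = 1.0 / 16  # event 2
heatmap[45:49, 5:9] = 1.0 / 16    # event 3

count = float(heatmap.sum())  # predicted number of events, here 3.0

# --- Evaluation metrics reported in the paper, on toy predictions ---
pred = np.array([3.0, 5.1, 2.2, 7.8])
true = np.array([3.0, 5.0, 2.0, 8.0])
r = float(np.corrcoef(pred, true)[0, 1])          # Pearson correlation
rmse = float(np.sqrt(np.mean((pred - true) ** 2)))  # root mean squared error
mae = float(np.mean(np.abs(pred - true)))           # mean absolute error
```

The key property of the density-map formulation is that counting reduces to a sum: because each event contributes unit mass to the heatmap, summing all pixels recovers the event count without localizing or tracking individual piglets.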
Pages: 12