Automated Individual Cattle Identification Using Video Data: A Unified Deep Learning Architecture Approach

Cited by: 9
Authors
Qiao, Yongliang [1 ]
Clark, Cameron [2 ]
Lomax, Sabrina [2 ]
Kong, He [1 ]
Su, Daobilige [3 ]
Sukkarieh, Salah [1 ]
Affiliations
[1] Univ Sydney, Fac Engn, Australian Ctr Field Robot, Sydney, NSW, Australia
[2] Univ Sydney, Fac Sci, Sch Life & Environm Sci, Livestock Prod & Welf Grp, Sydney, NSW, Australia
[3] China Agr Univ, Coll Engn, Beijing, Peoples R China
Source
FRONTIERS IN ANIMAL SCIENCE | 2021, Vol. 2
Keywords
cattle identification; deep learning; BiLSTM; self-attention; precision livestock farming; FULLY CONVOLUTIONAL NETWORKS; ACTION RECOGNITION; ATTENTION; LSTM; MODEL; COWS;
DOI
10.3389/fanim.2021.759147
Chinese Library Classification (CLC)
S8 [Animal husbandry, veterinary medicine, hunting, sericulture, apiculture]
Discipline Classification Code
0905
Abstract
Individual cattle identification is a prerequisite and foundation for precision livestock farming. Existing methods for cattle identification require radio-frequency or visual ear tags, both of which are prone to loss or damage. Here, we propose and implement a new unified deep learning approach to cattle identification using video analysis. The proposed deep learning framework is composed of a Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) with a self-attention mechanism. More specifically, the Inception-V3 CNN was used to extract features from a cattle video dataset captured from a rear view in a feedlot. Extracted features were then fed to a BiLSTM layer to capture spatio-temporal information. Self-attention was then employed to weight the features captured by the BiLSTM differently for the final cattle identification step. We used a total of 363 rear-view videos from 50 cattle, collected at three different times with a 1-month interval between data collection periods. The proposed method achieved 93.3% identification accuracy using a 30-frame video length, outperforming current state-of-the-art methods (Inception-V3, MLP, SimpleRNN, LSTM, and BiLSTM). Furthermore, two different attention schemes, namely additive and multiplicative attention mechanisms, were compared. Our results show that the additive attention mechanism achieved 93.3% accuracy and 91.0% recall, exceeding the multiplicative attention mechanism's 90.7% accuracy and 87.0% recall. Video length also affected accuracy, with sequence lengths of up to 30 frames improving identification performance. Overall, our approach can capture key spatio-temporal features to improve cattle identification accuracy, enabling automated cattle identification for precision livestock farming.
Pages: 14
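The following is a minimal sketch of the CNN + BiLSTM + additive self-attention pipeline described in the abstract above, written with TensorFlow/Keras. The 50-class output, 30-frame sequence length, Inception-V3 backbone, and additive (Bahdanau-style) attention follow the abstract; all other choices (hidden sizes, ImageNet weights, average pooling, optimizer and loss) are illustrative assumptions rather than details from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 50       # 50 individual cattle (from the abstract)
SEQ_LEN = 30           # 30-frame video clips (from the abstract)
IMG_SIZE = (299, 299)  # default Inception-V3 input resolution

def build_model():
    # Frozen Inception-V3 backbone turns each frame into a 2048-d feature vector.
    backbone = InceptionV3(include_top=False, pooling="avg",
                           input_shape=(*IMG_SIZE, 3), weights="imagenet")
    backbone.trainable = False

    frames = layers.Input(shape=(SEQ_LEN, *IMG_SIZE, 3))
    # Apply the CNN to every frame independently: (batch, 30, 2048)
    feats = layers.TimeDistributed(backbone)(frames)

    # BiLSTM captures temporal dependencies across the 30 frames: (batch, 30, 512)
    hidden = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(feats)

    # Additive (Bahdanau-style) self-attention over the time axis:
    # score_t = v^T tanh(W h_t + b); weights = softmax(scores); context = sum_t w_t h_t
    u = layers.Dense(128, activation="tanh")(hidden)    # (batch, 30, 128)
    scores = layers.Dense(1, use_bias=False)(u)          # (batch, 30, 1)
    weights = layers.Softmax(axis=1)(scores)              # attention weights over frames
    context = layers.Lambda(
        lambda x: tf.reduce_sum(x[0] * x[1], axis=1))([hidden, weights])  # (batch, 512)

    out = layers.Dense(NUM_CLASSES, activation="softmax")(context)
    return models.Model(frames, out)

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A multiplicative (Luong-style) variant, which the abstract compares against, would replace the tanh scoring step with a dot product between a learned query vector and each hidden state; in the reported results the additive form performed better (93.3% vs. 90.7% accuracy).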