Pigeon cleaning behavior detection algorithm based on light-weight network

Cited by: 15
Authors
Guo, Jianjun [1 ,2 ,3 ]
He, Guohuang [1 ,2 ,3 ]
Deng, Hao [1 ,2 ,3 ]
Fan, Wenting [1 ,2 ,3 ]
Xu, Longqin [1 ,2 ,3 ]
Cao, Liang [1 ,2 ,3 ]
Feng, Dachun [1 ,2 ,3 ]
Li, Jingbin [4 ]
Wu, Huilin [5 ]
Lv, Jiawei [1 ,2 ,3 ]
Liu, Shuangyin [1 ,2 ,3 ]
Hassan, Shahbaz Gul [1 ,2 ,3 ]
Affiliations
[1] Zhongkai Univ Agr & Engn, Guangzhou Key Lab Agr Prod Qual & Safety Traceabil, Guangzhou 510225, Peoples R China
[2] Zhongkai Univ Agr & Engn, Coll Informat Sci & Technol, Guangzhou 510225, Peoples R China
[3] Zhongkai Univ Agr & Engn, Acad Intelligent Agr Engn Innovat, Guangzhou 510225, Peoples R China
[4] Shihezi Univ, Coll Mech & Elect Engn, Shihezi 832000, Peoples R China
[5] Natl S&T Innovat Ctr Modern Agr Ind Guangzhou Shor, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China; National Key Technologies Research and Development Program of China;
Keywords
Target detection; Pigeon cleaning behavior; Light-weight network; YOLO v4; Ghostnet;
DOI
10.1016/j.compag.2022.107032
Chinese Library Classification (CLC)
S [Agricultural Sciences];
Subject Classification Code
09 ;
Abstract
The behavior of pigeons in the dovecote reflects their environmental comfort and health. To overcome the time consumption, labor cost, and subjectivity of traditional manual observation, an improved light-weight YOLO V4 target detection algorithm was proposed for detecting the behavior of breeding pigeons. GhostNet was adopted as the backbone, and SPP, FPN, and PANet networks were employed to strengthen the features it extracts. While maintaining accuracy, GhostNet-YOLO V4 reduced the number of parameters and brought the model size down to 43 MB. Under the modified model, the light-weight feature extraction network GhostNet outperformed MobileNet V1-V3. Compared with Faster RCNN, SSD, YOLO V4, and YOLO V3, the model compression rate increased by 43.4 percent, 35.8 percent, 70.1 percent, and 69.1 percent, respectively. The improved algorithm achieves an accuracy of 97.06 percent and a recognition speed of 0.028 s per frame, and can provide a theoretical foundation and technological reference for real-time detection of breeding pigeon behavior in a dovecote.
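As a rough illustration of the backbone idea only (not the authors' code), the sketch below shows a minimal PyTorch Ghost module in the spirit of GhostNet: a small set of "intrinsic" feature maps produced by an ordinary convolution, plus cheap depthwise-generated "ghost" maps, concatenated up to the target channel count. The class name, channel counts, and hyperparameters are assumptions chosen for illustration.

# Minimal sketch of a GhostNet-style Ghost module (illustrative assumption,
# not the implementation used in the paper).
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        self.out_channels = out_channels
        init_channels = math.ceil(out_channels / ratio)   # "intrinsic" maps from a normal conv
        new_channels = init_channels * (ratio - 1)         # "ghost" maps from a cheap depthwise conv
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y1 = self.primary_conv(x)        # intrinsic feature maps
        y2 = self.cheap_operation(y1)    # ghost feature maps generated cheaply
        out = torch.cat([y1, y2], dim=1)
        return out[:, :self.out_channels, :, :]

# Quick shape check on a dummy input (416x416 is an assumed YOLO-style resolution).
if __name__ == "__main__":
    m = GhostModule(3, 16)
    print(m(torch.randn(1, 3, 416, 416)).shape)  # torch.Size([1, 16, 416, 416])

Replacing standard convolutions with blocks like this is what drives the parameter and model-size reduction described in the abstract, since part of each layer's output is generated by inexpensive depthwise operations rather than full convolutions.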
Pages: 14
Related papers
41 records in total
  • [1] LWFD: A Simple Light-Weight Network for Face Detection
    Liang, Huan
    Hu, Jiani
    Deng, Weihong
    BIOMETRIC RECOGNITION (CCBR 2019), 2019, 11818 : 207 - 215
  • [2] Fusing Deep Dilated Convolutions Network and Light-Weight Network for Object Detection
    Quan Y.
    Li Z.-X.
    Zhang C.-L.
    Ma H.-F.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2020, 48 (02): : 390 - 397
  • [3] Object Detection by Combining Deep Dilated Convolutions Network and Light-Weight Network
    Quan, Yu
    Li, Zhixin
    Zhang, Canlong
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2019, PT I, 2019, 11775 : 452 - 463
  • [4] An Efficient Light-weight Network for Fast Reconstruction on MR Images
    Zhen, Bowen
    Zheng, Yingjie
    Qiu, Bensheng
    CURRENT MEDICAL IMAGING, 2021, 17 (11) : 1374 - 1384
  • [5] Light-weight segmentation network based on SOLOv2 for weld seam feature extraction
    Zou, Yanbiao
    Zeng, Guohao
    MEASUREMENT, 2023, 208
  • [6] LightSeg: A Light-weight Network for Real-time Semantic Segmentation
    Ye, Run
    Li, Benhui
    Yan, Bin
    Li, Zhiyong
    THIRTEENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2021), 2021, 11878
  • [7] Light-Weight Semantic Segmentation Network for UAV Remote Sensing Images
    Liu, Siyu
    Cheng, Jian
    Liang, Leikun
    Bai, Haiwei
    Dang, Wanli
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 8287 - 8296
  • [8] IDPNet: a light-weight network and its variants for human pose estimation
    Liu, Huan
    Wu, Jian
    He, Rui
    The Journal of Supercomputing, 2024, 80 : 6169 - 6191
  • [9] IDPNet: a light-weight network and its variants for human pose estimation
    Liu, Huan
    Wu, Jian
    He, Rui
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (05) : 6169 - 6191
  • [10] A light-weight object detection method based on knowledge distillation and model pruning for seam tracking system
    Zou, Yanbiao
    Liu, Chunyuan
    MEASUREMENT, 2023, 220