Automatic Shrimp Fry Counting Method Using Multi-Scale Attention Fusion

Cited by: 1
Authors
Peng, Xiaohong [1 ]
Zhou, Tianyu [1 ]
Zhang, Ying [1 ,2 ]
Zhao, Xiaopeng [1 ]
Affiliations
[1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China
[2] Zhanjiang Bay Lab, Southern Marine Sci & Engn Guangdong Lab, Zhanjiang 524000, Peoples R China
Keywords
smart aquaculture; deep learning; shrimp fry counting; SFCNet; multi-scale attention fusion;
DOI
10.3390/s24092916
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Code
070302 ; 081704 ;
Abstract
Shrimp fry counting is an important task for biomass estimation in aquaculture. Accurately counting the shrimp fry in a tank not only supports estimating the eventual production of mature shrimp but also reveals the stocking density, which informs subsequent growth monitoring, transportation management, and yield assessment. Traditional manual counting, however, is inefficient and prone to error, so a more efficient and accurate counting method is urgently needed. In this paper, we first collected and labeled images of shrimp fry in breeding tanks under the constructed experimental environment and generated the corresponding density maps using a Gaussian kernel function. We then proposed a multi-scale attention fusion-based shrimp fry counting network, the SFCNet. Experiments showed that the SFCNet achieved the best shrimp fry counting performance among CNN-based baseline counting models, with an MAE of 3.96 and an RMSE of 4.682. The approach counts shrimp fry effectively and provides a practical solution for accurate shrimp fry enumeration.
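The abstract mentions two concrete technical steps: converting point annotations into ground-truth density maps with a Gaussian kernel, and evaluating counts with MAE and RMSE. The sketch below is a minimal illustration of that standard pipeline for density-map-based counting; the function names, the fixed bandwidth sigma=4.0, and the use of scipy are assumptions for illustration and are not taken from the paper, whose exact kernel settings are not given in this record.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density_map(points, height, width, sigma=4.0):
    """Turn point annotations (one (x, y) per shrimp fry) into a density map
    by placing a unit impulse at each annotated location and smoothing with a
    Gaussian kernel. The map sums to the object count in the image."""
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        col = min(max(int(round(x)), 0), width - 1)
        row = min(max(int(round(y)), 0), height - 1)
        density[row, col] += 1.0
    # Fixed-bandwidth smoothing; sigma is an assumed value, not the paper's.
    return gaussian_filter(density, sigma=sigma)

def counting_errors(predicted_counts, true_counts):
    """MAE and RMSE over a set of test images, the two metrics the abstract
    reports for SFCNet (3.96 and 4.682)."""
    pred = np.asarray(predicted_counts, dtype=np.float64)
    true = np.asarray(true_counts, dtype=np.float64)
    mae = np.mean(np.abs(pred - true))
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    return mae, rmse
```

At inference time, the predicted count for an image is simply the sum over the network's predicted density map, which is what the error metrics above are computed against.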
Pages: 12