Lightweight Substation Equipment Defect Detection Algorithm for Small Targets

Times Cited: 2
Authors
Wang, Jianqiang [1]
Sun, Yiwei [2]
Lin, Ying [2]
Zhang, Ke [1,3]
Affiliations
[1] North China Elect Power Univ, Dept Elect & Commun Engn, Baoding 071003, Peoples R China
[2] State Grid Shandong Elect Power Res Inst, Jinan 250003, Peoples R China
[3] North China Elect Power Univ, Hebei Key Lab Power Internet Things Technol, Baoding 071003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
defect detection; deep learning; substation equipment; small object detection; lightweight; YOLOv8;
DOI
10.3390/s24185914
CLC Classification Number
O65 [Analytical Chemistry];
Discipline Classification Code
070302; 081704;
Abstract
Substation equipment defect detection has always played an important role in equipment operation and maintenance. However, its task scenarios are complex and varied, and recent studies have revealed a significant missed-detection rate and reduced precision for small targets. At the same time, current mainstream detection algorithms are highly complex, which hinders deployment on resource-constrained devices. To address these problems, a lightweight, small-target-oriented defect detection algorithm for main substation scenes is proposed: Efficient Attentional Lightweight-YOLO (EAL-YOLO). Its detection accuracy exceeds that of current mainstream models, while its parameter count and floating-point operations (FLOPs) remain advantageous. Firstly, EfficientFormerV2 is used to optimize the model backbone, and the Large Separable Kernel Attention (LSKA) mechanism is incorporated into the Spatial Pyramid Pooling Fast (SPPF) module to strengthen feature extraction; secondly, a small-target neck network, Attentional Scale Sequence Fusion P2-Neck (ASF2-Neck), is proposed to improve the detection of small defects; finally, to facilitate deployment on resource-constrained devices, a lightweight shared-convolution detection head, the Lightweight Shared Convolutional Head (LSCHead), is proposed. Experiments show that, compared with YOLOv8n, EAL-YOLO improves accuracy by 2.93 percentage points and reaches an mAP50 of 92.26% on 12 types of typical equipment defects. At the same time, its FLOPs and parameter count are 46.5% and 61.17% lower, respectively, than those of YOLOv8s, meeting the needs of substation defect detection.
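One of the modifications the abstract names, inserting LSKA into the SPPF block, can be sketched from publicly known definitions of those modules. Below is a minimal PyTorch sketch assuming the standard YOLOv8 SPPF layout and the separable large-kernel attention of Lau et al.; the class names (LSKA, SPPF_LSKA), kernel size, and dilation rate are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: SPPF with an LSKA gate on the pooled features (assumed layout).
import torch
import torch.nn as nn

class LSKA(nn.Module):
    """Large Separable Kernel Attention: the 2-D large-kernel depthwise
    convolutions of LKA are split into cascaded 1-D (1xk / kx1) depthwise
    convolutions plus a pointwise conv; the result gates the input."""
    def __init__(self, dim: int, k: int = 11, dilation: int = 3):
        super().__init__()
        # local 1-D depthwise convolutions
        self.conv_h0 = nn.Conv2d(dim, dim, (1, 5), padding=(0, 2), groups=dim)
        self.conv_v0 = nn.Conv2d(dim, dim, (5, 1), padding=(2, 0), groups=dim)
        # dilated 1-D depthwise convolutions for long-range context
        p = dilation * (k - 1) // 2
        self.conv_h1 = nn.Conv2d(dim, dim, (1, k), padding=(0, p),
                                 dilation=(1, dilation), groups=dim)
        self.conv_v1 = nn.Conv2d(dim, dim, (k, 1), padding=(p, 0),
                                 dilation=(dilation, 1), groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)  # pointwise channel mixing

    def forward(self, x):
        attn = self.conv_v0(self.conv_h0(x))
        attn = self.conv_v1(self.conv_h1(attn))
        attn = self.pw(attn)
        return x * attn  # attention map gates the input features

class SPPF_LSKA(nn.Module):
    """YOLOv8-style SPPF with LSKA applied to the concatenated pooled features."""
    def __init__(self, c_in: int, c_out: int, pool_k: int = 5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = nn.Sequential(nn.Conv2d(c_in, c_hidden, 1, bias=False),
                                 nn.BatchNorm2d(c_hidden), nn.SiLU())
        self.pool = nn.MaxPool2d(pool_k, stride=1, padding=pool_k // 2)
        self.lska = LSKA(c_hidden * 4)
        self.cv2 = nn.Sequential(nn.Conv2d(c_hidden * 4, c_out, 1, bias=False),
                                 nn.BatchNorm2d(c_out), nn.SiLU())

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)           # three cascaded max-pools approximate
        y2 = self.pool(y1)          # pooling at growing receptive fields
        y3 = self.pool(y2)
        return self.cv2(self.lska(torch.cat((x, y1, y2, y3), dim=1)))

if __name__ == "__main__":
    feat = torch.randn(1, 256, 20, 20)       # deepest backbone feature map
    print(SPPF_LSKA(256, 256)(feat).shape)   # torch.Size([1, 256, 20, 20])
```

Because the 2-D large-kernel depthwise convolutions are decomposed into 1-D pairs, the attention adds only a small parameter and FLOP overhead, which is consistent with the lightweight goal stated in the abstract.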
Pages: 16