A Lightweight and Efficient Multi-Type Defect Detection Method for Transmission Lines Based on DCP-YOLOv8

Cited by: 4
Authors
Wang, Yong [1 ]
Zhang, Linghao [2 ]
Xiong, Xingzhong [1 ]
Kuang, Junwei [2 ]
Xiang, Siyu [2 ]
Affiliations
[1] Sichuan Univ Sci & Engn, Sch Automat & Informat Engn, Yibin 644000, Peoples R China
[2] State Grid Corp Sichuan Prov, Res Inst Elect Power Sci, Chengdu 610095, Peoples R China
Keywords
transmission lines; defect detection; machine vision; YOLOv8; lightweight; deformable convolution
DOI
10.3390/s24144491
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
Currently, intelligent defect detection on massive volumes of grid transmission line inspection images using AI image recognition technology is an efficient and popular approach. Defect detection models are usually built along one of two technical routes: a lightweight network, which improves efficiency but generally targets only a few defect types and may reduce detection accuracy; or a complex network, which improves accuracy and can identify multiple defect types simultaneously, but at a high computational cost and low efficiency. To preserve both high detection accuracy and a lightweight structure, this paper proposes a lightweight and efficient multi-type defect detection method for transmission lines based on DCP-YOLOv8. The method employs deformable convolution (C2f_DCNv3) to enhance defect feature extraction; designs a re-parameterized cross-phase feature fusion structure (RCSP) that fuses high-level semantic features with low-level spatial features, improving the model's ability to recognize defects at different scales while significantly reducing its parameter count; and combines a dynamic detection head with deformable convolution v3 (DCNv3-Dyhead) to strengthen feature expression and the use of contextual information, further improving detection accuracy. Experimental results on a dataset containing 20 types of real transmission line defects show that the method raises the mean average precision (mAP@0.5) to 72.2%, 4.3 percentage points above the lightest baseline, YOLOv8n; the model has only 2.8 M parameters, a 9.15% reduction; and it processes 103 frames per second (FPS), meeting real-time detection requirements. In multi-type defect detection scenarios, the method effectively balances detection accuracy and efficiency, with quantifiable generalizability.
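The deformable-convolution idea underlying the C2f_DCNv3 module can be illustrated with a minimal NumPy sketch: each kernel tap samples the input at a learned fractional offset via bilinear interpolation, rather than on the fixed grid of an ordinary convolution. This is only an illustrative single-channel 3x3 version, not the paper's operator; DCNv3 additionally uses grouped sampling and modulation weights, and all names here are illustrative.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2D array at fractional coords (y, x); zero padding outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                # Weights are the (1 - distance) products of standard bilinear interpolation.
                val += img[yy, xx] * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return val

def deform_conv2d(img, weight, offsets):
    """Deformable 3x3 'valid' convolution on a single-channel image.

    offsets has shape (H_out, W_out, 9, 2): a learned (dy, dx) displacement
    for each of the 9 kernel taps at each output position.
    """
    k = 3
    H_out, W_out = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            acc = 0.0
            for t in range(k * k):
                ky, kx = divmod(t, k)
                dy, dx = offsets[i, j, t]
                # Sample at the regular grid point shifted by the learned offset.
                acc += weight[ky, kx] * bilinear_sample(img, i + ky + dy, j + kx + dx)
            out[i, j] = acc
    return out
```

With all offsets set to zero, the sampling points fall back on the regular grid, so the result equals an ordinary (cross-correlation style) valid convolution; nonzero offsets let the kernel adapt its receptive field to irregular defect shapes, which is the motivation the abstract gives for using deformable convolution.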
Pages: 20