DETECTION OF BIRD'S NEST ON TRANSMISSION LINES FROM AERIAL IMAGES BASED ON DEEP LEARNING MODEL

Cited by: 5
Authors
Zhang, Jie [1 ]
Qi, Qiye [1 ]
Zhang, Huanlong [1 ]
DU, Qifan [1 ]
Guo, Zhimin [2 ]
Tian, Yangyang [2 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Coll Elect & Informat Engn, No. 5 Dongfeng Rd, Zhengzhou 450002, Peoples R China
[2] State Grid Henan Elect Power Res Inst, No. 85 Songshan Rd, Zhengzhou 450052, Peoples R China
Source
INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL | 2022, Vol. 18, No. 06
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Bird's nest detection; AFF-YOLOv3; Attentional feature fusion; Intelligent inspection; FASTER;
DOI
10.24507/ijicic.18.06.1755
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Bird's nests on transmission lines pose a threat to the safe operation of transmission equipment and can even affect the stability of the entire power system. Recently, with the rapid development of 5G technology, unmanned aerial vehicle (UAV) technology, and artificial intelligence technology, intelligent patrol of transmission lines based on UAVs has become an inevitable trend in the development of power inspection. To address the low recognition accuracy and low recall of traditional methods for bird's nest detection against complex backgrounds, an improved YOLOv3 automatic bird's nest detection model based on attentional feature fusion (AFF-YOLOv3) is proposed in this paper. The model first adds an attentional feature fusion network to the YOLOv3 top-down sampling process and calculates semantic weights from the deep-level feature map; these semantic weights then guide the selection of low-level features, so that more valuable low-level features are retained. Finally, the selected low-level feature maps and the high-level feature maps are concatenated to obtain robust features that carry both location information and semantic information. The experimental results show that AFF-YOLOv3 achieves 87.58% average precision (AP) on the transmission line bird's nest dataset, and the model has stronger generalization ability and applicability than other detectors.
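The fusion step summarized in the abstract can be illustrated with a short Python (PyTorch) sketch. This is only an assumption of how the described mechanism might look in code, not the authors' implementation: the module name AttentionalFeatureFusion, the channel sizes, and the use of global average pooling followed by a 1x1 convolution and a sigmoid to produce the semantic weights are illustrative choices.

# Minimal sketch, assuming semantic weights are derived from the deep feature map
# and used to gate low-level features before concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalFeatureFusion(nn.Module):
    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        # Semantic weights come from the high-level (deep) feature map only.
        self.weight_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # global context of deep features
            nn.Conv2d(high_channels, low_channels, 1),  # project to low-level channel count
            nn.Sigmoid(),                               # per-channel weights in (0, 1)
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Gate low-level features with the semantic weights from the deep map.
        weights = self.weight_branch(high)              # shape (N, C_low, 1, 1)
        selected_low = low * weights                    # keep location cues judged useful
        # Upsample deep features to the low-level resolution and concatenate,
        # giving features with both location and semantic information.
        high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        return torch.cat([selected_low, high_up], dim=1)

# Example: fuse a 52x52 low-level map with a 13x13 deep map (YOLOv3-like shapes).
low = torch.randn(1, 256, 52, 52)
high = torch.randn(1, 1024, 13, 13)
fused = AttentionalFeatureFusion(256, 1024)(low, high)  # -> (1, 1280, 52, 52)

The design choice mirrors the abstract: the weights are computed only from the deep feature map, so the low-level branch is filtered by high-level semantics before the two resolutions are concatenated.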
Pages: 1755-1768
Number of pages: 14