YOLO-O2E: A Variant YOLO Model for Anomalous Rail Fastening Detection

Times Cited: 0
Authors
Chu, Zhuhong [1 ]
Zhang, Jianxun [1 ]
Wang, Chengdong [2 ]
Yang, Changhui [3 ]
Affiliations
[1] Chongqing Univ Technol, Dept Comp Sci & Engn, Chongqing 400054, Peoples R China
[2] Xiamen Univ, Inst Flexible Elect Future Technol, Xiamen 361000, Peoples R China
[3] Chongqing Univ Technol, Coll Mech Engn, Chongqing 400054, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2024, Vol. 80, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Rail fastening detection; deep learning; anomalous rail fastening; variant YOLO; feature reinforcement;
DOI
10.32604/cmc.2024.052269
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Rail fasteners are a crucial component of the railway transportation safety system. Distinguished by their high length-to-width ratio, these fasteners frequently suffer elevated failure rates, necessitating manual inspection and maintenance. Manual inspection is not only time-consuming but also prone to oversights. To address challenges such as the complex background of rail fasteners and the visual similarity between their states, we apply deep learning and propose an efficient, high-precision rail fastener detection algorithm named YOLO-O2E (you only look once-O2E). First, we propose the EFOV (Enhanced Field of View) structure, which adjusts the effective receptive field size of the convolutional kernels to make the model less sensitive to small spatial variations. In addition, the OD_MP (ODConv and MP_2) and EMA (Efficient Multi-Scale Attention) modules capture a wider spectrum of contextual information, enhancing the model's ability to recognize and localize targets. We also collected and prepared the GKA dataset, sourced from real train tracks. In tests on the GKA dataset and the publicly available NEU-DET dataset, our method outperforms general-purpose object detection algorithms. On the GKA dataset, our model achieves a mAP@0.5 of 97.6% and a mAP@0.5:0.95 of 83.9% with excellent inference speed. YOLO-O2E is an anomaly detection algorithm for railway fasteners that is applicable in practical industrial settings, addressing an industry gap in rail fastener detection.
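The abstract describes EFOV only as "adjusting the effective receptive field size of the convolutional kernels". As general background on what such an adjustment means (a generic sketch, not the authors' EFOV implementation; `receptive_field` is a hypothetical helper), the theoretical receptive field of a stack of convolutions can be computed layer by layer, and increasing the dilation rate is one standard way to enlarge it without adding parameters:

```python
def receptive_field(layers):
    """Theoretical receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride, dilation) tuples, applied in order.
    Returns the side length (in input pixels) of the region one output unit sees.
    """
    rf, jump = 1, 1  # jump = cumulative stride between adjacent output units
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1      # dilation enlarges the effective kernel
        rf += (k_eff - 1) * jump
        jump *= s
    return rf

# Three plain 3x3 convs vs. the same stack with dilation rates 1, 2, 3:
plain = receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)])    # → 7
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 3)])  # → 13
```

The dilated stack nearly doubles the receptive field at identical parameter count, which illustrates the kind of trade-off a receptive-field-oriented module targets.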
Pages: 1143-1161 (19 pages)