Improved foreign object tracking algorithm in coal for belt conveyor gangue selection robot with YOLOv7 and DeepSORT

Cited by: 7
Authors
Yang, Dengjie [1 ,2 ]
Miao, Changyun [2 ,3 ]
Liu, Yi [1 ,2 ]
Wang, Yimin [1 ,2 ]
Zheng, Yao [2 ,3 ]
Affiliations
[1] Tiangong Univ, Sch Mech Engn, Tianjin 300387, Peoples R China
[2] Tiangong Univ, Tianjin Key Lab Optoelect Detect Technol & Syst, Tianjin 300387, Peoples R China
[3] Tiangong Univ, Sch Elect & Informat Engn, Tianjin 300387, Peoples R China
Keywords
Deep learning; Mining; YOLOv7; DeepSORT; Multi-target tracking
DOI
10.1016/j.measurement.2024.114180
CLC classification
T [Industrial Technology]
Subject classification code
08
Abstract
Because foreign objects and coal differ only subtly in their physical characteristics, machine-vision tracking of foreign objects in coal achieves low accuracy and precision and cannot meet the demands of coal mine production. In this paper, we propose an enhanced you only look once version 7 (YOLOv7) and simple online and realtime tracking with a deep association metric (DeepSORT) algorithm for tracking foreign objects in coal. The YOLOv7 network model is enhanced by reducing the Backbone convolutional layers, introducing the context overlap and transition network (COTN) module, and adding a small-target detection layer. Concurrently, the DeepSORT tracking algorithm is refined by replacing its re-identification network with the MTL framework and by incorporating the occlusion-aware spatial attention (OSA) module into its foreign object tracking pipeline. Experimental results confirm the effectiveness of the proposed algorithm for foreign object detection and tracking: it attains a detection accuracy of 91.3% and a recall of 90.6%, together with a multiple object tracking accuracy (MOTA) of 76.1% and a multiple object tracking precision (MOTP) of 80.6%. Compared with the baseline DeepSORT tracker, this is an improvement of 6 percentage points in MOTA and 3.9 percentage points in MOTP.
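For reference, the MOTA and MOTP figures reported above follow the standard CLEAR MOT definitions. A minimal sketch of how they are computed; the error counts used below are illustrative only, not taken from the paper:

```python
def mota(fn, fp, idsw, gt):
    """CLEAR MOT accuracy: 1 minus the combined rate of the three error types.

    fn, fp, idsw, gt are totals over all frames: missed detections,
    false positives, identity switches, and ground-truth objects.
    """
    return 1.0 - (fn + fp + idsw) / gt

def motp(total_overlap, num_matches):
    """CLEAR MOT precision: mean localization quality of matched pairs.

    Localization is measured here as IoU overlap (higher is better),
    the convention under which a MOTP of 80.6% is read.
    """
    return total_overlap / num_matches

# Illustrative numbers (hypothetical, not from the paper):
# 1000 ground-truth boxes, 120 misses, 100 false positives, 19 ID switches
print(round(mota(120, 100, 19, 1000), 3))  # 0.761
# total IoU of 806.0 summed over 1000 matched detection-track pairs
print(round(motp(806.0, 1000), 3))         # 0.806
```

Note that MOTA penalizes all three error types equally and can even go negative when errors exceed ground-truth objects, while MOTP measures only how well matched boxes are localized; this is why the two metrics are always reported together.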
Pages: 12