3D Vehicle Detection and Tracking Integration Algorithm Based on Task Collaboration

Cited: 0
Authors
Cheng X. [1,3]
Zhou J.-M. [2]
Liu P.-Y. [2]
Wang H.-F. [1]
Xu Z.-G. [1]
Zhao X.-M. [1]
Affiliations
[1] School of Information Engineering, Chang'an University, Xi'an, Shaanxi
[2] School of Electronics and Control Engineering, Chang'an University, Xi'an, Shaanxi
[3] Traffic Management Research Institute of the Ministry of Public Security, Wuxi, Jiangsu
Source
Zhongguo Gonglu Xuebao/China Journal of Highway and Transport | 2023, Vol. 36, No. 09
Funding
National Natural Science Foundation of China
Keywords
lidar point cloud; task integration; traffic engineering; vehicle object detection; vehicle object tracking;
DOI
10.19721/j.cnki.1001-7372.2023.09.022
Abstract
In conventional object detection and tracking algorithms, the detector and the tracker work in a pipelined manner, so missed detections from the detection module degrade the performance of the tracking module. To address this problem, a joint detection and tracking algorithm can be constructed so that the two tasks reinforce each other and further improve detection and tracking accuracy. This paper proposes a joint task framework (3D Tracktor++) for vehicle object detection and tracking. Prior candidate regions carrying identity information are generated from the detection results of the previous frame, guiding the detector to perform bounding box regression in regions with a high probability of containing objects and to output track numbers directly. To avoid missing new objects entering the scene and to compensate for the deviation of the prior candidate regions, a supplementary candidate-region module is added. The prior and supplementary candidate regions are integrated, and vehicle positions and track numbers are output through RoI pooling, box regression, track inspection, and related steps. Experimental results on the KITTI dataset show that, compared with the single-task object detection method Voxel R-CNN, the proposed joint-task method achieves a higher average detection precision (AP3D), with an improvement of 2.75% on medium-difficulty samples. Compared with the baseline 3D multi-object tracking method AB3DMOT, MOTP and AMOTP increase by 3.59% and 0.77%, respectively. Compared with the conventional "detect-then-track" pipeline, the proposed algorithm is sound and effectively improves detection and tracking accuracy. © 2023 Xi'an Highway University. All rights reserved.
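The abstract describes a per-frame loop in which previous-frame detections serve as identity-carrying prior proposals, a supplementary proposal module covers newly entering vehicles, and RoI pooling, box regression, and track inspection yield boxes with track numbers. Below is a minimal Python sketch of that loop; the function names, the placeholder regression and proposal stubs, and the 0.5 score threshold are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the joint detection-and-tracking loop described in the abstract.
from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass
class Box3D:
    center: tuple                      # (x, y, z) in the lidar frame
    size: tuple                        # (l, w, h)
    yaw: float
    score: float = 0.0
    track_id: Optional[int] = None     # identity carried across frames

def prior_candidates(prev_boxes: List[Box3D]) -> List[Box3D]:
    # Reuse last-frame results as proposals, keeping their track IDs.
    return [replace(b, score=0.0) for b in prev_boxes]

def supplement_candidates(point_cloud) -> List[Box3D]:
    # Placeholder for the supplementary candidate-region module (e.g. a proposal head)
    # that catches new objects and compensates for drifted prior regions.
    return []

def regress_box(point_cloud, proposal: Box3D) -> Box3D:
    # Placeholder for RoI pooling + box regression on the point cloud.
    return replace(proposal, score=1.0)

def detect_and_track(point_cloud, prev_boxes: List[Box3D], next_id: int):
    proposals = prior_candidates(prev_boxes) + supplement_candidates(point_cloud)
    tracked: List[Box3D] = []
    for p in proposals:
        refined = regress_box(point_cloud, p)
        if refined.score < 0.5:         # track inspection: discard weak boxes
            continue
        if refined.track_id is None:    # proposal from the supplementary module
            refined.track_id = next_id  # assign a fresh identity
            next_id += 1
        tracked.append(refined)
    return tracked, next_id

# Example over a dummy point cloud: one prior box keeps its identity across frames.
frame = object()
boxes = [Box3D(center=(10.0, 3.0, -1.0), size=(4.5, 1.8, 1.6), yaw=0.1, track_id=1)]
boxes, next_id = detect_and_track(frame, boxes, next_id=2)
```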
Pages: 288-301
Page count: 13
References (33 in total)
  • [1] WENG X S, WANG J R, HELD D, et al., 3D multi-object tracking: A baseline and new evaluation metrics [C], IEEE. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10359-10366, (2021)
  • [2] CHIU H K, LI J, AMBRUS R, et al., Probabilistic 3D multi-modal, multi-object tracking for autonomous driving [C], IEEE. 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 14227-14233, (2021)
  • [3] WANG Hai, LI Yang, CAI Ying-feng, et al., 3D real-time vehicle tracking based on lidar [J], Automotive Engineering, 43, 7, pp. 1013-1021, (2021)
  • [4] CHENG X, ZHOU J M, LIU P Y, et al., 3D vehicle object tracking algorithm based on bounding box similarity measurement [J], IEEE Transactions on Intelligent Transportation Systems, 99, pp. 1-11, (2023)
  • [5] ZHANG W W, ZHOU H, SUN S Y, et al., Robust multi-modality multi-object tracking [C], IEEE. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2365-2374, (2020)
  • [6] SHENOI A, PATEL M, GWAK J, et al., JRMOT: A real-time 3D multi-object tracker and a new large-scale dataset [C], IEEE. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10335-10342, (2021)
  • [7] ZHAI Guang-yao, LiDAR-based 3D object tracking algorithms [D], (2021)
  • [8] CAI J R, XU M Z, LI W, et al., MeMOT: Multi-object tracking with memory [C], IEEE. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8080-8090, (2022)
  • [9] ZHOU X Y, YIN T W, KOLTUN V, et al., Global tracking transformers [C], IEEE. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8761-8770, (2022)