Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment

Cited by: 8
Authors
Ma, Tingchen [1 ,2 ]
Jiang, Guolai [1 ]
Ou, Yongsheng [3 ]
Xu, Sheng [4 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, Shenzhen Coll Adv Technol, Shenzhen 518055, Peoples R China
[3] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[4] Chinese Acad Sci, Shenzhen Inst Adv Technol, Guangdong Prov Key Lab Robot & Intelligent Syst, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
mobile robots; navigation; lidar SLAM; multi-object tracking; dynamic object detection; MONOCULAR SLAM; ROBUST; ASSIGNMENT;
DOI
10.1017/S0263574723001868
Chinese Library Classification
TP24 [Robotics];
Discipline codes
080202 ; 1405 ;
Abstract
Simultaneous localization and mapping (SLAM) systems based on the rigid-scene assumption cannot achieve reliable positioning and mapping in complex environments with many moving objects. To solve this problem, this paper proposes a novel dynamic multi-object lidar odometry (MLO) system based on semantic object recognition. The proposed system enables reliable localization of both the robot and semantic objects, as well as the generation of long-term static maps in complex dynamic scenes. For ego-motion estimation, the system extracts environmental features subject to both semantic and geometric consistency constraints, so the filtered features are robust to semantically movable and unknown dynamic objects. In addition, we propose a new least-squares estimator that uses geometric object points and semantic box planes to perform the multi-object tracking (SGF-MOT) task robustly and precisely. In the mapping module, dynamic semantic objects are detected using the absolute trajectory tracking list. By using static semantic objects and environmental features, the system eliminates accumulated localization errors and produces a purely static map. Experiments on the public KITTI dataset show that the proposed MLO system provides more accurate and robust object tracking and better real-time localization accuracy in complex scenes than existing methods.
Pages: 891-910 (20 pages)