CV-MOS: A Cross-View Model for Motion Segmentation

Cited by: 0
Authors
Tang, Xiaoyu [1 ,2 ]
Chen, Zeyu [1 ,2 ]
Cheng, Jintao [1 ,2 ]
Chen, Xieyuanli [3 ]
Wu, Jin [4 ]
Xue, Bohuan [5 ]
Affiliations
[1] South China Normal Univ, Fac Engn, Sch Elect & Informat Engn, Foshan 528225, Guangdong, Peoples R China
[2] South China Normal Univ, Xingzhi Coll, Guangzhou 510000, Guangdong, Peoples R China
[3] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[5] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point cloud compression; Semantics; Three-dimensional displays; Feature extraction; Laser radar; Periodic structures; Motion segmentation; Autonomous driving; cross view; LiDAR motion segmentation;
DOI
10.1109/TIM.2024.3458036
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
In autonomous driving, accurately distinguishing between static and moving objects is crucial for the autonomous driving system. When performing the moving object segmentation (MOS) task, effectively leveraging motion information from objects is a primary challenge in improving the recognition of moving objects. Previous methods utilized either range view (RV) or bird's eye view (BEV) residual maps to capture motion information. Unlike these approaches, we propose combining RV and BEV residual maps to jointly exploit a greater share of the available motion information. Thus, we introduce CV-MOS, a cross-view model for moving object segmentation. As a key novelty, we decouple spatial-temporal information by capturing motion from BEV and RV residual maps and generating semantic features from range images, which serve as moving-object guidance for the motion branch. Our direct and unique design maximizes the use of range images and RV and BEV residual maps, significantly enhancing the performance of the LiDAR-based MOS task. Our method achieved leading IoU scores of 77.5% and 79.2% on the validation and test sets of the SemanticKITTI dataset, respectively. In particular, CV-MOS demonstrates state-of-the-art (SOTA) performance on various datasets. The CV-MOS implementation is available at https://github.com/SCNU-RISLAB/CV-MOS.
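The abstract describes motion cues captured through residual maps between consecutive LiDAR scans. As a minimal illustrative sketch (not the paper's exact formulation; the function name `residual_map` and its arguments are assumptions for illustration), a normalized range-view residual can be computed as the per-pixel relative range difference between a current range image and a past range image already compensated for ego-motion:

```python
import numpy as np

def residual_map(range_curr, range_prev, eps=1e-6):
    """Normalized per-pixel range difference between two aligned range images.

    Both inputs are (H, W) float arrays of per-pixel ranges in metres, with
    values <= eps marking pixels with no LiDAR return. The previous scan is
    assumed to already be transformed into the current sensor frame and
    re-projected, so remaining differences reflect object motion rather
    than ego-motion. (Illustrative sketch, not the CV-MOS implementation.)
    """
    valid = (range_curr > eps) & (range_prev > eps)
    res = np.zeros_like(range_curr)
    res[valid] = np.abs(range_curr[valid] - range_prev[valid]) / range_curr[valid]
    return res
```

In such a map, pixels belonging to static structure yield values near zero, while pixels on moving objects produce larger residuals; stacking several of these maps over time gives a network a temporal motion signal alongside the semantic range image.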
Pages: 10