ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds

Cited: 0
Authors
Jia, Xiangze [1 ]
Zhou, Hui [2 ]
Zhu, Xinge [2 ]
Guo, Yandong [3 ]
Zhang, Ji [1 ,4 ]
Ma, Yuexin [5 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Nanjing, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] OPPO Res Inst, Chengdu, Peoples R China
[4] Zhejiang Lab, Hangzhou, Peoples R China
[5] ShanghaiTech Univ, Shanghai, Peoples R China
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a novel self-supervised motion estimator for LiDAR-based autonomous driving via BEV representation. Unlike the commonly adopted self-supervised strategies that enforce data-level structure consistency, we predict scene motion via feature-level consistency between pillars in consecutive frames, which eliminates the effect of noise points and view-changing point clouds in dynamic scenes. Specifically, we propose a Soft Discriminative Loss that provides the network with more pseudo-supervised signals to learn discriminative and robust features in a contrastive-learning manner. We also propose a Gated Multi-frame Fusion block that automatically learns valid compensation between point cloud frames to enhance feature extraction. Finally, pillar association is proposed to predict pillar correspondence probabilities based on feature distance, from which scene motion is further derived. Extensive experiments show the effectiveness and superiority of our ContrastMotion on both scene-flow and motion-prediction tasks.
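The pillar-association step described in the abstract (correspondence probabilities from feature distance, then motion) can be sketched roughly as below. This is a minimal illustration only: the function name, array shapes, the softmax-over-negative-squared-distance form, and the `temperature` parameter are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def pillar_association(feat_t, feat_t1, pos_t, pos_t1, temperature=1.0):
    """Soft pillar association (illustrative sketch, not the paper's API).

    feat_t:  (N, C) pillar features at frame t
    feat_t1: (M, C) pillar features at frame t+1
    pos_t:   (N, 2) BEV pillar centers at frame t
    pos_t1:  (M, 2) BEV pillar centers at frame t+1
    Returns correspondence probabilities (N, M) and per-pillar
    BEV motion vectors (N, 2).
    """
    # Pairwise squared feature distances between the two frames: (N, M)
    d = ((feat_t[:, None, :] - feat_t1[None, :, :]) ** 2).sum(-1)
    # Softmax over frame-(t+1) pillars turns distances into
    # correspondence probabilities (closer features -> higher prob).
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # Expected matched position under p; motion is the displacement
    # of each frame-t pillar center to that expected position.
    matched = p @ pos_t1
    motion = matched - pos_t
    return p, motion
```

With a low temperature and distinctive features, `p` approaches a hard assignment and `motion` recovers the per-pillar displacement between frames; higher temperatures yield softer, smoother correspondences.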
Pages: 929-937
Page count: 9
Related papers (50 in total)
  • [41] Self-Supervised Learning for Heterogeneous Audiovisual Scene Analysis
    Hu, Di
    Wang, Zheng
    Nie, Feiping
    Wang, Rong
    Li, Xuelong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 3534 - 3545
  • [42] Self-supervised Learning of LiDAR Odometry for Robotic Applications
    Nubert, Julian
    Khattak, Shehryar
    Hutter, Marco
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 9601 - 9607
  • [43] Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation
    Zhao, Wenbo
    Liu, Xianming
    Zhong, Zhiwei
    Jiang, Junjun
    Gao, Wei
    Li, Ge
    Ji, Xiangyang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 1998 - 2006
  • [44] Label-efficient semantic segmentation of large-scale industrial point clouds using weakly supervised learning
    Yin, Chao
    Yang, Bo
    Cheng, Jack C. P.
    Gan, Vincent J. L.
    Wang, Boyu
    Yang, Ji
    AUTOMATION IN CONSTRUCTION, 2023, 148
  • [45] FGSS: Federated global self-supervised framework for large-scale unlabeled data
    Zhang, Chen
    Xie, Zixuan
    Yu, Bin
    Wen, Chao
    Xie, Yu
    APPLIED SOFT COMPUTING, 2023, 143
  • [46] SuP-SLiP: Subsampled Processing of Large-scale Static LIDAR Point Clouds
    Jaisankar, Vijay
    Sreevalsan-Nair, Jaya
    PROCEEDINGS OF THE 3RD ACM SIGSPATIAL INTERNATIONAL WORKSHOP ON SEARCHING AND MINING LARGE COLLECTIONS OF GEOSPATIAL DATA, GEOSEARCH 2024, 2024, : 40 - 47
  • [47] Self-Supervised Learning for 3-D Point Clouds Based on a Masked Linear Autoencoder
    Yang, Hongxin
    Wang, Ruisheng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61 : 1 - 11
  • [48] Spatio-temporal Self-Supervised Representation Learning for 3D Point Clouds
    Huang, Siyuan
    Xie, Yichen
    Zhu, Song-Chun
    Zhu, Yixin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6515 - 6525
  • [49] Large-Scale Volumetric Scene Reconstruction using LiDAR
    Kuehner, Tilman
    Kuemmerle, Julius
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 6261 - 6267
  • [50] Learning Semantic Segmentation of Large-Scale Point Clouds With Random Sampling
    Hu, Qingyong
    Yang, Bo
    Xie, Linhai
    Rosa, Stefano
    Guo, Yulan
    Wang, Zhihua
    Trigoni, Niki
    Markham, Andrew
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (11) : 8338 - 8354