Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap

Cited by: 6
Authors
Chen, Yongwei [1 ,2 ]
Wang, Zihao [1 ]
Zou, Longkun [1 ]
Chen, Ke [1 ,3 ]
Jia, Kui [1 ,3 ]
Affiliations
[1] South China University of Technology, Guangzhou, China
[2] DexForce Co., Ltd., Shenzhen, China
[3] Peng Cheng Laboratory, Shenzhen, China
Source
COMPUTER VISION - ECCV 2022, PT XXXIII | 2022, Vol. 13693
Funding
National Natural Science Foundation of China
DOI
10.1007/978-3-031-19827-4_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Semantic analyses of object point clouds are largely driven by the release of benchmark datasets, including synthetic ones whose instances are sampled from object CAD models. However, models learned from synthetic data may not generalize to practical scenarios, where point clouds are typically incomplete, non-uniformly distributed, and noisy. Such a Simulation-to-Reality (Sim2Real) domain gap can be mitigated by domain-adaptation algorithms; however, we argue that generating synthetic point clouds through more physically realistic rendering is a powerful alternative, as it captures systematic non-uniform noise patterns. To this end, we propose an integrated scheme that combines physically realistic synthesis of object point clouds, obtained by rendering stereo images with speckle patterns projected onto CAD models, with a novel quasi-balanced self-training that achieves a more balanced data distribution through sparsity-driven selection of pseudo-labeled samples for long-tailed classes. Experimental results verify the effectiveness of our method and of both of its modules for unsupervised domain adaptation on point cloud classification, achieving state-of-the-art performance. Source code and the SpeckleNet synthetic dataset are available at https://github.com/Gorilla-Lab-SCUT/QS3.
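The quasi-balanced self-training idea can be illustrated with a minimal sketch: rather than applying one global confidence threshold, pseudo-labeled target samples are selected per class, with a quota that grows for sparser (long-tailed) classes. The quota rule and the function name `quasi_balanced_select` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quasi_balanced_select(probs, base_ratio=0.1, power=1.0):
    """Sketch of sparsity-driven pseudo-label selection.

    probs: (N, C) softmax outputs on unlabeled target samples.
    Keeps the most confident samples per predicted class, with a
    per-class quota that grows as the class gets sparser, so that
    long-tailed classes are not drowned out by head classes.
    """
    preds = probs.argmax(axis=1)          # hard pseudo labels
    conf = probs.max(axis=1)              # confidence per sample
    n_samples, n_classes = probs.shape
    counts = np.bincount(preds, minlength=n_classes).astype(float)

    selected = []
    for k in range(n_classes):
        idx = np.where(preds == k)[0]
        if idx.size == 0:
            continue
        # Sparsity-driven quota: rarer classes keep a larger fraction
        # of their confident pseudo-labels (assumed rule, for illustration).
        sparsity = 1.0 - counts[k] / counts.max()
        ratio = min(1.0, base_ratio * (1.0 + power * sparsity))
        quota = max(1, int(round(ratio * idx.size)))
        top = idx[np.argsort(-conf[idx])[:quota]]  # most confident first
        selected.append(top)
    return np.concatenate(selected)

# Example: 1000 target samples over 10 classes with a head-heavy prediction
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10)) + np.linspace(3, 0, 10)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
keep = quasi_balanced_select(probs)
print(f"selected {keep.size} pseudo-labeled samples")
```

The design intuition is that a single confidence threshold tends to reinforce head classes during self-training; letting the per-class keep-ratio grow with class sparsity counteracts that drift toward an imbalanced pseudo-label distribution.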
Pages: 728-745
Page count: 18