Enhancing Point Cloud Semantic Segmentation with Curriculum Learning for Occlusion Handling

Cited by: 0
Authors
Choi, Seokwon [1 ]
Park, Minseong [2 ]
Cho, Minho [2 ]
Oh, Jangwon [2 ]
Kim, Kayeon [1 ]
Kim, Euntai [2 ]
Affiliations
[1] Yonsei Univ, Dept Vehicle Convergence Engn, Seoul 03722, South Korea
[2] Yonsei Univ, Dept Elect & Elect Engn, Seoul 03722, South Korea
Source
2024 24TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS, ICCAS 2024 | 2024
Keywords
Point Cloud Semantic Segmentation; Data Augmentation; Occlusion Handling; Autonomous Robot;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Point cloud semantic segmentation is a critical task in 3D computer vision, particularly in applications such as autonomous robots. However, occlusions present a significant challenge, leading to incomplete data and reduced segmentation accuracy. In this paper, we propose a novel method that combines artificial occlusion with curriculum learning to enhance the robustness of segmentation models. Using the SPVCNN [2] model and the SemanticKITTI [3] dataset, we demonstrate that our approach significantly improves performance. Our curriculum learning strategy gradually increases the intensity and frequency of occlusions during training, enabling the model to better handle occluded regions. Experimental results show that our method achieves an improvement over the baseline method. These findings underscore the effectiveness of our approach in improving the accuracy and robustness of point cloud semantic segmentation.
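The curriculum idea described in the abstract (gradually increasing how often and how severely scans are occluded during training) can be pictured with a short sketch. This is a minimal illustrative sketch only: the linear schedule, the azimuth-sector occlusion model, and every name and parameter (occlusion_schedule, apply_artificial_occlusion, max_prob, max_sectors, sector_width_deg) are assumptions for illustration, not the authors' published implementation.

import numpy as np

def occlusion_schedule(epoch, total_epochs, max_prob=0.5, max_sectors=4):
    # Linearly ramp up occlusion frequency (probability per scan) and
    # intensity (number of occluded azimuth sectors) as training progresses.
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    prob = progress * max_prob
    n_sectors = 1 + int(round(progress * (max_sectors - 1)))
    return prob, n_sectors

def apply_artificial_occlusion(points, labels, prob, n_sectors,
                               sector_width_deg=20.0, rng=None):
    # points: (N, 4) array of x, y, z, intensity; labels: (N,) semantic ids.
    # Drop all points inside randomly chosen azimuth sectors to mimic
    # regions hidden behind occluding objects in a LiDAR scan.
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > prob:
        return points, labels  # leave this scan untouched
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))  # [-180, 180)
    keep = np.ones(len(points), dtype=bool)
    for _ in range(n_sectors):
        start = rng.uniform(-180.0, 180.0 - sector_width_deg)
        keep &= ~((azimuth >= start) & (azimuth < start + sector_width_deg))
    return points[keep], labels[keep]

# Example use inside a training loop, before feeding a scan to the network:
# prob, n_sectors = occlusion_schedule(epoch, total_epochs)
# points, labels = apply_artificial_occlusion(points, labels, prob, n_sectors)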
Pages: 100-101
Number of pages: 2
References
3 records in total
[1]   SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences [J].
Behley, Jens ;
Garbade, Martin ;
Milioto, Andres ;
Quenzel, Jan ;
Behnke, Sven ;
Stachniss, Cyrill ;
Gall, Juergen .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :9296-9306
[2]   Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution [J].
Tang, Haotian ;
Liu, Zhijian ;
Zhao, Shengyu ;
Lin, Yujun ;
Lin, Ji ;
Wang, Hanrui ;
Han, Song .
COMPUTER VISION - ECCV 2020, PT XXVIII, 2020, 12373 :685-702
[3]   Xiao, A. R., et al.
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS), 2022