Adversarial Unsupervised Domain Adaptation for 3D Semantic Segmentation with 2D Image Fusion of Dense Depth

Cited by: 0
Authors
Zhang, Xindan [1,2]
Li, Ying [1,2]
Sheng, Huankun [1,2]
Zhang, Xinnian [3]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Changchun, Peoples R China
[2] Jilin Univ, Minist Educ, Key Lab Symbol Computat & Knowledge Engn, Changchun, Peoples R China
[3] Ajou Univ, Grad Sch Informat & Commun Technol, Suwon, South Korea
Funding
This work was supported by the Natural Science Foundation of Jilin Province, China (20240101366JC).
DOI
10.1111/cgf.15250
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Unsupervised domain adaptation (UDA) is increasingly used for 3D point cloud semantic segmentation because it addresses the lack of labels in new domains. However, most existing UDA methods operate only on uni-modal data and are rarely applied to multi-modal data. We therefore propose a cross-modal UDA method for 3D semantic segmentation on multi-modal datasets that contain 3D point clouds and 2D images. Specifically, we first propose a Dual discriminator-based Domain Adaptation (Dd-bDA) module to improve adaptation between domains. Second, since depth information is robust to domain shifts and can supply additional detail for semantic segmentation, we further employ a Dense depth Feature Fusion (DdFF) module to extract image features with rich depth cues. We evaluate our model in four UDA scenarios: dataset-to-dataset (A2D2 → SemanticKITTI), day-to-night, country-to-country (USA → Singapore), and synthetic-to-real (VirtualKITTI → SemanticKITTI). In all settings, our method achieves significant improvements and surpasses state-of-the-art models.
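The record does not include the authors' code, but to make the abstract's two ideas concrete, below is a minimal PyTorch sketch: a dual-discriminator adversarial loss with one discriminator per modality (in the spirit of Dd-bDA) and a depth-aware fusion block (in the spirit of DdFF). All class names, feature dimensions, and loss formulations here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of dual-discriminator adversarial UDA plus dense-depth
# feature fusion. Shapes, names, and losses are assumptions for illustration;
# they are NOT taken from the paper.
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Predicts source (0) vs. target (1) domain from a feature vector."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

# One discriminator per modality: 2D image features and 3D point features.
disc_2d = DomainDiscriminator(feat_dim=64)
disc_3d = DomainDiscriminator(feat_dim=64)
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(src_2d, src_3d, tgt_2d, tgt_3d):
    """Discriminator loss (classify domains) and generator loss
    (push target features to look source-like in both modalities)."""
    zeros = lambda x: torch.zeros(x.size(0), 1, device=x.device)
    ones = lambda x: torch.ones(x.size(0), 1, device=x.device)

    # Discriminators see detached features so only they are updated here.
    d_loss = (bce(disc_2d(src_2d.detach()), zeros(src_2d)) +
              bce(disc_2d(tgt_2d.detach()), ones(tgt_2d)) +
              bce(disc_3d(src_3d.detach()), zeros(src_3d)) +
              bce(disc_3d(tgt_3d.detach()), ones(tgt_3d)))

    # Segmentation backbone is updated to fool both discriminators.
    g_loss = (bce(disc_2d(tgt_2d), zeros(tgt_2d)) +
              bce(disc_3d(tgt_3d), zeros(tgt_3d)))
    return d_loss, g_loss

class DenseDepthFusion(nn.Module):
    """Hypothetical stand-in for the DdFF idea: inject dense-depth cues
    into 2D image features via a learned gate, then project the result."""
    def __init__(self, rgb_dim: int, depth_dim: int, out_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(depth_dim, rgb_dim), nn.Sigmoid())
        self.proj = nn.Linear(rgb_dim + depth_dim, out_dim)

    def forward(self, rgb_feat, depth_feat):
        gated = rgb_feat * self.gate(depth_feat)  # depth-modulated RGB features
        return self.proj(torch.cat([gated, depth_feat], dim=-1))

# Toy usage with random per-point features (batch of 8, 64-dim per modality).
src_2d, src_3d = torch.randn(8, 64), torch.randn(8, 64)
tgt_2d, tgt_3d = torch.randn(8, 64), torch.randn(8, 64)
d_loss, g_loss = adversarial_losses(src_2d, src_3d, tgt_2d, tgt_3d)
fused = DenseDepthFusion(64, 64, 128)(src_2d, torch.randn(8, 64))
```

In a full pipeline, `d_loss` and `g_loss` would be optimized in alternation with the supervised segmentation loss on source data; the alternating update is the standard adversarial UDA recipe, assumed here rather than taken from the paper.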
Pages: 11