Multi-Modal 3D Shape Clustering with Dual Contrastive Learning

Cited by: 5
Authors
Lin, Guoting [1 ]
Zheng, Zexun [1 ]
Chen, Lin [1 ]
Qin, Tianyi [1 ]
Song, Jiahui [1 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 15
Funding
China Postdoctoral Science Foundation
Keywords
multi-modal clustering; unsupervised learning; 3D shapes; contrastive learning;
DOI
10.3390/app12157384
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
3D shape clustering is becoming an important research subject as 3D shapes are widely used in computer vision and multimedia. Since 3D shapes are generally available in multiple modalities, comprehensively exploiting these multi-modal properties to boost clustering performance has become a key issue for the 3D shape clustering task. Taking advantage of both multiple views and point clouds, this paper proposes the first multi-modal 3D shape clustering method, named the dual contrastive learning network (DCL-Net), to discover clustering partitions of unlabeled 3D shapes. First, a representation-level dual contrastive learning module is developed that simultaneously performs cross-view contrastive learning within the multi-view modality and cross-modal contrastive learning between the point cloud and multi-view modalities in the representation space, aiming to capture discriminative 3D shape features for clustering. Meanwhile, an assignment-level dual contrastive learning module is designed to further enforce the consistency of clustering assignments within the multi-view modality, as well as between the point cloud and multi-view modalities, thus yielding more compact clustering partitions. Experiments on two commonly used 3D shape benchmarks demonstrate the effectiveness of the proposed DCL-Net.
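The abstract describes two loss levels: contrastive learning on representations (cross-view and cross-modal) and on cluster assignments. Below is a minimal sketch, not the authors' released code, of how such a dual contrastive objective could be written in PyTorch; the NT-Xent-style loss, the simple averaging fusion of view features, and all tensor names (view_feat_1, pc_prob, etc.) are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def nt_xent(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # Contrast paired rows of a and b (both [N, D]); positives lie on the diagonal.
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def representation_loss(view_feat_1, view_feat_2, pc_feat):
    # Representation-level dual contrastive loss: cross-view (two rendered views of the
    # same shape) plus cross-modal (point-cloud feature vs. a fused view feature).
    fused_view = 0.5 * (view_feat_1 + view_feat_2)  # simple fusion, an assumption
    return nt_xent(view_feat_1, view_feat_2) + nt_xent(pc_feat, fused_view)

def assignment_loss(view_prob_1, view_prob_2, pc_prob):
    # Assignment-level dual contrastive loss: the same objective applied to soft cluster
    # assignments ([N, K] softmax outputs), contrasting the K cluster-probability columns
    # so that assignments stay consistent across views and between modalities.
    fused_view = 0.5 * (view_prob_1 + view_prob_2)
    return nt_xent(view_prob_1.t(), view_prob_2.t()) + nt_xent(pc_prob.t(), fused_view.t())

# Example with random tensors: 8 shapes, 128-D features, K = 4 clusters.
v1, v2, pc = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
p1, p2, pp = (F.softmax(torch.randn(8, 4), dim=1) for _ in range(3))
total_loss = representation_loss(v1, v2, pc) + assignment_loss(p1, p2, pp)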
Pages: 13