JM3D & JM3D-LLM: Elevating 3D Representation With Joint Multi-Modal Cues

Cited: 1
Authors
Ji, Jiayi [1 ,2 ]
Wang, Haowei [3 ]
Wu, Changli [1 ]
Ma, Yiwei [1 ]
Sun, Xiaoshuai [1 ]
Ji, Rongrong [1 ]
Affiliations
[1] Xiamen Univ, Key Lab Multimedia Trusted Percept & Efficient Com, Minist Educ China, Xiamen 361005, Peoples R China
[2] Natl Univ Singapore, Singapore 119077, Singapore
[3] Tencent, Youtu Lab, Shanghai 200000, Peoples R China
Funding
China Postdoctoral Science Foundation; National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Three-dimensional displays; Solid modeling; Point cloud compression; Visualization; Representation learning; Feature extraction; Large language models; Data models; Degradation; Contrastive learning; 3D representation learning; joint multi-modal alignment; large language model; structured multimodal organizer;
DOI
10.1109/TPAMI.2024.3523675
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
3D representation learning is of rising importance, pivotal in computer vision, autonomous driving, and robotics. However, a prevailing trend, which straightforwardly transfers 2D alignment strategies to the 3D domain, encounters three distinct challenges: (1) Information Degradation: aligning 3D data with only single-view 2D images and generic texts neglects the need for multi-view images and detailed subcategory texts. (2) Insufficient Synergy: these strategies align 3D representations to image and text features individually, hampering the overall optimization of 3D models. (3) Underutilization: the fine-grained information inherent in the learned representations is often not fully exploited, indicating a potential loss of detail. To address these issues, we introduce JM3D, a comprehensive approach integrating point cloud, text, and image. Key contributions include the Structured Multimodal Organizer (SMO), which enriches the vision-language representation with multiple views and hierarchical text, and the Joint Multi-modal Alignment (JMA), which combines language understanding with visual representation. Our advanced model, JM3D-LLM, marries 3D representation with large language models via efficient fine-tuning. Evaluations on ModelNet40 and ScanObjectNN establish JM3D's superiority, and the strong performance of JM3D-LLM further underscores the effectiveness of our representation-transfer approach.
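The JMA idea of aligning the point-cloud embedding to a single joint visual-language target, rather than to image and text features separately, can be sketched as an InfoNCE-style contrastive loss. This is a minimal illustration, not the paper's implementation: the fusion weight `w_img`, the temperature value, and the function names are assumptions introduced here.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each row to unit length."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def joint_contrastive_loss(pc, img, txt, w_img=0.5, temp=0.07):
    """InfoNCE-style loss aligning point-cloud embeddings `pc` with a
    fused image-text target (a hypothetical reading of joint alignment,
    in contrast to two separate image and text alignment terms).
    All inputs are (B, D) arrays of per-sample embeddings."""
    pc, img, txt = (l2_normalize(m) for m in (pc, img, txt))
    joint = l2_normalize(w_img * img + (1.0 - w_img) * txt)  # fused target
    logits = pc @ joint.T / temp                             # (B, B) similarities
    labels = np.arange(len(pc))                              # matched pairs on the diagonal

    def xent(lg):
        # numerically stable cross-entropy with the diagonal as ground truth
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # symmetric: point-cloud-to-joint and joint-to-point-cloud directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Aligning to one fused target forces the 3D encoder to satisfy both modalities simultaneously, which is the synergy the abstract contrasts with independent per-modality alignment.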
Pages: 2475-2492
Page count: 18