CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D Recognition

Cited by: 23
Authors
Hegde, Deepti [1 ]
Valanarasu, Jeya Maria Jose [1 ]
Patel, Vishal M. [1 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD 21205 USA
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW | 2023
DOI
10.1109/ICCVW60793.2023.00217
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Vision-language models like CLIP have been widely adopted for various tasks due to their impressive zero-shot capabilities. However, CLIP is not suitable for extracting 3D geometric features, as it was trained only on images and text with natural language supervision. We address this limitation and propose a new framework termed CG3D (CLIP Goes 3D), in which a 3D encoder is learned to exhibit zero-shot capabilities. CG3D is trained with natural language supervision on triplets of point clouds, their corresponding rendered 2D images, and texts. To align the features in a multimodal embedding space, we apply a contrastive loss to the 3D features obtained from the 3D encoder and the visual and text features extracted from CLIP. We note that there is a distribution shift between the natural images used to train CLIP and the rendered 2D images in CG3D. Attempting to train the visual and text encoders to account for this shift results in catastrophic forgetting and a notable decrease in performance. To solve this, we employ prompt tuning, introducing trainable parameters in the input space to shift CLIP towards the 3D pre-training dataset used in CG3D. We extensively test our pre-trained CG3D framework and demonstrate its impressive capabilities in zero-shot recognition, open scene understanding, and retrieval tasks. Further, it also serves as strong initialization for fine-tuning on downstream 3D recognition tasks. Code: https://github.com/deeptibhegde/CLIPgoes-3D
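The multimodal alignment described in the abstract can be sketched as a symmetric contrastive (InfoNCE-style) loss that pulls each point cloud's feature toward the CLIP image and text features of the same object. The sketch below is a minimal illustration, not the authors' implementation: the encoders are stand-in linear layers, the feature dimension and temperature are assumed values, and the frozen CLIP features are simulated with random tensors.

```python
# Hypothetical sketch of CG3D-style contrastive alignment (assumed shapes
# and hyperparameters; stand-in encoders, not the actual CLIP/CG3D models).
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a, feat_b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired features."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature          # (B, B) cosine-similarity matrix
    targets = torch.arange(a.size(0))         # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

torch.manual_seed(0)
batch, feat_dim = 8, 512                      # assumed embedding size
point_encoder = torch.nn.Linear(1024, feat_dim)   # trainable stand-in 3D encoder
clip_image_feats = torch.randn(batch, feat_dim)   # frozen CLIP visual features (simulated)
clip_text_feats = torch.randn(batch, feat_dim)    # frozen CLIP text features (simulated)

point_feats = point_encoder(torch.randn(batch, 1024))
loss = (contrastive_loss(point_feats, clip_image_feats) +
        contrastive_loss(point_feats, clip_text_feats))
loss.backward()  # gradients flow only into the trainable 3D encoder
```

In the full method the image and text encoders stay frozen and only learnable prompt tokens in their input space (plus the 3D encoder) receive gradients; here that is reduced to keeping the CLIP-side features as fixed tensors.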
Pages: 2020-2030 (11 pages)