Contrastive Multimodal Fusion with TupleInfoNCE

Cited by: 36
Authors
Liu, Yunze [1 ,7 ]
Fan, Qingnan [3 ]
Zhang, Shanghang [4 ]
Dong, Hao [5 ,6 ,8 ]
Funkhouser, Thomas [2 ]
Yi, Li [1 ,2 ]
Affiliations
[1] Tsinghua Univ, IIIS, Beijing, Peoples R China
[2] Google Res, Mountain View, CA USA
[3] Stanford Univ, Stanford, CA 94305 USA
[4] Univ Calif Berkeley, Berkeley, CA USA
[5] Peking Univ, CS Dept, CFCS, Beijing, Peoples R China
[6] Peking Univ, AIIT, Beijing, Peoples R China
[7] Xidian Univ, Xian, Peoples R China
[8] Peng Cheng Lab, Shenzhen, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021
DOI
10.1109/ICCV48922.2021.00079
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a method for representation learning on multimodal data using contrastive losses. A traditional approach is to contrast different modalities against one another to learn the information they share. However, that approach can fail to capture the complementary synergies between modalities that might be useful for downstream tasks. Another approach is to concatenate all the modalities into a tuple and then contrast positive and negative tuple correspondences. However, that approach can come to rely only on the stronger modalities while ignoring the weaker ones. To address these issues, we propose a novel contrastive learning objective, TupleInfoNCE. It contrasts tuples not only according to positive and negative correspondences, but also against new negative tuples composed from modalities describing different scenes. Training with these additional negatives encourages the model to examine the correspondences among modalities within the same tuple, ensuring that weak modalities are not ignored. We provide a theoretical justification based on mutual information for why this approach works, and we propose a sample optimization algorithm that generates positive and negative samples to maximize training efficacy. We find that TupleInfoNCE significantly outperforms the previous state of the art on three different downstream tasks.
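As a concrete illustration of the objective the abstract describes, below is a minimal PyTorch-style sketch of a TupleInfoNCE-like loss. This is not the authors' reference implementation: the function name tuple_info_nce, the tensor shapes, and the assumption that each multimodal tuple has been fused into a single embedding (with K "disturbed" negative tuples precomputed per anchor by swapping one modality with the corresponding modality from a different scene) are all illustrative assumptions.

import torch
import torch.nn.functional as F

def tuple_info_nce(anchors, positives, disturbed, temperature=0.07):
    """Sketch of a TupleInfoNCE-style loss (names and shapes hypothetical).

    anchors:   (B, D) fused embeddings of the original multimodal tuples
    positives: (B, D) fused embeddings of augmented views of the same tuples
    disturbed: (B, K, D) embeddings of K disturbed negative tuples per anchor,
               each built by swapping one modality with the corresponding
               modality from a different scene
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    disturbed = F.normalize(disturbed, dim=-1)

    # Positive logits: agreement between each anchor and its own positive.
    pos = (anchors * positives).sum(-1, keepdim=True) / temperature   # (B, 1)

    # Ordinary negatives: positives belonging to other scenes in the batch;
    # mask out the diagonal, which duplicates the positive pair.
    batch_neg = anchors @ positives.t() / temperature                 # (B, B)
    mask = torch.eye(len(anchors), dtype=torch.bool, device=anchors.device)
    batch_neg = batch_neg.masked_fill(mask, float('-inf'))

    # Disturbed negatives: tuples whose modalities no longer all describe the
    # same scene; rejecting them requires attending to every modality.
    dist_neg = torch.einsum('bd,bkd->bk', anchors, disturbed) / temperature  # (B, K)

    logits = torch.cat([pos, batch_neg, dist_neg], dim=1)
    labels = torch.zeros(len(anchors), dtype=torch.long, device=anchors.device)
    return F.cross_entropy(logits, labels)

The disturbed negatives are what distinguish this from plain InfoNCE: each one keeps all but one modality aligned with the anchor's scene, so the model can only tell it apart by attending to the swapped, possibly weaker, modality.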
Pages: 734-743
Page count: 10