Contrastive Multimodal Fusion with TupleInfoNCE

Cited by: 36
Authors
Liu, Yunze [1 ,7 ]
Fan, Qingnan [3 ]
Zhang, Shanghang [4 ]
Dong, Hao [5 ,6 ,8 ]
Funkhouser, Thomas [2 ]
Yi, Li [1 ,2 ]
Affiliations
[1] Tsinghua Univ, IIIS, Beijing, Peoples R China
[2] Google Res, Mountain View, CA USA
[3] Stanford Univ, Stanford, CA 94305 USA
[4] Univ Calif Berkeley, Berkeley, CA USA
[5] Peking Univ, CS Dept, CFCS, Beijing, Peoples R China
[6] Peking Univ, AIIT, Beijing, Peoples R China
[7] Xidian Univ, Xian, Peoples R China
[8] Peng Cheng Lab, Shenzhen, Peoples R China
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.00079
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a method for representation learning of multimodal data using contrastive losses. A traditional approach is to contrast different modalities to learn the information shared among them. However, that approach could fail to learn the complementary synergies between modalities that might be useful for downstream tasks. Another approach is to concatenate all the modalities into a tuple and then contrast positive and negative tuple correspondences. However, that approach could consider only the stronger modalities while ignoring the weaker ones. To address these issues, we propose a novel contrastive learning objective, TupleInfoNCE. It contrasts tuples based not only on positive and negative correspondences, but also by composing new negative tuples using modalities describing different scenes. Training with these additional negatives encourages the learning model to examine the correspondences among modalities in the same tuple, ensuring that weak modalities are not ignored. We provide a theoretical justification based on mutual information for why this approach works, and we propose a sample optimization algorithm to generate positive and negative samples that maximize training efficacy. We find that TupleInfoNCE significantly outperforms the previous state of the art on three different downstream tasks.
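The abstract describes the TupleInfoNCE objective in words; the following is a minimal PyTorch sketch (not the authors' released code) of how such a loss could be written. The function name tuple_info_nce, the tensor shapes, and the temperature tau are illustrative assumptions. The key point is that "disturbed" tuples, in which one modality has been swapped in from a different scene, enter the softmax denominator alongside ordinary negative tuples.

import torch
import torch.nn.functional as F

def tuple_info_nce(anchor, positive, negatives, disturbed, tau=0.07):
    # Hypothetical sketch of a TupleInfoNCE-style loss over tuple embeddings.
    #   anchor:    (B, D)    embeddings of the query tuples
    #   positive:  (B, D)    embeddings of the corresponding (augmented) tuples
    #   negatives: (B, K, D) embeddings of non-corresponding tuples
    #   disturbed: (B, M, D) embeddings of tuples with one modality swapped
    #              in from a different scene
    anchor    = F.normalize(anchor, dim=-1)
    positive  = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    disturbed = F.normalize(disturbed, dim=-1)

    # Cosine-similarity logits, scaled by the temperature.
    pos_logit  = (anchor * positive).sum(-1, keepdim=True) / tau        # (B, 1)
    neg_logits = torch.einsum('bd,bkd->bk', anchor, negatives) / tau    # (B, K)
    dis_logits = torch.einsum('bd,bmd->bm', anchor, disturbed) / tau    # (B, M)

    # The disturbed tuples are treated as extra negatives in the softmax.
    logits = torch.cat([pos_logit, neg_logits, dis_logits], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

# Example call with random embeddings (batch 4, 3 negatives, 2 disturbed, dim 128):
# loss = tuple_info_nce(torch.randn(4, 128), torch.randn(4, 128),
#                       torch.randn(4, 3, 128), torch.randn(4, 2, 128))

Because each disturbed tuple differs from the anchor in only one modality, the encoder can separate it from the positive only by attending to that modality, which is the mechanism the abstract credits for keeping weak modalities from being ignored.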
Pages: 734-743
Page count: 10