MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis

Cited by: 545
Authors
Hazarika, Devamanyu [1 ]
Zimmermann, Roger [1 ]
Poria, Soujanya [2 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
[2] Singapore Univ Technol & Design, ISTD, Singapore, Singapore
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Keywords
multimodal sentiment analysis; multimodal representation learning; fusion network
DOI
10.1145/3394171.3413678
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.
Pages: 1122-1131
Page count: 10
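The abstract describes MISA's core idea: each modality is projected into a shared (modality-invariant) subspace and a private (modality-specific) subspace, and both sets of projections are then fused for prediction. The Python (PyTorch) snippet below is a minimal illustrative sketch of that two-subspace projection, not the authors' implementation; the layer sizes, feature dimensions, MLP fusion head, and the omission of the paper's similarity/difference/reconstruction losses are all simplifying assumptions.

import torch
import torch.nn as nn


class TwoSubspaceModel(nn.Module):
    """Sketch: project each modality into a shared (invariant) and a
    private (specific) subspace, then fuse all projections for prediction."""

    def __init__(self, in_dims, hidden=128, num_outputs=1):
        super().__init__()
        # Map each modality's features into a common hidden size.
        self.project = nn.ModuleDict(
            {m: nn.Linear(d, hidden) for m, d in in_dims.items()})
        # One private encoder per modality (modality-specific subspace).
        self.private = nn.ModuleDict(
            {m: nn.Linear(hidden, hidden) for m in in_dims})
        # A single encoder shared by all modalities (modality-invariant subspace).
        self.shared = nn.Linear(hidden, hidden)
        # Fusion over the concatenated invariant + specific vectors
        # (the paper fuses with a Transformer; a plain MLP keeps this short).
        self.fusion = nn.Sequential(
            nn.Linear(2 * len(in_dims) * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_outputs))

    def forward(self, feats):
        invariant, specific = [], []
        for m, x in feats.items():
            h = torch.relu(self.project[m](x))
            invariant.append(self.shared(h))     # commonalities across modalities
            specific.append(self.private[m](h))  # traits unique to this modality
        fused = torch.cat(invariant + specific, dim=-1)
        return self.fusion(fused)


# Hypothetical utterance-level feature sizes for text, audio, and video.
model = TwoSubspaceModel({"text": 768, "audio": 74, "video": 47})
batch = {"text": torch.randn(8, 768),
         "audio": torch.randn(8, 74),
         "video": torch.randn(8, 47)}
score = model(batch)   # shape (8, 1): one sentiment score per example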