Meaningful Multimodal Emotion Recognition Based on Capsule Graph Transformer Architecture

Times cited: 0
Authors
Filali, Hajar [1 ,2 ]
Boulealam, Chafik [1 ]
El Fazazy, Khalid [1 ]
Mahraz, Adnane Mohamed [1 ]
Tairi, Hamid [1 ]
Riffi, Jamal [1 ]
Affiliations
[1] Sidi Mohamed Ben Abdellah Univ, Fac Sci Dhar El Mahraz, Dept Comp Sci, LISAC, Fes 30000, Morocco
[2] ISGA, Lab Innovat Management & Engn Enterprise LIMITE, Fes 30000, Morocco
Keywords
emotion recognition; deep learning; graph convolutional network; capsule network; vision transformer; meaningful neural network (MNN); multimodal architecture
DOI
10.3390/info16010040
Chinese Library Classification
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
The development of emotionally intelligent computers depends on emotion recognition from richer multimodal inputs, such as text, speech, and visual cues, since the modalities complement one another. Although complex cross-modal relationships have been shown to be effective for emotion recognition, they remain largely unexplored. Previous work on learning multimodal representations for emotion classification has relied mainly on fusion mechanisms that simply concatenate information, rather than fully exploiting the benefits of deep learning. In this paper, a deep multimodal emotion model is proposed that uses the meaningful neural network (MNN) to learn meaningful multimodal representations while classifying data. Specifically, the proposed model combines modality-specific encoders: a graph convolutional network for the acoustic modality, a capsule network for the textual modality, and a vision transformer for the visual modality. The concatenated feature vectors produced by these encoders are then fed to the MNN, used here as a methodological innovation, to produce the final predictions. Extensive experiments show that the proposed approach yields more accurate multimodal emotion recognition, achieving state-of-the-art accuracies of 69% and 56% on two public datasets, MELD and MOSEI, respectively.
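The fusion scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the three modality encoders (the paper's GCN, capsule network, and vision transformer) are replaced by hypothetical stubs that emit fixed-size embeddings, and the MNN classifier is approximated by a single linear-softmax layer over the 7 MELD emotion classes. All function names and dimensions here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's branches: a graph convolutional
# network (acoustic), a capsule network (textual), and a vision
# transformer (visual). Each stub just returns a 128-d embedding.
def encode_acoustic(x):
    return rng.standard_normal(128)

def encode_text(x):
    return rng.standard_normal(128)

def encode_visual(x):
    return rng.standard_normal(128)

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Late fusion: concatenate the three modality embeddings and classify.
# MELD distinguishes 7 emotions (anger, disgust, fear, joy, neutral,
# sadness, surprise).
NUM_CLASSES = 7
fused = np.concatenate([encode_acoustic(None),
                        encode_text(None),
                        encode_visual(None)])     # shape: (384,)
W = rng.standard_normal((NUM_CLASSES, fused.size)) * 0.01
b = np.zeros(NUM_CLASSES)
probs = softmax(W @ fused + b)                    # class probabilities
print(probs.shape, float(probs.sum()))
```

In the actual model, the linear-softmax head would be replaced by the MNN, which consumes the previously generated vector parameters to produce the predictions.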
Pages
22