Soft Voting Strategy for Multi-Modal Emotion Recognition Using Deep-Learning Facial Images and EEG

Cited by: 1
Authors
Chinta, Uma [1 ]
Kalita, Jugal [1 ]
Atyabi, Adham [1 ]
Affiliations
[1] Univ Colorado, Dept Comp Sci, Colorado Springs, CO 80907 USA
Source
2023 IEEE 13TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE, CCWC | 2023
Keywords
EEG; feature extraction; emotion analysis; multi-modal integration; Gated Recurrent Unit; fusion
DOI
10.1109/CCWC57344.2023.10099070
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Emotion recognition is an important factor in social communication and has a wide range of applications, from retail to healthcare. In psychology, emotion recognition focuses on emotional states conveyed through non-verbal visual and auditory cues; it is essential to the human ability to associate meaning with events rather than treating them as mere facts. Studies of emotion recognition often gather data in response to non-verbal cues using modalities such as eye tracking, electroencephalography (EEG), and facial video, and build classification models capable of differentiating responses to various emotions and cues. The accuracy of these models largely depends on the feature representation and on how well the chosen features magnify the differences between the patterns of different emotions. Single-modal feature extraction methods are limited in capturing between-group differences and often result in reduced classification performance. To address this problem, this paper proposes a multi-modal approach to representing responses to emotional cues that combines EEG recordings and facial video data. The study utilizes the DEAP dataset, which contains frontal-face video recordings and EEG data from 22 participants. A novel deep neural network architecture with feature-level fusion is used to efficiently predict emotions from the EEG and facial video data. The experimental results indicate 97.5% accuracy in identifying facial expressions and categorizing them into two classes, arousal (class 0) and valence (class 1), surpassing the state of the art on the DEAP dataset.
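As a rough illustration of the pipeline the abstract describes, the sketch below combines a GRU-based EEG encoder and a small CNN face encoder through feature-level fusion, and adds a soft-voting step that averages per-model softmax probabilities, the strategy named in the title. This is a minimal sketch, not the authors' published architecture: the layer sizes, the 32-channel EEG input, and the 48x48 grayscale frame size are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """GRU encoder for multi-channel EEG; channels are the per-time-step features."""
    def __init__(self, n_channels=32, hidden=64):  # 32 channels assumed, as in DEAP
        super().__init__()
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)

    def forward(self, x):                # x: (batch, time, channels)
        _, h = self.gru(x)               # h: (1, batch, hidden) final hidden state
        return h.squeeze(0)              # (batch, hidden)

class FaceEncoder(nn.Module):
    """Small CNN embedding for single grayscale face frames (sizes illustrative)."""
    def __init__(self, feat=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat)

    def forward(self, x):                # x: (batch, 1, H, W)
        return self.fc(self.conv(x).flatten(1))

class FusionClassifier(nn.Module):
    """Feature-level fusion: concatenate modality embeddings, then classify."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.eeg = EEGEncoder()
        self.face = FaceEncoder()
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, eeg, face):
        fused = torch.cat([self.eeg(eeg), self.face(face)], dim=1)
        return self.head(fused)          # (batch, n_classes) logits

def soft_vote(logits_per_model):
    """Soft voting: average softmax probabilities across models, return argmax class."""
    probs = torch.stack([F.softmax(l, dim=1) for l in logits_per_model]).mean(dim=0)
    return probs.argmax(dim=1)

# Usage with random stand-in data (shapes are assumptions, not the paper's setup):
model = FusionClassifier()
eeg = torch.randn(4, 128, 32)        # batch of 4, 128 time steps, 32 EEG channels
face = torch.randn(4, 1, 48, 48)     # batch of 4 grayscale face frames
fused_logits = model(eeg, face)      # (4, 2) fused-feature logits
preds = soft_vote([fused_logits, torch.randn(4, 2)])  # vote with a second model's logits
```

The design choice worth noting is that the fusion happens on learned embeddings (feature level) before classification, while soft voting operates afterward on probability outputs; the paper's title and abstract suggest both stages, so the sketch shows each in its simplest form.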
Pages: 738-745
Page count: 8