Emotion Recognition With Audio, Video, EEG, and EMG: A Dataset and Baseline Approaches

Cited by: 35
Authors
Chen, Jin [1 ]
Ro, Tony [2 ,3 ,4 ]
Zhu, Zhigang [1 ,5 ]
Affiliations
[1] CUNY, Comp Sci Dept, New York, NY 10031 USA
[2] CUNY, Grad Ctr, Program Psychol, New York, NY 10016 USA
[3] CUNY, Grad Ctr, Program Biol, New York, NY 10016 USA
[4] CUNY, Grad Ctr, Program Cognit Neurosci, New York, NY 10016 USA
[5] CUNY, Grad Ctr, Doctoral Program Comp Sci, New York, NY 10016 USA
Funding
U.S. National Science Foundation
Keywords
Electroencephalography; Feature extraction; Videos; Support vector machines; Physiology; Emotion recognition; Electromyography; Data collection; Signal; LSTM
DOI
10.1109/ACCESS.2022.3146729
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline classification code
0812
Abstract
This paper describes a new posed multimodal emotional dataset and compares human emotion classification across four modalities: audio, video, electromyography (EMG), and electroencephalography (EEG). Results are reported for several baseline approaches using various feature extraction techniques and machine-learning algorithms. First, we collected a dataset from 11 human subjects expressing six basic emotions and one neutral emotion. We then extracted features from each modality using principal component analysis (PCA), an autoencoder, a convolutional network, and mel-frequency cepstral coefficients (MFCC), some unique to individual modalities. A number of baseline models were applied to compare classification performance in emotion recognition, including k-nearest neighbors (KNN), support vector machines (SVM), random forest, a multilayer perceptron (MLP), a long short-term memory (LSTM) model, and a convolutional neural network (CNN). Our results show that bootstrapping the biosensor signals (i.e., EMG and EEG) can greatly improve emotion classification performance by reducing noise. For these biosensor signals, the best classification results were obtained with a traditional KNN, whereas audio and image sequences of human emotions were better classified using an LSTM.
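As a concrete illustration of the biosensor pipeline the abstract describes, below is a minimal sketch of one common reading of "bootstrapping" EEG/EMG trials: synthesizing extra training samples by averaging randomly resampled same-class trials (which attenuates zero-mean sensor noise), followed by PCA features and a KNN baseline. The averaging interpretation, the toy random data, and all parameter values (k, n_new_per_class, the component and neighbor counts) are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def bootstrap_average(X, y, n_new_per_class, k=5):
    """Synthesize trials by averaging k randomly drawn same-class trials;
    averaging attenuates zero-mean sensor noise (assumed interpretation)."""
    X_new, y_new = [], []
    for label in np.unique(y):
        cls = X[y == label]
        for _ in range(n_new_per_class):
            idx = rng.choice(len(cls), size=k, replace=True)
            X_new.append(cls[idx].mean(axis=0))
            y_new.append(label)
    return np.vstack([X, np.array(X_new)]), np.concatenate([y, np.array(y_new)])

# Toy stand-in for flattened EEG/EMG trials: (n_trials, n_features).
X = rng.standard_normal((140, 512))
y = rng.integers(0, 7, size=140)  # six basic emotions + neutral

# Split first so held-out trials are never mixed into bootstrap averages.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
X_tr, y_tr = bootstrap_average(X_tr, y_tr, n_new_per_class=50, k=5)

# PCA features followed by a KNN baseline, mirroring the paper's pipeline.
pca = PCA(n_components=32).fit(X_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, knn.predict(pca.transform(X_te))))
```

With real recordings, the toy arrays would be replaced by preprocessed, windowed EEG/EMG features; the design point is that only the training split is bootstrapped, so evaluation stays on untouched trials.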
Pages: 13229-13242 (14 pages)