Emotion Recognition With Audio, Video, EEG, and EMG: A Dataset and Baseline Approaches

Cited: 35
Authors
Chen, Jin [1]
Ro, Tony [2,3,4]
Zhu, Zhigang [1,5]
Affiliations
[1] CUNY, Computer Science Department, New York, NY 10031, USA
[2] CUNY, Graduate Center, Program in Psychology, New York, NY 10016, USA
[3] CUNY, Graduate Center, Program in Biology, New York, NY 10016, USA
[4] CUNY, Graduate Center, Program in Cognitive Neuroscience, New York, NY 10016, USA
[5] CUNY, Graduate Center, Doctoral Program in Computer Science, New York, NY 10016, USA
Funding
US National Science Foundation
Keywords
Electroencephalography (EEG); Electromyography (EMG); Emotion recognition; Feature extraction; Videos; Support vector machines; Physiology; Data collection; Signal; LSTM
DOI
10.1109/ACCESS.2022.3146729
CLC Number
TP (Automation Technology, Computer Technology)
Discipline Code
0812
Abstract
This paper describes a new posed multimodal emotion dataset and compares human emotion classification based on four different modalities: audio, video, electromyography (EMG), and electroencephalography (EEG). Results are reported for several baseline approaches using various feature extraction techniques and machine learning algorithms. First, we collected a dataset from 11 human subjects expressing six basic emotions and one neutral emotion. We then extracted features from each modality using principal component analysis (PCA), autoencoders, convolutional networks, and mel-frequency cepstral coefficients (MFCC), some unique to individual modalities. A number of baseline models were then applied to compare classification performance in emotion recognition, including k-nearest neighbors (KNN), support vector machines (SVM), random forest, multilayer perceptron (MLP), long short-term memory (LSTM), and convolutional neural network (CNN) models. Our results show that bootstrapping the biosensor signals (i.e., EMG and EEG) can greatly increase emotion classification performance by reducing noise. For these bootstrapped biosensor signals, the best classification results were obtained with a traditional KNN, whereas audio and image sequences of human emotions were better classified with an LSTM.
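As a rough illustration of the audio pipeline the abstract outlines (MFCC features fed to a KNN baseline), here is a minimal Python sketch. The tone-plus-noise clips are synthetic stand-ins for the dataset's recordings, and the settings (13 MFCCs, mean/std pooling over frames, k=5) are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(y, sr, n_mfcc=13):
    """Summarize one audio clip as a fixed-length MFCC statistics vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)
    # Mean and std over time frames give a fixed-length clip descriptor.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-in clips (tones + noise); real use would load the dataset's
# audio files instead, e.g. with librosa.load(path, sr=None).
sr, rng = 16000, np.random.default_rng(0)
clips, labels = [], []
for label in range(7):                       # 6 basic emotions + neutral
    for _ in range(20):
        t = np.arange(sr) / sr
        clips.append(np.sin(2 * np.pi * (120 + 40 * label) * t)
                     + 0.3 * rng.standard_normal(sr))
        labels.append(label)

X = np.stack([mfcc_features(c, sr) for c in clips])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("toy audio KNN accuracy:", clf.score(X_te, y_te))
```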
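The abstract credits the largest gains to bootstrapping the EMG/EEG signals to reduce noise. One plausible reading, sketched below on synthetic epochs, is to resample same-class trials with replacement and average them into new, lower-noise training examples; the paper's exact bootstrapping procedure may differ, and every array and parameter here (k=8 averaged trials, 32 flattened features, etc.) is a hypothetical stand-in.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def bootstrap_average(trials, trial_labels, n_new, k=8):
    """Average k same-class trials drawn with replacement to build
    synthetic low-noise examples (one plausible bootstrapping scheme)."""
    X_new, y_new = [], []
    classes = np.unique(trial_labels)
    for _ in range(n_new):
        label = rng.choice(classes)
        idx = rng.choice(np.flatnonzero(trial_labels == label), size=k)
        X_new.append(trials[idx].mean(axis=0))  # averaging attenuates trial noise
        y_new.append(label)
    return np.stack(X_new), np.array(y_new)

# Synthetic stand-in epochs: 7 classes, 40 trials each, 32 features
# (a real pipeline would flatten channel-by-time EEG/EMG windows).
n_per, n_feat = 40, 32
labels = np.repeat(np.arange(7), n_per)
signal = np.eye(7)[labels] @ rng.standard_normal((7, n_feat))
trials = signal + 3.0 * rng.standard_normal((labels.size, n_feat))  # heavy noise

X_boot, y_boot = bootstrap_average(trials, labels, n_new=700)
X_tr, X_te, y_tr, y_te = train_test_split(trials, labels, stratify=labels,
                                          random_state=0)
raw = KNeighborsClassifier(5).fit(X_tr, y_tr).score(X_te, y_te)
boosted = KNeighborsClassifier(5).fit(X_boot, y_boot).score(X_te, y_te)
print(f"raw KNN {raw:.2f} vs bootstrapped KNN {boosted:.2f}")
```

On this toy data, training on the averaged resamples typically beats training on the raw noisy trials, which is consistent with the noise-reduction effect the abstract reports.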
Pages: 13229-13242 (14 pages)