Exploring Deep Learning Features for Automatic Classification of Human Emotion Using EEG Rhythms

Cited by: 66
Authors
Demir, Fatih [1 ]
Sobahi, Nebras [2 ]
Siuly, Siuly [3 ]
Sengur, Abdulkadir [1 ]
Affiliations
[1] Firat Univ, Fac Technol, Dept Elect & Elect Engn, TR-23119 Elazig, Turkey
[2] King Abdulaziz Univ, Dept Elect & Comp Engn, Jeddah 21589, Saudi Arabia
[3] Victoria Univ, Inst Sustainable Ind & Liveable Cities, Footscray, Vic 3011, Australia
Keywords
Electroencephalography; Feature extraction; Support vector machines; Erbium; Brain modeling; Continuous wavelet transforms; Filtering; EEG based emotion classification; EEG rhythms; CWT; deep features; pretrained CNN models; RECOGNITION;
DOI
10.1109/JSEN.2021.3070373
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Emotion recognition (ER) from electroencephalogram (EEG) signals is a challenging task due to the non-linear and non-stationary nature of EEG signals. Existing feature extraction methods cannot extract the deep, concealed characteristics of EEG signals from different layers for an efficient classification scheme, and it is also hard to select appropriate and effective feature extraction methods for different types of EEG data. Hence, this study develops an efficient deep-feature-extraction method to automatically classify people's emotional states. To discover reliable deep features, five deep convolutional neural network (CNN) models are considered: AlexNet, VGG16, ResNet50, SqueezeNet and MobileNetV2. Pre-processing, the Wavelet Transform (WT), and the Continuous Wavelet Transform (CWT) are employed to convert the EEG signals into EEG rhythm images; the five well-known pretrained CNN models are then used for feature extraction. Finally, the proposed method feeds the obtained features into a support vector machine (SVM), which performs binary classification along the valence and arousal dimensions. The DEAP dataset was used in the experimental work. The experimental results demonstrate that the AlexNet features with the Alpha rhythm produce the best accuracy score for valence discrimination (91.07% on channel Oz), and the MobileNetV2 features yield the highest accuracy score for arousal discrimination (98.93% with the Delta rhythm on channel C3).
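The signal-processing front end described in the abstract (isolating an EEG rhythm, then turning it into a time-frequency image via the CWT before CNN feature extraction) can be sketched as below. This is an illustrative sketch only, not the authors' implementation: it uses a synthetic trace instead of DEAP recordings, a naive NumPy Morlet CWT, and commonly cited (but here assumed) rhythm band boundaries; the pre-processing details, the pretrained-CNN feature extraction, and the SVM stage of the paper are not reproduced.

```python
import numpy as np

FS = 128  # Hz; the preprocessed DEAP signals are downsampled to 128 Hz

# Assumed rhythm band boundaries (Hz); exact cutoffs vary in the literature.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass_fft(x, lo, hi, fs=FS):
    """Isolate one EEG rhythm by zeroing FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

def morlet_cwt(x, freqs, fs=FS, w=6.0):
    """Naive Morlet CWT: one convolution per analysis frequency.
    Returns a (len(freqs), len(x)) scalogram magnitude image, i.e. the
    kind of 2-D "rhythm image" a pretrained CNN could take as input."""
    out = np.empty((len(freqs), len(x)))
    t = np.arange(-1.0, 1.0, 1.0 / fs)          # 2 s wavelet support
    for i, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                  # Gaussian envelope width for f
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.abs(wavelet).sum()         # crude amplitude normalization
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

# Synthetic 4 s "EEG" trace: a 10 Hz (alpha) plus a 20 Hz (beta) component.
t = np.arange(0, 4, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

alpha = bandpass_fft(eeg, *BANDS["alpha"])       # keep only the alpha rhythm
scalogram = morlet_cwt(alpha, freqs=np.arange(1, 41))  # 1-40 Hz image
peak_hz = 1 + scalogram.mean(axis=1).argmax()    # dominant frequency of image
print(scalogram.shape, peak_hz)
```

After band-limiting to alpha, only the 10 Hz component survives, so the scalogram's energy concentrates on the 10 Hz row; in the paper's pipeline this image (resized as needed) would be passed to a pretrained CNN, and the resulting features to an SVM.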
Pages: 14923-14930
Page count: 8
Related Papers
35 records in total
[21] Lerner, Jennifer S.; Li, Ye; Valdesolo, Piercarlo; Kassam, Karim S. Emotion and Decision Making [J]. ANNUAL REVIEW OF PSYCHOLOGY, 2015, 66: 799-823
[22] Li, Xian; Yan, Jian-Zhuo; Chen, Jian-Hui. Channel Division Based Multiple Classifiers Fusion for Emotion Recognition Using EEG Signals [J]. 2017 INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (IST 2017), 2017, 11
[23] Liu, Junxiu; Wu, Guopei; Luo, Yuling; Qiu, Senhui; Yang, Su; Li, Wei; Bi, Yifei. EEG-Based Emotion Classification Using a Deep Neural Network and Sparse Autoencoder [J]. FRONTIERS IN SYSTEMS NEUROSCIENCE, 2020, 14
[24] Polikar, R. The Wavelet Tutorial, 1996
[25] Rozgic, V. 2013 INT CONF ACOUST SPEE, 2013: 1286, DOI 10.1109/ICASSP.2013.6637858
[26] Sandler, Mark; Howard, Andrew; Zhu, Menglong; Zhmoginov, Andrey; Chen, Liang-Chieh. MobileNetV2: Inverted Residuals and Linear Bottlenecks [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018: 4510-4520
[27] Senguer, D.; Siuly, S. Efficient approach for EEG-based emotion recognition [J]. ELECTRONICS LETTERS, 2020, 56(25): 1361-1364
[28] Sengur, D. TURK J SCI TECHNOL, 2018, 13: 61
[29] Sengur, D. TURK J SCI TECHNOL, 2020, 15: 93
[30] Simonyan, K. arXiv, 2015, DOI arXiv:1409.1556