Electroencephalogram based face emotion recognition using multimodal fusion and 1-D convolution neural network (1D-CNN) classifier

Cited: 2
Authors
Alotaibi, Youseef [1 ]
Vuyyuru, Veera Ankalu [2]
Affiliations
[1] Umm Al Qura Univ, Coll Comp & Informat Syst, Dept Comp Sci, Mecca 21955, Saudi Arabia
[2] Koneru Lakshmaiah Educ Fdn, Dept Comp Sci & Engn, Visakhapatnam 522502, Andhra Pradesh, India
Source
AIMS MATHEMATICS | 2023, Vol. 8, No. 10
Keywords
electroencephalogram (EEG); emotion recognition; CNN; feature fusion; pre-processing; canonical correlation; FACIAL EXPRESSION; VIDEO; MODEL;
DOI
10.3934/math.20231169
Chinese Library Classification number
O29 [Applied Mathematics];
Subject classification code
070104;
Abstract
Recently, there has been increased interest in emotion recognition. It is widely used in many fields, including healthcare, education and human-computer interaction (HCI). Multimodal emotion recognition based on the fusion of several features is currently the subject of a growing body of research. To obtain superior classification performance, this work offers a deep learning model for multimodal emotion recognition based on the fusion of electroencephalogram (EEG) signals and facial expressions. First, face features are extracted from the facial expressions using a pre-trained convolution neural network (CNN). CNNs are also employed to acquire spatial features from the raw EEG signals; these CNNs use both regional and global convolution kernels to learn the characteristics of the left- and right-hemisphere channels as well as of all EEG channels together. After extraction, exponential canonical correlation analysis (ECCA) is used to combine the highly correlated features from the facial video frames and the EEG. A 1-D CNN classifier uses these fused features to identify emotions. To assess the effectiveness of the proposed model, experiments were run on the DEAP dataset. The Multi_Modal_1D-CNN achieves 98.9% accuracy, 93.2% precision, 89.3% recall, a 94.23% F1-score and a processing time of 7 seconds.
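The fusion step described in the abstract follows the general canonical-correlation pattern: project the two modality-specific feature matrices onto maximally correlated subspaces, then concatenate the projections before classification. As a rough, hedged illustration only (plain CCA in NumPy, not the paper's exponential ECCA variant; the function name `cca_fuse` and all shapes are hypothetical), the idea can be sketched as:

```python
import numpy as np

def cca_fuse(X, Y, n_components=2, eps=1e-8):
    """Fuse two feature matrices (samples x features) via plain CCA.

    Whitens each modality's covariance, takes the SVD of the whitened
    cross-covariance, projects both modalities onto the top canonical
    directions and concatenates the projections column-wise.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    # SVD of the whitened cross-covariance gives the canonical directions
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Cxx) @ U[:, :n_components]
    Wy = inv_sqrt(Cyy) @ Vt.T[:, :n_components]

    # Fused representation: canonical variates of both modalities, side by side
    return np.hstack([Xc @ Wx, Yc @ Wy])
```

In the paper's pipeline, `X` would hold the CNN face features and `Y` the CNN EEG features per trial; the concatenated canonical variates then feed the 1-D CNN classifier.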
Pages: 22984-23002
Number of pages: 19