Leveraging Tripartite Tier Convolutional Neural Network for Human Emotion Recognition: A Multimodal Data Approach

Times Cited: 1
Authors
Dharmichand, Saisanthiya [1 ]
Perumal, Supraja [1 ]
Affiliation
[1] SRM Inst Sci & Technol, Fac Engn & Technol, Dept Networking & Commun, Kattankulathur 603203, Tamil Nadu, India
Keywords
tripartite tier convolutional neural network; classification; emotion recognition; cognitive approach; electroencephalogram; multimodal data; face
DOI
10.18280/ts.400619
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In recent years, significant strides in deep learning and data fusion have enabled computers to comprehend, identify, and analyse human emotions with remarkable precision. However, emotion recognition that relies on externally observable biological features can be misleading, as individuals may consciously or unconsciously mask their true emotions. An objective and reliable alternative is therefore needed, one that draws on physiological markers. This paper introduces the Tripartite Tier Convolutional Neural Network (TTCNN), a deep learning model designed to extract and classify the most informative features for multimodal emotion recognition. Among the available physiological signals, this study prioritizes eye movement and electroencephalogram (EEG) data because of their strong potential to reflect emotional states. Feature extraction from the fused multimodal data yields a comprehensive representation that improves classification into the categories disgust, fear, sadness, happiness, and neutrality. This cognitive approach is evaluated on two established datasets, SEED and DEAP, where the TTCNN achieves classification accuracies of 95.84% and 87.01%, respectively, outperforming existing state-of-the-art methods. These results underscore the TTCNN's potential as a robust tool for human emotion recognition and advance computer-aided emotion analysis, opening up applications in areas such as psychology, healthcare, and human-computer interaction.
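To make the architecture described in the abstract concrete, here is a minimal sketch of a three-tier multimodal network in PyTorch: one tier convolves the EEG signal, a second tier encodes eye-movement features, and a third tier fuses the two branches and classifies into the five emotion categories. This is an illustration only, not the authors' published TTCNN; the layer sizes, the 62-channel EEG input (SEED's electrode count), the 33 eye-movement features, and the choice of PyTorch are all assumptions made for the example.

import torch
import torch.nn as nn

class TripartiteTierCNN(nn.Module):
    """Illustrative three-tier multimodal classifier (hypothetical sizes)."""
    def __init__(self, eeg_channels=62, eye_features=33, n_classes=5):
        super().__init__()
        # Tier 1: EEG branch, 1-D convolution over the time axis
        self.eeg_branch = nn.Sequential(
            nn.Conv1d(eeg_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> (batch, 64, 1)
        )
        # Tier 2: eye-movement branch, dense layer over handcrafted features
        self.eye_branch = nn.Sequential(
            nn.Linear(eye_features, 64),
            nn.ReLU(),
        )
        # Tier 3: fusion of both branches and classification into emotions
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, eeg, eye):
        e = self.eeg_branch(eeg).squeeze(-1)   # (batch, 64)
        o = self.eye_branch(eye)               # (batch, 64)
        return self.classifier(torch.cat([e, o], dim=1))

# Usage with random tensors shaped like a SEED-style batch (illustrative only)
model = TripartiteTierCNN()
logits = model(torch.randn(8, 62, 200), torch.randn(8, 33))
print(logits.shape)   # torch.Size([8, 5]), one logit per emotion class

Running the last three lines on random input prints torch.Size([8, 5]), i.e. one score per emotion class for each of the eight samples in the batch; in a real pipeline the two branches would be trained jointly on paired EEG and eye-movement recordings.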
Pages: 2565-2576
Number of pages: 12