Hybrid deep convolutional model-based emotion recognition using multiple physiological signals

Times Cited: 16
Author
Akbulut, Fatma Patlar [1]
Affiliation
[1] Istanbul Kultur Univ, Dept Comp Engn, Istanbul, Turkey
Keywords
Emotion recognition; deep learning; CNN; transfer learning; affective computing; EXPRESSION; SYSTEM; PERCEPTION; EXTRACTION; SERVICE
DOI
10.1080/10255842.2022.2032682
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Emotion recognition is increasingly used in the medical, advertising, and military domains, and recognizing cues of emotion from human behavior or physiological responses is an encouraging direction for the research community. However, extracting true characteristics from sensor data to understand emotions can be challenging because of the complex nature of these signals, so advanced feature engineering techniques are required for accurate signal recognition. This study presents a hybrid affective model that employs a transfer learning approach for emotion classification from large-frame sensor signals, using a genuine dataset of fused signals gathered from 30 participants with wearable sensor systems interconnected with mobile devices. The proposed approach applies several learning algorithms, including Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Recurrent Neural Network (RNN) models, as well as several shallow methods, directly to the sensor input, reducing the need for a traditional feature extraction process. The findings reveal that deep learning methods perform satisfactorily in affect recognition when a large number of frames is employed: the proposed hybrid deep model, with an average classification accuracy of 93%, outperforms a traditional neural network (overall accuracy of 54%) and standard deep learning approaches (overall accuracy of 76%). It is also more accurate than our previously proposed statistical autoregressive hidden Markov model (AR-HMM), which achieved 88.6% accuracy. Performance was assessed with several statistical measures (accuracy, precision, recall, F-measure, and RMSE).
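The abstract describes feeding large numbers of fixed-length signal frames from multichannel wearable sensors into the deep models. A minimal sketch of that framing step is shown below; the frame length, hop size, and channel count here are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def segment_frames(signal, frame_len, hop):
    """Split a (channels, samples) physiological recording into
    overlapping fixed-length frames suitable as CNN/LSTM input.
    frame_len and hop are illustrative choices, not the paper's."""
    channels, n_samples = signal.shape
    starts = range(0, n_samples - frame_len + 1, hop)
    # Stack one (channels, frame_len) window per start offset.
    return np.stack([signal[:, s:s + frame_len] for s in starts])

# Hypothetical example: 3 fused sensor channels, 1000 samples,
# 250-sample frames with 50% overlap.
x = np.random.randn(3, 1000)
frames = segment_frames(x, frame_len=250, hop=125)
# frames.shape -> (7, 3, 250): 7 frames of 3 channels x 250 samples
```

Each resulting frame can then be treated as one training example, which is how frame count (rather than recording count) becomes the effective dataset size for the deep models.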
Pages: 1678-1690
Page count: 13