Research on emotion recognition methods based on multi-modal physiological signal feature fusion

Cited by: 0
Authors
Zhang, Zhiwen [1 ,2 ]
Yu, Naigong [1 ,2 ]
Bian, Yan [3 ,4 ]
Yan, Jinhan [1 ,2 ]
Affiliations
[1] School of Information Science and Technology, Beijing University of Technology, 100124, Beijing
[2] Beijing Key Laboratory of Computational Intelligence and Intelligent Systems, 100124, Beijing
[3] School of Automation and Electrical Engineering, Tianjin Polytechnic Normal University, 300222, Tianjin
[4] Tianjin Key Laboratory of Information Sensing and Intelligent Control, 300222, Tianjin
Source
Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering | 2025, Vol. 42, No. 1
Keywords
Electrodermal activity; Electroencephalogram; Electromyogram; Emotion; Feature fusion; Multi-modality
DOI
10.7507/1001-5515.202401020
Abstract
Emotion classification and recognition is a crucial area in emotional computing. Physiological signals, such as electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied for integrating the signals, and both support vector machine (SVM) and extreme learning machine (ELM) were used for classification. The results showed that the classification accuracy was highest when the fusion weights were set to EEG 0.7, EMG 0.15, and EDA 0.15, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented an improvement of 5.81% and 2.95% compared to using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.
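The feature-weighted fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each modality yields a per-sample feature vector, scales each vector by its modality weight (EEG 0.7, EMG 0.15, EDA 0.15, as reported), and concatenates the results before classification. Feature dimensions and the random data are placeholders.

```python
import numpy as np

def fuse_features(eeg, emg, eda, weights=(0.7, 0.15, 0.15)):
    """Weight each modality's feature vectors, then concatenate them.

    eeg, emg, eda: arrays of shape (n_samples, n_features_modality).
    weights: fusion weights per modality (paper's best setting by default).
    """
    w_eeg, w_emg, w_eda = weights
    return np.concatenate([w_eeg * eeg, w_emg * emg, w_eda * eda], axis=-1)

# Toy example: 10 samples with arbitrary per-modality feature sizes.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(10, 32))
emg = rng.normal(size=(10, 8))
eda = rng.normal(size=(10, 4))

fused = fuse_features(eeg, emg, eda)
print(fused.shape)  # (10, 44): 32 + 8 + 4 fused features per sample
```

The fused matrix would then be passed to a classifier such as an SVM or ELM; those training steps are omitted here since the abstract does not specify their configuration.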
Pages: 17-23
Page count: 6