Emotion recognition from multichannel EEG signals based on low-rank subspace self-representation features

Cited by: 2
Authors
Gao, Yunyuan [1 ]
Xue, Yunfeng [1 ]
Gao, Jian [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Coll Automat, Hangzhou, Peoples R China
[2] Hangzhou Mingzhou Naokang Rehabil Hosp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-rank subspace; RPCA; Feature extraction; Tucker dimensionality reduction; Emotion recognition;
DOI
10.1016/j.bspc.2024.106877
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
In recent years, emotion recognition based on electroencephalogram (EEG) signals has become a research focus in human-computer interaction (HCI), but EEG feature extraction and noise suppression remain challenging. In this paper, a novel robust low-rank subspace self-representation (RLSR) of EEG is developed for emotion recognition. Instead of classical time-frequency EEG features, a data-driven EEG self-representation in a low-rank subspace is extracted to characterize emotion. Robust Principal Component Analysis (RPCA) is incorporated to separate the noise component while solving the self-representation, and the resulting features and noise suppression improve both the accuracy and the robustness of the results. To fully exploit the information in different EEG frequency bands, Tucker-decomposition-based dimensionality reduction is introduced. Experiments on the public DEAP dataset show that the proposed method achieves average accuracies of 93.04% and 93.13% for binary classification of valence and arousal, respectively, and 88.82% for four-class classification.
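This record does not include the paper's formulation; as a minimal sketch, assuming the standard low-rank representation (LRR) model that the abstract's description suggests, the self-representation with RPCA-style noise separation could be written as follows, where X is the EEG feature matrix with one sample per column, Z the self-representation coefficients, E the separated noise, and lambda a trade-off parameter (all assumed symbols, not the authors' notation):

\min_{Z, E} \; \|Z\|_{*} + \lambda \|E\|_{1} \quad \text{s.t.} \quad X = XZ + E

Here \|Z\|_{*} is the nuclear norm enforcing a low-rank coefficient matrix and \|E\|_{1} absorbs sparse noise in the spirit of RPCA; features derived from Z would then be fed to a classifier. Likewise, a Tucker-based reduction of a band-channel-feature tensor \mathcal{X} would plausibly take the form \mathcal{X} \approx \mathcal{G} \times_{1} U^{(1)} \times_{2} U^{(2)} \times_{3} U^{(3)}, keeping only the leading factors of each mode; this is a sketch of the generic technique, not the paper's exact procedure.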
Pages: 9