SSTD: A Novel Spatio-Temporal Demographic Network for EEG-Based Emotion Recognition

Cited by: 14
Authors
Li, Rui [1 ]
Ren, Chao [1 ]
Li, Chen [1 ]
Zhao, Nan [1 ]
Lu, Dawei [2 ]
Zhang, Xiaowei [1 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Gansu Prov Key Lab Wearable Comp, Lanzhou 730000, Peoples R China
[2] Beijing Inst Technol, Inst Engn Med, Beijing 100811, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Covariance matrices; Electroencephalography; Emotion recognition; Brain modeling; Feature extraction; Data preprocessing; Measurement; demographic; electroencephalography (EEG); emotion recognition; gated recurrent unit (GRU); Riemannian manifold; spatio-temporal; symmetric positive definite matrix network (SPDNet); brain-computer interfaces; primer
DOI
10.1109/TCSS.2022.3188891
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Emotion recognition is key to making machines more intelligent. This study proposes a novel single-link end-to-end spatio-temporal demographic network (SSTD) that fuses spatial, temporal, and demographic information for electroencephalography (EEG)-based emotion recognition. In the SSTD model, an adaptive time window based on single-link hierarchical clustering under Riemannian metrics is used during data preprocessing to address individual differences. The preprocessed EEG data are then fed into a gated recurrent unit (GRU) network to extract high-level time-domain features, while the EEG covariance matrices are fed into a symmetric positive definite matrix network (SPDNet) to extract high-level spatial features. Given the correlation between EEG signals and individual demographic information, gender and age are integrated into the spatio-temporal model, yielding more effective high-level features for EEG-based emotion recognition. Finally, extensive comparative experiments were conducted on two public datasets, DEAP and DREAMER. The average accuracies for valence and arousal are 68.28% and 71.48% on DEAP, and 76.81% and 81.64% on DREAMER, respectively. These results show that the SSTD model achieves excellent recognition performance.
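Note: the abstract describes a three-stage pipeline (Riemannian-metric single-link clustering for adaptive time windows, a GRU temporal branch plus an SPDNet spatial branch over covariance matrices, and demographic fusion). The sketch below is a minimal illustration of that pipeline, not the authors' implementation: it assumes the log-Euclidean metric (Arsigny et al., 2006) as the Riemannian metric, replaces SPDNet's BiMap/ReEig/LogEig layers with a simple log-map plus linear layer, and the names shrunk_covariance, single_link_windows, and FusionNet are invented for illustration.

```python
# Hedged sketch of the SSTD pipeline described in the abstract.
# Assumptions (not from the paper): log-Euclidean metric as the Riemannian
# metric, a log-map + linear layer standing in for SPDNet, and all names below.
import numpy as np
import torch
import torch.nn as nn
from scipy.linalg import logm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform


def shrunk_covariance(seg, eps=1e-3):
    """Covariance of one EEG segment (channels x samples), shrunk toward
    the identity so it is strictly positive definite."""
    c = np.cov(seg)
    return c + eps * (np.trace(c) / len(c)) * np.eye(len(c))


def log_euclidean_dist(a, b):
    """Log-Euclidean distance between SPD matrices (Arsigny et al., 2006):
    Frobenius norm of logm(A) - logm(B)."""
    return np.linalg.norm(np.real(logm(a)) - np.real(logm(b)), "fro")


def single_link_windows(segments, n_clusters=4):
    """Adaptive time windows: single-link hierarchical clustering of EEG
    segments under the log-Euclidean metric on their covariance matrices."""
    covs = [shrunk_covariance(s) for s in segments]
    n = len(covs)
    dmat = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dmat[i, j] = dmat[j, i] = log_euclidean_dist(covs[i], covs[j])
    tree = linkage(squareform(dmat), method="single")
    return fcluster(tree, t=n_clusters, criterion="maxclust")


class FusionNet(nn.Module):
    """Temporal (GRU) + spatial (vectorized log-covariance) + demographic
    (gender, age) branches, concatenated before a linear classifier."""

    def __init__(self, n_channels=32, hidden=64, n_demo=2, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        spd_dim = n_channels * (n_channels + 1) // 2  # upper triangle of logm(cov)
        self.spatial = nn.Sequential(nn.Linear(spd_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden + n_demo, n_classes)

    def forward(self, eeg, logcov_vec, demo):
        _, h = self.gru(eeg)          # temporal branch: last GRU hidden state
        s = self.spatial(logcov_vec)  # spatial branch: flattened log-covariance
        return self.head(torch.cat([h[-1], s, demo], dim=1))
```

In use, eeg would be a (batch, time, channels) tensor, logcov_vec the upper-triangular entries of the matrix logarithm of each segment's covariance, and demo a (batch, 2) tensor holding gender and age.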
Pages: 376-387
Page count: 12