Emotion recognition using hierarchical spatial-temporal learning transformer from regional to global brain

Cited: 4
Authors
Cheng, Cheng [1 ]
Liu, Wenzhe [2 ]
Feng, Lin [1 ,3 ]
Jia, Ziyu [4 ]
Affiliations
[1] Dalian Univ Technol, Dept Comp Sci & Technol, Dalian, Peoples R China
[2] Huzhou Univ, Sch Informat Engn, Huzhou, Peoples R China
[3] Dalian Minzu Univ, Sch Informat & Commun Engn, Dalian, Peoples R China
[4] Univ Chinese Acad Sci, Chinese Acad Sci, Brainnetome Ctr, Inst Automat, Beijing, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Emotion recognition; Electroencephalogram (EEG); Transformer; Spatiotemporal features; EEG; FUSION;
DOI
10.1016/j.neunet.2024.106624
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Emotion recognition is an essential but challenging task in human-computer interaction systems due to the distinctive spatial structures and dynamic temporal dependencies associated with each emotion. However, current approaches fail to accurately capture the intricate effects of electroencephalogram (EEG) signals across different brain regions on emotion recognition. Therefore, this paper designs a transformer-based method, denoted R2G-STLT, which relies on a spatial-temporal transformer encoder with regional-to-global hierarchical learning that learns representative spatiotemporal features from the electrode level to the brain-region level. The regional spatial-temporal transformer (RST-Trans) encoder captures spatial information and contextual dependencies at the electrode level in order to learn regional spatiotemporal features. The global spatial-temporal transformer (GST-Trans) encoder is then utilized to extract reliable global spatiotemporal features, reflecting the impact of various brain regions on emotion recognition tasks. Moreover, a multi-head attention mechanism is incorporated into the GST-Trans encoder to capture long-range spatial-temporal information among brain regions. Finally, subject-independent experiments are conducted on each frequency band of the DEAP, SEED, and SEED-IV datasets to assess the performance of the proposed model. Results indicate that R2G-STLT surpasses several state-of-the-art approaches.
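The regional-to-global hierarchy described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it shows only the spatial attention path (electrode-level attention within each brain region, mean-pooled into region tokens, followed by attention across regions), omits the temporal stream and all training machinery, and uses random matrices in place of learned projection weights. The function names (`multi_head_self_attention`, `regional_to_global`) and the region grouping are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    # x: (tokens, d_model); random projections stand in for learned weights
    t, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads
    wq, wk, wv, wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = x @ wq, x @ wk, x @ wv
    # split into heads: (num_heads, tokens, dh)
    q, k, v = (m.reshape(t, num_heads, dh).transpose(1, 0, 2) for m in (q, k, v))
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    out = (attn @ v).transpose(1, 0, 2).reshape(t, d)   # merge heads
    return out @ wo

def regional_to_global(eeg_feats, regions, num_heads=4, seed=0):
    # eeg_feats: (n_electrodes, d_model) per-electrode feature vectors
    # regions: list of electrode-index lists, one per brain region
    rng = np.random.default_rng(seed)
    region_tokens = []
    for idx in regions:
        # regional stage: attention among the electrodes of one region,
        # mean-pooled into a single region-level token
        enc = multi_head_self_attention(eeg_feats[idx], num_heads, rng)
        region_tokens.append(enc.mean(axis=0))
    region_tokens = np.stack(region_tokens)             # (n_regions, d_model)
    # global stage: attention among region tokens captures cross-region
    # (long-range) interactions
    return multi_head_self_attention(region_tokens, num_heads, rng)
```

A call with, say, 8 electrodes grouped into 3 regions returns one refined token per region, which a downstream classifier head would map to emotion labels.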
Pages: 12
Related papers
50 records in total
  • [41] Prompt Learning-based Pretrained Visual Transformer for Emotion Recognition Using Electroencephalogram
    Cen, Haoming
    Gu, Tingxuan
    Zhao, Qinglin
    2024 6TH INTERNATIONAL CONFERENCE ON DATA-DRIVEN OPTIMIZATION OF COMPLEX SYSTEMS, DOCS 2024, 2024, : 812 - 817
  • [42] Trajectory Prediction for Autonomous Driving Using Spatial-Temporal Graph Attention Transformer
    Zhang, Kunpeng
    Feng, Xiaoliang
    Wu, Lan
    He, Zhengbing
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 22343 - 22353
  • [43] ASTDF-Net: Attention-Based Spatial-Temporal Dual-Stream Fusion Network for EEG-Based Emotion Recognition
    Gong, Peiliang
    Jia, Ziyu
    Wang, Pengpai
    Zhou, Yueying
    Zhang, Daoqiang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 883 - 892
  • [44] UApredictor: Urban Anomaly Prediction from Spatial-Temporal Data using Graph Transformer Neural Network
    Bhumika
    Das, Debasis
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [45] Human Brain Waves Study Using EEG and Deep Learning for Emotion Recognition
    Priyadarshani, Muskan
    Kumar, Pushpendra
    Babulal, Kanojia Sindhuben
    Rajput, Dharmendra Singh
    Patel, Harshita
    IEEE ACCESS, 2024, 12 : 101842 - 101850
  • [46] Classifying ASD based on time-series fMRI using spatial-temporal transformer
    Deng, Xin
    Zhang, Jiahao
    Liu, Rui
    Liu, Ke
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 151
  • [47] Transformer-based Self-supervised Representation Learning for Emotion Recognition Using Bio-signal Feature Fusion
    Sawant, Shrutika S.
    Erick, F. X.
    Arora, Pulkit
    Pahl, Jaspar
    Foltyn, Andreas
    Holzer, Nina
    Gotz, Theresa
    2023 11TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS, ACIIW, 2023,
  • [48] Spatial-temporal features-based EEG emotion recognition using graph convolution network and long short-term memory
    Zheng, Fa
    Hu, Bin
    Zheng, Xiangwei
    Zhang, Yuang
    PHYSIOLOGICAL MEASUREMENT, 2023, 44 (06)
  • [49] DenseNet-Transformer: A deep learning method for spatial-temporal traffic prediction in optical fronthaul network
    Qin, Xin
    Zhu, Wenwu
    Hu, Qian
    Zhou, Zexi
    Ding, Yi
    Gao, Xia
    Gu, Rentao
    COMPUTER NETWORKS, 2024, 253
  • [50] A multiple frequency bands parallel spatial-temporal 3D deep residual learning framework for EEG-based emotion recognition
    Miao, Minmin
    Zheng, Longxin
    Xu, Baoguo
    Yang, Zhong
    Hu, Wenjun
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 79